On 7/10/2016 5:24 PM, Clifford Heath wrote:
> On 10/07/16 19:11, Don Y wrote:
>> ... handled from a central controller...
>
> Why should the control be centralised? I can think of many
> situations where you need different things controlled from
> different places.
My original quote:
"This is important as it shows how the multiple displays, etc.
can be handled from a central controller without ever burdening
any individual task with knowledge of more than a *single*
display, etc."
You appear to be conflating "control" with "controller".
The control *algorithm* resides in a single CPU. Why shouldn't it?
Distributing the algorithm adds a fair bit of complexity for
very little gain (in this case -- the "process" isn't that taxing
that it needs lots of MIPS).
And, the "displays, etc." can be HANDLED from that (single) controller.
No mention of how many or where they are located -- just that they are
"multiple" and *controlled* (i.e., DRIVEN) from the central controller.
The *I/O's* are where the inefficiency traditionally resides.
Typically, one or more (24") equipment racks are located in a
"control room". The racks house the "controller" and a boatload
of (costly, standardized) I/O interfaces. The I/O interfaces are
tethered to the sensors and actuators in the field by *miles*
of (typ) #18-20AWG wire run through cable trays to connect to the
sensors located 100+ "electrical feet" away from the controller.
(100' of wire can travel a surprisingly SHORT distance when it has to
be "routed" in a non-point-to-point fashion!)
I.e., 25 I/O's consume nearly a mile of conductors (a single 100' pair
for each).
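The arithmetic is easy to sanity-check; a throwaway sketch (the figures are illustrative, not from any particular installation):

```python
# Rough conductor-length arithmetic for home-run field wiring.
# Each I/O point gets a twisted pair run point-to-point back to the rack.
run_length_ft = 100      # "electrical feet" from sensor to rack
conductors_per_io = 2    # one pair per I/O point
io_count = 25

total_ft = io_count * conductors_per_io * run_length_ft
print(total_ft)          # 5000 ft of conductor vs. 5280 ft in a statute mile
```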
All of these I/O's terminate on large barrier/terminal strips/DIN rails
in the back of the equipment rack(s). From there, travelers connect
the strips to the actual I/O interfaces which, in turn, are connected
to the controller/display/UI/etc. (wiring the interfaces to the terminal
strips can take a work-week (!) -- noting that each conductor must be
individually labeled to ensure each end can be returned to its intended
location if ever disconnected, dressed in an appropriate harness, etc.
And, you haven't BEGUN to address the field wiring!)
[You aren't "mass producing" these equipment racks as they tend to
have their hardware "tuned" to the particular installation. The number
and types of each I/O can vary from one installation to the next.
Even within a given "facility".]
The equipment rack becomes a piece of furniture. It can't practically
be moved due to the girth of its field wiring harness. And, the labor
involved in MOVING those terminations to a new location! This tends to
make the "control room" application specific. Or, cause the control
room to migrate into the physical process space (which can have
some advantages -- but at some cost!)
Some of the I/O's will not be capable of driving long cable runs.
So, often "black box" signal conditioners are added *at* the (remote)
sensors/actuators -- just to get the signals to/from the equipment
rack.
All of these little boxes (in the field and in the equipment rack)
tend to have idiot lights to give you a reassurance that they are
powered up, working properly (e.g., if an input "goes open", you want
to see some indication of that cuz the controller may have no way of
knowing that "for fact" -- a 4-20mA sender can report open/short but
only if there is a data path to the controller for that information!).
To combat this wiring nightmare, move the signal conditioning AND
data acquisition *into* the field, proximate to the sensors and
actuators to which they interface. Send "messages" back to the central
controller instead of the actual physical *signals* involved.
The controller then needs no I/O's -- other than user interface,
persistent store and communications link. No more equipment rack.
No more miles of #18AWG -- strung by union electricians, etc. No
more "buffers" along the way. Just lots of little "I/O servers"
tied to the sensors (that you had to purchase, regardless!)
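What an "I/O server" ships upstream is a *measurement*, not a voltage on a mile of copper. A minimal sketch of the idea -- the message format, node names, and sensor values here are all invented for illustration:

```python
import json

def sample_rtd_ohms():
    # Stand-in for reading the local ADC across the RTD; fixed value here.
    return 115.4

def rtd_ohms_to_celsius(ohms, r0=100.0, alpha=0.00385):
    # Linearized PT100 conversion (Callendar-Van Dusen ignored for brevity).
    return (ohms - r0) / (r0 * alpha)

def make_report(node, channel):
    # The node reports engineering units; signal conditioning and data
    # acquisition happen *at* the sensor, not back in an equipment rack.
    temp_c = rtd_ohms_to_celsius(sample_rtd_ohms())
    return json.dumps({"node": node, "channel": channel,
                       "value": round(temp_c, 1), "units": "C"})

msg = make_report("node5", "inlet_air_temp")
```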
[Nothing "new" here, either. There are indu$trial control bu$$e$
that do the$e $ort$ of thing$]
Typically, a "Supervisor" is responsible for monitoring the process;
dealing with exceptions that the controller can't practically address.
An "Operator" is often available as there are times when someone
needs to actually put eyes/hands on an actuator/sensor in the field
(while someone else continues to shepherd the process).
[Sometimes, Operators/Supervisors are shared among simultaneously
running processes]
If, for example, the Supervisor notices something amiss (or, is informed
of something wonky by the controller), *he* has to figure out how to
resolve the problem -- usually without halting the process!
[Doing so can cost you 4-8 hours of production; not something The
Boss would like, especially if the problem turned out to be a clogged
pitot tube, gunked up pump impeller, etc. -- things that could be
fixed or replaced (swapped out) without compromising the "process"]
He can run whatever diagnostics the System Designer gave him.
And, he can look at the idiot lights ("check engine") inside
the equipment rack to try to get a feel for where the problem
might lie.
But, often, he'll need to dispatch an Operator to check on
some physical aspect of the process. Something that can't be
inferred from an observation of idiot lights -- nor rectified
without "hands on":
"Bill, why don't you climb up to the inlet air handler
in the mezzanine and see why I'm getting these low temp
readings. Maybe we've got a bad RTD up there..."
>> without ever burdening
>> any individual task with knowledge of more than a *single*
>> display, etc.]
>
> In many cases, more than a single display is desirable.
> For example, most A/V systems have their own display,
> but can also be controlled from a phone. (note that
> the phone sends commands, it's not actually a central
> controller).
"without ever burdening any INDIVIDUAL task with knowledge
of MORE THAN A *SINGLE* display, etc."
Being able to partition the namespace makes it intuitive to
exploit independent "machines", each with a particular set
of responsibilities (incl UI's). Just like creating a
"process" with (stdin, stdout, stderr) defined by its PARENT,
you can create a namespace appropriate for a particular
undertaking and pass that to the task responsible for that
undertaking. The task then doesn't have to deal with
identifying its "display", "keyboard", etc.
E.g., update_display() vs. inlet_air_temperature_control_loop(),
pump_flow_rate_control_loop(), etc.
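The parent-builds-the-namespace idea can be sketched in a few lines; the names (`/display`, node paths) are illustrative, not any real system's API:

```python
# A namespace is just a per-task mapping from generic names to resources.
# The parent binds names before spawning; the task never learns which
# physical node it was actually handed.
system_namespace = {
    "/node5/framebuffer": "<fb @ node5>",
    "/console/framebuffer": "<fb @ console>",
    "/node5/rtd0": "<rtd @ node5>",
}

def make_task_namespace(bindings):
    # bindings: generic name -> fully-qualified name in parent's namespace
    return {generic: system_namespace[actual]
            for generic, actual in bindings.items()}

# The inlet-air task sees only *its* display and *its* sensor:
ns = make_task_namespace({"/display": "/node5/framebuffer",
                          "/sensor": "/node5/rtd0"})
```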
[All of this, of course, is "obvious" -- with the caveat that
the application hasn't just been partitioned into smaller
"tasks" but, rather, that the address space, name space,
communication space, etc. have also been appropriately decomposed.
It's AS IF there were a bunch of individual co-operating
machines working on the problem, each ISOLATED from each other
except for very visible communication paths -- not just a set
of "tasks" (which may or may not be truly isolated).]
As adding yet another "machine" is trivial, there are natural
consequences that can be exploited in the design!
Note that each of these "black boxes" in the field can't *just*
be a black box; too much information would be hidden within that
would have a dramatic impact on the operation and troubleshooting
of the "process" (application).
Whereas the signal conditioning boxes originally had "idiot
lights", these boxes can have comparable function. But, these
can have a variety of I/O's instead of some "store bought"
notion of how many of which type SHOULD be supported in a
single "box". E.g., a 4-20mA sender for *one* process variable.
A single "check engine" light would be useless. Note that the
signal conditioners in the equipment rack had the advantage that
the "system console" was nearby so a user could *probe* some
wiring and correlate that to reported conditions on that display.
In the remote case, the console isn't (easily) accessible! Unless
you have the equivalent of a "remote display" (and a means by
which it can communicate with the REAL console).
And, you want a display that can be reasonably generic -- not tied
to a particular type (or mix!) of I/O's. I.e., a digital input
conveys different information than an analog one. An LED might
suffice to convey the sensed state of an input (or commanded of an
output) but would be ineffective at conveying an analog reading
or setting.
So, add a "real" display -- though not a "full featured" display
(those are more costly *and* aren't typically used "in operation"
for anything more than signalling faults, etc. -- don't piss
away monies needlessly!)
And, in keeping with the leanness of the I/O servers, put the
*minimum* amount of support in the node for the display. No
need to render fonts, draw graphic primitives, etc. Just
export the framebuffer as a resource. Let a task running in
the central controller scribble on /node5/framebuffer just
as easily as it would on /console/framebuffer! NOTHING "extra"
to support that capability!
We're not trying to play full motion video so the bandwidth
requirements are insignificant.
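Once the framebuffer is exported as a plain resource, "painting the remote display" is just a write through the task's namespace. A toy sketch, with in-memory buffers standing in for real framebuffers:

```python
import io

# Each "framebuffer" is just a writable resource; where it physically
# lives (local console vs. remote node) is invisible to the writer.
framebuffers = {"/console/framebuffer": io.StringIO(),
                "/node5/framebuffer": io.StringIO()}

def paint(path, text):
    # A controller-side task scribbles on whichever framebuffer its
    # namespace names -- no fonts or graphic primitives in the remote node.
    fb = framebuffers[path]
    fb.seek(0)
    fb.truncate()
    fb.write(text)

paint("/node5/framebuffer", "OK")
paint("/console/framebuffer", "All loops nominal")
```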
[Nothing new *here*, either! /cf./ Sun Ray, Pano Box, etc.]
Likewise, export an interface to any "user input" devices
(even if it's just a couple of "soft buttons" alongside the
display surface!) that reside on the remote(s). Again,
NOTHING extra to support them that isn't already needed
elsewhere in the design!
During normal/nominal operation, the display can indicate
"OK" -- or whatever -- to summarize that everything that
this node handles is operating properly. Or, "ERROR"
if a fault is discovered. In each case, a task IN THE
CENTRAL CONTROLLER is painting that "image" into the
display. The local node has no idea what it *means*!
Because the *controller* understands the users, it can opt to
paint informative messages on the remote display (as well
as on the system console):
"Inlet air temperature low. Test pin #6"
(because the controller knows how each I/O is wired to each
particular remote device along with its "application specific"
name -- not just "analog input #2")
As this STILL underutilizes the display's abilities, that task
(the one that has its "/display" bound to *this* node's
"./framebuffer") that is responsible for updating *this*
display -- and, likely, no others! -- might, instead, normally
opt to display the current values for all sensors and settings
for all actuators/effectors served by this node. Possibly in
a predefined sequence if display real-estate is scarce. And,
in units of measure that are appropriate for the application!
"Inlet air temperature: 38C"
"Inlet air moisture: 35g/m^3"
"Blower speed: 200RPM"
"Outlet air temperature: 30C"
...
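That controller-side display task amounts to a formatting loop over whatever points this node serves, using application-level names and units. A sketch (all values made up):

```python
readings = [
    ("Inlet air temperature", 38, "C"),
    ("Inlet air moisture", 35, "g/m^3"),
    ("Blower speed", 200, "RPM"),
    ("Outlet air temperature", 30, "C"),
]

def render_status(points):
    # One line per process variable, in engineering units -- the remote
    # node just displays the resulting text; it never interprets it.
    return "\n".join(f"{name}: {value}{units}"
                     for name, value, units in points)

screen = render_status(readings)
```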
As such, an Operator nearby (by coincidence or having been
explicitly dispatched by the Supervisor) can see these values
and reassure himself that all is well. Or, get advanced
warning of a pending fault before it becomes enough of
a problem for the controller to require remedial action.
With the controlling task monitoring the user INPUT controls
that have been exported by that node, the Operator can interact
with that task -- though on a more limited scale than possible
from the system console (because this is not as "rich" a
resource as that!).
When troubleshooting a "low temperature" condition on an inlet
air sensor, the Operator might *direct* the task executing on the
central controller (but using the remote's user i/f!) to:
- !display the current temperature being sensed by "this" RTD
- "Hmmm... seems really low!"
- (cautiously puts hand on the physical mechanism in question...)
- "Yeah, it *is* really cool! So, maybe it's not a temperature
sensor fault but, rather, the heating unit in the air handler
located up on the mezzanine might be misbehaving!"
- !display the current setting/status for the heating unit
- "Hmm... it claims to be off. So, no wonder this mechanism is cool!"
- !command heating unit ON
- (twiddles thumbs for a while)
- "Nope, mechanism is still ice cold. Asking the heating unit to
report its status *claims* everything is OK. I guess I'll have
to go PHYSICALLY check the heating unit..."
- checks wiring, probes for voltages on the I/O connectors
- "Hmmm... no power, here. Either this contactor/OPTO22 has failed
or there's a tripped breaker, somewhere..."
- now, using a *different* UI (proximal to the "distant" air handler)
he continues interacting with the central controller's "Diagnostic"
process to further resolve THAT problem
Note that the user doesn't have to keep track of *which* heater to
command "on" ("Hmmm... is it heater #3? Or, #4?") because the
diagnostic task that is presently bound to that UI has been designed
with that context in mind! It's *obvious*, to it (and the parent
that spawned it AFTER creating the namespace for it to use) that
the only heater that makes sense is the one that is associated with
that *control* task -- and, now, *diagnostic* task!
Each of the displays involved (reaction vessel air sensor, inlet air
handler and system console) operate independently of each other -- and
the actions initiated by the user at each are subjected to constraints
that the *process* might be required to impose ("Sorry, I can't turn
the heater on, now, because a dehydration process is active")
>> The takeaways/executive summary:
>> - resources appear in namespaces
>
> Good idea. Not a new idea.
>
>> - any new INDEPENDANT namespace can be constructed by binding
>> names from an existing namespace to NEW names in the new
>> namespace
>> - a resource may appear in multiple namespaces concurrently
>> and with different names
>
> Good ideas. Not new ideas.
*NONE* of these are new ideas!
Display servers have been around for 35+ years. I'd imagine it's
not a stretch from there to generic "I/O servers" (i.e., a temperature
server, a hygrometer server, a motor server, a manometer server, etc.)
Process containers/protected namespaces for probably a decade+ more
than that. Ditto multitasking (though "threads" post-date processes).
Naming resources in a UNIFIED namespace eventually gave way to
*isolated*/protected namespaces (in much the same way that UNIFIED
address spaces were broken into *isolated*/disjoint protection domains).
Transparent support for remote resources in the "local" namespace
(i.e., the developer doesn't know if "/uart" is a local resource
or a *remote* one -- let alone how to "discover" it!)
Language support for IPC/RPC/RMI without all the low-level related
crud (i.e., "channel <- message" instead of "manually" setting
up a connection/socket, resolving addresses/ports, crafting an IDL
and IDL compiler, marshalling arguments, etc.)
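The "channel <- message" flavor of IPC -- send without hand-rolled sockets, address resolution, or an IDL compiler -- can be mimicked with an in-process queue. A sketch only; the real payoff is the language/runtime doing this transparently across *nodes*:

```python
import queue

# A "channel" hides all transport details; sender and receiver
# see only put/get, never addresses, ports, or marshalling.
channel = queue.Queue()

def send(ch, message):
    # In a real system this would marshal the message and route it to a
    # possibly-remote endpoint; here it's just an enqueue.
    ch.put(message)

def receive(ch):
    return ch.get()

send(channel, {"cmd": "heater", "state": "ON"})
reply = receive(channel)
```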
All *OLD* technologies!
Yet, note how few designs avail themselves of these mechanisms! As if
"multitasking" was the only tool that could help design simpler, more
robust/maintainable systems. :-/
[And, the "excuse" that these mechanisms are "expensive" is a dodge!
I've been steadily migrating them to smaller and smaller implementations
over the past 15+ years.]
>> The biggest problem lies in the use of a central controller in the
>> implementation.
>
> So drop it. Allow devices to export their control interfaces
> (in a discoverable way, see below) and allow other devices
> (plural) to send commands to those.
Doesn't make sense to have the remotes tell the system what they
are and how they are used. The *system* has been designed with
a specific set of I/O's and requirements in mind. The remotes
have been SELECTED to fulfill those needs.
If the system could "discover" an extra air handling unit, it
wouldn't know how to *use* it! (where are its ducts plumbed in
relationship to the process air flow? what ROLE had the system
designers envisioned for this AHU? Is it to bolster control
of temperature? moisture? Reduce the static pressure on an
upstream AHU? etc)
A remote simply indicates its presence (MAC) on POST. The system
consults its configuration management subsystem to "give meaning"
to each of these devices -- as well as verify that those that
are required are, in fact, present.
[You don't just "install" a node; you have to "introduce" it to
the CM subsystem so the system as a whole knows how it will be
using it]
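The configuration-management step can be sketched as a table keyed by MAC; the MACs and role names here are invented:

```python
# CM database: MAC -> role, assigned by the system designer when the
# node is "introduced" to the CM subsystem.
cm_registry = {
    "00:1a:2b:3c:4d:05": "inlet_air_io_server",
    "00:1a:2b:3c:4d:09": "reaction_vessel_io_server",
}
required_roles = {"inlet_air_io_server", "reaction_vessel_io_server"}

def roles_present(announced_macs):
    # Map POST announcements to meanings; unknown MACs are ignored --
    # the system doesn't try to "discover" what they might be for.
    return {cm_registry[mac] for mac in announced_macs
            if mac in cm_registry}

def missing_roles(announced_macs):
    return required_roles - roles_present(announced_macs)

# Only one known node announced itself; an uninstalled node is detected:
absent = missing_roles(["00:1a:2b:3c:4d:05", "de:ad:be:ef:00:00"])
```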
>> THE PROBLEM
>> The real issue lies with the user interface devices
>
> No. That's easily solved by use of a "dialog manager",
> which virtualises the human-computer communication needs,
> and adapts them to the available display hardware. Again,
> these are old ideas, at least as old as the 1980's.
> Apollo even had a product called "Dialog Manager".
That means nodes need to understand user interactions.
They don't understand "temperatures" or "pressures" or
"flow rates" -- but you want them to understand user
interactions? They exist just to "save wire" (the
signal conditioning and data acquisition hardware would
have been present, regardless! It's just been moved into
the field instead of the equipment rack).
It's far easier to just let the remote node be an "I/O server"
(including display/keyboard server).
These sorts of user interactions are already, typically, included
in the applications' designs -- but, always from the "system
console"; from the *single* user. I.e., historically, the Supervisor
can command the inlet air heater ON from the central console; letting
an Operator do it from somewhere in the field is not significantly
different -- especially if there is virtually no (software) cost to
providing that "remote UI"!
What you want, IMO, is a consistent way of presenting "exceptions"
to the "remote" user. So a remote display that wants to complain
of a shorted RTD *OR* a communication failure presents that information
to the user in a consistent manner. I.e., the former can be notified
with the help of the central controller (as above). But, the latter
can't (cuz the controller is inaccessible) -- it has to be "handled"
by the remote node itself!
>> I liken this to early Windows (printer) drivers that would throw up
>> ...
>> Then, to add insult to injury, took the focus away from <whatever>
>
> That's because the device driver was operating at a level below
> the dialog manager (in this case, the Windows UI). As you say,
> the problem was solved by hooking it up differently, making the
> Windows UI available to the driver.
>
>> One possible approach is to just overlay <whatever> and wait for
>> some sort of acknowledgement.
>> Another approach is to alter the display in some unique way
>> (invert it, flash it, etc.) to draw attention to the fact that
>> a notification is pending.
>
> These are all just "human factors" design questions. They're
> complicated by the need to manage parallel processes - and to
> avoid switching the user's train of thought needlessly - but
They are *easier* in the distributed approach because the
user isn't trying to mix "operational activities" (like
monitoring the running process) with "diagnostic activities".
It's the equivalent of overlaying a "diagnostics window"
on the system console so the operational issues are no longer
"a distraction"... (except you can't do that as someone has to
be minding the store!)
> they must be tackled by modeling the communication on the
> user's mental processes, not on the hardware or the physical
> implementation. That's what a dialog manager must do.
>
>> Preferences? Anything I've not considered?
>
> In my opinion the interesting problem here is how nodes and
> controllers can discover the capabilities present in the
> network. Mere enumeration (like USB device enumeration) is
> not enough - that just shows what devices exist, not what
> purpose they serve or even how they are connected. Discovery
> by category (as implied by your namespaces) is not enough.
> It needs to be richer than this. DNS-SD is an example of a
> design that tries to solve this problem; it allows sending
> a query like "where is the closest A3 color printer to me?".
Doesn't apply. The *process* (application) is the thing that
assigns meaning to resources. The "nearest heater" means nothing
to the nodes (or the system) -- regardless of how you define "near".
What the application cares about is *roles* associated with
specific resources.
E.g., there may be a temperature sensor on the output of the inlet
air handler -- used by the control loop that regulates inlet air
temperature and moisture. In some installations, this might ALSO
act as the "input air temperature" for the reaction vessel. Or,
if there is too much "transport" (ductwork), another sensor might
be located *at* the reaction vessel to ensure more accurate
knowledge of the ACTUAL temperature of air entering the process.
At the same time, the input air temperature might be addressed with
a cascaded control loop to ensure this "input process air" is
at the desired operating point. The application needs to know
these things to make tuning more efficient, adjust alarm response
times, etc.
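"Roles, not discovery" can be made concrete with a small binding table: the set of roles is fixed by the application design, and one physical sensor may fill more than one role (names invented):

```python
# Role -> physical resource, fixed at design/installation time.
# In a short-ducted installation, one RTD serves two roles; with long
# transport, the second role would bind to a sensor *at* the vessel.
role_bindings = {
    "inlet_ahu_outlet_temp": "/node5/rtd0",
    "vessel_input_air_temp": "/node5/rtd0",   # same RTD, second role
    "vessel_outlet_temp": "/node9/rtd1",
}

def resource_for(role):
    # Control loops ask by *role*; they never enumerate hardware.
    return role_bindings[role]

shared = (resource_for("inlet_ahu_outlet_temp")
          == resource_for("vessel_input_air_temp"))
```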
If the application discovered that it had a "gun turret azimuth motor"
available, it wouldn't know how to shoot down enemy aircraft! :>
A colleague has suggested what might be a clever hack to address
the problem. But, I think it will require a change to the hardware.
Of course, the photoplots went off a week ago to the board house!
It sure would have been nice for the solution to have appeared
earlier! My error being the assumption that I'd "fix it in software"
without acknowledging that some things *can't* be fixed, there!
(e.g., hard to add a "motor" to a device just by tweaking the code!)
I've spent the past couple days looking for cheap ways to patch
the existing design (so I don't lose the time that was spent
getting the boards). But, I suspect I will soon have to switch to
figuring out how to *modify* the design and layout some new artwork.
Then, update the specs (this past week's effort :< ) to reflect this.
(sigh) "The best laid plans..." Trying to get things done quickly
inevitably takes longer. :< But, in the grand scheme of things,
it's not really a big delay. Just not as aggressive a schedule as
I had originally hoped...