Reply by Don Y May 14, 2017
On 5/4/2017 7:54 AM, Tim Wescott wrote:
> On Wed, 03 May 2017 23:14:09 -0700, Don Y wrote:
>
>> I want to push a bit stream through a series of *identical* devices and have (identical) software running on each determine if that particular device is being addressed, or not.
>>
>> It seems like the simplest way to do so is to require each device to examine an address field in the packet and, if '0', assume the data that follows is intended for its consumption. If not, decrement the address and pass the packet along.
>>
>> (I.e., the protocol would require a SERIAL_IN and SERIAL_OUT signals)
>>
>> Of course, this means a failure of any device renders those that follow it inaccessible. That's acceptable.
>>
>> It also means there is a latency proportional to the device address inherent in the protocol. Also acceptable (the data source can take that into consideration when creating the message stream). And, the incremental latency can be made pretty small.
>>
>> Another downside is the cost of updating data on *all* (multiple) devices as this requires N messages.
>>
>> An alternate message format (or, a cleverer encoding of the above) could be used to allow a single COMPOSITE message to be propagated in which a node encountering a '0' address field strips the data that is intended for its use and then reconstructs a message bearing a '0' address field with the BALANCE of the data that it then passes to the next device, in series. In effect, letting each subsequent device strip off "its" data before passing the remainder along.
>>
>> [When no data remains, the message dissipates]
>>
>> As such, you could "address" any set of contiguous addresses by creating an initial message of the form:
>>   <first_target_ID> <first_datum> ... <last_datum>
>> where the number of data present in the message implicitly defines the *last_target_id* relative to the first_target_id.
>>
>> Any problems with this sort of kludged encapsulation?
>
> I've done this, it worked -- the customer went through some personnel changes and the project died before it could get shipped, but c'est la vie.
>
> I wouldn't overthink things vis-a-vis the composite message -- just send lots of little messages and be happy. Complexity leads to bugs.
I've run some simulations with different protocol proposals. The "cascaded messages" scheme is a win when the number of nodes is high, individual messages are short (e.g., "ON", "OFF", "LEFT", "RIGHT", etc.) and the update rate is high (or, the time between updates short -- e.g., to support a short keep-alive interval).

So far, I've only looked at bit-oriented protocols, not "character-oriented". But I think character-oriented would probably make things worse for the individual-message-per-node case.

E.g., in one case, I have ~190 nodes and ~5 different message types. The most common messages are coded in two bits (00, 01) while the less common messages are in three (100, 101, 110, 111/unused). So, in the "common case", I can refresh the entire set of nodes with a single 7[addr]+(190*2)[data]+9[CRC] bit packet ... 396b total. If, instead, I deliver individual messages to each of the 190 nodes, I spend 190[nodes]*(7[addr]+2[data]+4[CRC]) = 2470b -- a bit more than 6 times the bandwidth (assuming a "free" means of detecting individual message extents in each case). I.e., a 2400 "baud" (but bit-oriented) link would allow roughly six full refreshes per second with the composite scheme versus about one with individual messages.
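The arithmetic above can be checked with a short script (the field widths are the ones from the example; the "6 times" figure is just the ratio of the two totals):

```python
# Field widths (bits) from the 190-node example above.
ADDR = 7             # address / skip-count field
CRC_COMPOSITE = 9    # CRC on the one long composite packet
CRC_INDIVIDUAL = 4   # CRC on each short per-node message
NODES = 190
DATA_PER_NODE = 2    # common-case opcode width (00/01)

# One composite message carrying a datum for every node:
composite = ADDR + NODES * DATA_PER_NODE + CRC_COMPOSITE

# One separately-framed message per node:
individual = NODES * (ADDR + DATA_PER_NODE + CRC_INDIVIDUAL)

print(composite)    # 396 bits per full refresh
print(individual)   # 2470 bits per full refresh
print(round(individual / composite, 1))  # ~6.2x the bandwidth
```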
> Use all ones in the address field as a 'broadcast' address.
>
> Or do the address decrementing by shifting down one bit, and consider the message "yours" if the least significant bit is zero. That only allows eight devices, but if you foresee a lot of traffic with identical messages for selected sets of devices, this would move you on down the road.
It doesn't scale well to larger "networks". I'm presently looking at alternate ways of encoding the "destination/skip-over" address as well as the potential for using a tagged message format -- so the protocol layer wouldn't need to know anything about the "content" of the message set. This would allow me to use exactly the same code in different applications and just have an up-call from the protocol handler to the message decoder -- letting the protocol handler know how/when/what to propagate without having to consult with the decoder in making that decision. Otherwise, the protocol layer code would have to understand the content of the message(s) to determine what portion to forward along.
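The per-node behavior being discussed -- decrement the skip count, or strip "your" datum from a composite message and hand it up to the decoder -- is small enough to sketch. This is my illustration, not code from the thread: the `consume` up-call and `forward` hook are hypothetical names, and real firmware would operate on a framed bit stream with CRC checking rather than a Python list.

```python
def handle_packet(skip, data, consume, forward):
    """One node's handling of a cascaded composite message.

    skip    -- remaining hop count (the 'address' field)
    data    -- per-node data items still in the message
    consume -- up-call into this node's message decoder
    forward -- transmits a packet to the next device in the chain
    """
    if skip > 0:
        # Not ours yet: decrement the address and pass everything along.
        forward(skip - 1, data)
    elif data:
        # Ours: strip the first item, hand it up, and forward the
        # balance with a '0' address so the next node does the same.
        consume(data[0])
        rest = data[1:]
        if rest:
            forward(0, rest)
        # When no data remains, the message dissipates here.
```

Note that the protocol layer never interprets the data items themselves; it only needs to know where item boundaries fall, which is what keeps the same code reusable across applications.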
Reply by Don Y May 8, 2017
On 5/8/2017 3:53 PM, Martin Riddle wrote:
>> So far, the problems that I've identified are related to error analysis (and recovery) along with distributed clock synchronization. Hard failures are (relatively) easy to address: fix the damn thing! :>
>
> Go over to Hack-a-day; there's someone that daisy-chained some Arduinos and just let each Arduino grab the last 5 bytes of the control packet and send the remaining bytes to the next device. Addressing is by the position of the data in the packet.
> He has no error control, but the idea of all the devices running the same code is what you're looking for.
The code is trivial. The question concerns identifying "issues" to which this approach might be more vulnerable (see above).

For example, if messages "dissipate" when they reach their intended node, then nodes at the far end see much less traffic than near-end nodes. They spend fewer resources processing (propagating) messages than their upstream peers. And, they are also slower to notice when the network has *crashed*. Etc.
Reply by Martin Riddle May 8, 2017
On Sat, 6 May 2017 12:29:11 -0700, Don Y <blockedofcourse@foo.invalid>
wrote:

> On 5/6/2017 9:14 AM, Martin Riddle wrote:
>> On Wed, 3 May 2017 23:14:09 -0700, Don Y <blockedofcourse@foo.invalid> wrote:
>>
>>> I want to push a bit stream through a series of *identical* devices and have (identical) software running on each determine if that particular device is being addressed, or not.
>>>
>>> It seems like the simplest way to do so is to require each device to examine an address field in the packet and, if '0', assume the data that follows is intended for its consumption. If not, decrement the address and pass the packet along.
>>>
>>> (I.e., the protocol would require a SERIAL_IN and SERIAL_OUT signals)
>>>
>>> Of course, this means a failure of any device renders those that follow it inaccessible. That's acceptable.
>>>
>>> It also means there is a latency proportional to the device address inherent in the protocol. Also acceptable (the data source can take that into consideration when creating the message stream). And, the incremental latency can be made pretty small.
>>>
>>> Another downside is the cost of updating data on *all* (multiple) devices as this requires N messages.
>>>
>>> An alternate message format (or, a cleverer encoding of the above) could be used to allow a single COMPOSITE message to be propagated in which a node encountering a '0' address field strips the data that is intended for its use and then reconstructs a message bearing a '0' address field with the BALANCE of the data that it then passes to the next device, in series. In effect, letting each subsequent device strip off "its" data before passing the remainder along.
>>>
>>> [When no data remains, the message dissipates]
>>>
>>> As such, you could "address" any set of contiguous addresses by creating an initial message of the form:
>>>   <first_target_ID> <first_datum> ... <last_datum>
>>> where the number of data present in the message implicitly defines the *last_target_id* relative to the first_target_id.
>>>
>>> Any problems with this sort of kludged encapsulation?
>>
>> You would need two fields, one the 'Target Address' and a 'Address Counter'
>
> No. The "address counter" serves a dual purpose: when the count attains '0', the message is intended for the current device. I.e., it indicates the number of devices *upstream* from the targeted device that should be skipped.
>
> [Of course, you can define other semantics]
>
>> The Target Address is the device to be controlled.
>> The Counter address is the Address that each device increments and passes to the next device.
>
> The only advantage, there, is that it then *tells* the device what its position (address) happens to be. If such a capability was needed, you could kludge that capability into this structure -- incurring that overhead only when needed.
>
>> But I don't see why an 8-pin address header is a bad idea. You can still do a pass-thru serial chain. Plus, the devices don't need to enumerate.
>
> Depends on the number of devices, the distance between them, the nominal data rates, etc. E.g., if you have to handle a network spanning hundreds of feet, you'd want to minimize the number of conductors (and associated line drivers/receivers). If you wanted to handle a network with hundreds of "devices" (note that a device is an arbitrary construct -- a physical device could, potentially, consist of 30 devices, if you opted to treat them as such!), then the width of the address field could exceed a fixed size.
>
> The logical idea of increasing the width of the address field then penalizes smaller networks as well as impacting maximum data rate (more symbols to xmit with "no" information content!). So, you could encode the "address"/ID in a manner that allows for it to expand to suit the needs of the particular network/message -- without unduly burdening short messages/nearby targets or long messages/distant ones.
>
>> I've done a 2-device serial pass-thru chain, but they were hard-coded addresses.
>
> So far, the problems that I've identified are related to error analysis (and recovery) along with distributed clock synchronization. Hard failures are (relatively) easy to address: fix the damn thing! :>
Go over to Hack-a-day; there's someone that daisy-chained some Arduinos and just let each Arduino grab the last 5 bytes of the control packet and send the remaining bytes to the next device. Addressing is by the position of the data in the packet. He has no error control, but the idea of all the devices running the same code is what you're looking for.

Cheers
Reply by Jack May 8, 2017
On Sunday, May 7, 2017 at 16:16:24 UTC+2, Les Cargill wrote:
 
> I wrote a proposal once for a MODBUS system to automate addressing, but it required a priori knowledge of the order of devices on the bus, powering them up in order, and nobody could guarantee that. Stuff was wound every which way, so it was decided that explicit addressing was preferred.
If you know the position of the devices, you can use something similar to the LIN bus shunt method to automatically address the devices. But you need some additional circuitry.

Bye Jack
Reply by Don Y May 7, 2017
On 5/7/2017 7:21 AM, Les Cargill wrote:
> Don Y wrote:
>> On 5/4/2017 4:47 PM, Les Cargill wrote:
>>> Don Y wrote:
>>>> I want to push a bit stream through a series of *identical* devices and have (identical) software running on each determine if that particular device is being addressed, or not.
>>>>
>>>> It seems like the simplest way to do so is to require each device to examine an address field in the packet and, if '0', assume the data that follows is intended for its consumption. If not, decrement the address and pass the packet along.
>>>>
>>>> (I.e., the protocol would require a SERIAL_IN and SERIAL_OUT signals)
>>>>
>>>> Of course, this means a failure of any device renders those that follow it inaccessible. That's acceptable.
>>>
>>> <snip>
>>>
>>> It's way simpler to just have a 422/485 bus.
>>
>> Then you have to have a way of making each device "unique". This typically means a configuration activity that is probably non-trivial. And/or stocking different "part numbers" for what are essentially the same component but with different addresses, etc.
>
> You have to have some way of managing this anyway.
>
> In your example -- do you somehow know that the third device in the chain is always the "bluralizer"? I suppose that makes sense.
In the pinball example, the guy who designed the wiring harness knows what order the "nodes" are visited by the wire -- and, this order doesn't change from machine to machine. The order is unlikely to change -- after manufacture or deployment. This is not true of "computers" and devices with assigned addresses. Also, as there will be an inherently LIKELY order, it will be trivial to deduce the order and adjust a table (that would almost surely exist) in the software. E.g., it would be unusual for the light adjacent to the left flipper to be followed by the light in the top RIGHT pop bumper and then the right flipper, then the top left pop bumper then the light adjacent to the very first light... etc.
> But these things are a truck roll anyhow, and 2.5 minutes sending a node address isn't that much more work.
"Minutes"?
> I wrote a proposal once for a MODBUS system to automate addressing, but it required a priori knowledge of the order of devices on the bus, powering them up in order, and nobody could guarantee that. Stuff was wound every which way, so it was decided that explicit addressing was preferred.
Assigning addresses in this scheme would be trivial:
  3 ADDRESS 14
would cause the first 3 devices to propagate the command, thereby informing the fourth device (in the chain) that its address is "14". But what does that buy you -- unless you can then reconfigure the network to a bus topology (to eliminate the processing delay through each device)?
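That assignment could ride on exactly the same skip-count mechanism as every other command. A hypothetical sketch (the command name, framing, and per-node state dict are my own invention for illustration):

```python
def handle_assign(skip, payload, my_state, forward):
    """Hypothetical ADDRESS command: '3 ADDRESS 14' tells the
    fourth device in the chain that its address is 14."""
    if skip > 0:
        forward(skip - 1, payload)      # not ours: propagate downstream
    else:
        my_state["address"] = payload   # ours: latch the assigned address

# Demo: a chain of six nodes, each forwarding to the next.
states = [{"address": None} for _ in range(6)]

def run(node, skip, payload):
    handle_assign(skip, payload, states[node],
                  lambda s, p: run(node + 1, s, p))

run(0, 3, 14)   # "3 ADDRESS 14" injected at the head of the chain
print(states[3]["address"])   # node 3 (the fourth device) now holds 14
```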
>> And, protocols to handle the case where two devices are NOT unique (i.e., duplicate address).
>
> In reality, that cannot be done for ostensibly "serial" stuff. CAN is different[1]; but it's still a CM fail.
>
> [1] I don't really recall what happens there.
In the daisy-chain case, it's not possible for more than one device to process a message intended for *one* device. (You could implement a message that is designed to be processed by some subset of devices -- even things like "every other", "every third", etc.)
>> Plus all the bus contention/arbitration possibilities, etc. (if you want to get data *back* from the devices)
>>
>> For example, with the cascaded scheme, you can wire all of the lamps on the playfield of a pinball machine to power and ground; then daisy chain a data line through a "smart lamp driver" located adjacent to each individual lamp.
>>
>> That *one* "lamp bus" signal would allow the processor to turn any lamp on or off (dim, flash, whatever) regardless of the number of lamps on the playfield. The system software would only need to know the lamp "order" of the prototype as all future playfields (for that model) would be wired in the same daisy-chain order.
>>
>> When a lamp driver fails, it could be replaced with a new component without any need for reconfiguration, testing for duplicate bus addresses, etc.
>
> That's true. That's a sort of "degenerate" case where your proposal makes sense.
>
> It still smacks of "look! If we're real clever, we can eliminate a whole wire!" which makes me eye-roll :)
OK, I'll ADD the wire back into the bundle -- and just mark it "spare". Does that feel better? Maybe add two or three, just in case?

Why doesn't RG6 have a spare conductor? Seems silly to only use 2; then have to deal with power injectors when you want to power an in-line amplifier up on the mast...

Why not run 4-pair (or 2-pair) around the interior of an automobile so the various modules can implement ethernet to talk to each other? Or, put a NIC at each of those lamps on the pinball playfield?

You're also ignoring the fact that the devices then become identical. You (user) can replace ANY failed/suspect device with a generic device of the same type -- without having to do any followup configuration of the device or the system.

I've been surprised at how many (sub-)applications I have that can benefit from NOT needing to configure specific devices to have specific addresses; cases where it would be great if a user could simply replace a defective unit with a factory-fresh one without also having to "configure" it. But, that's largely possible because I can control the cabling technology and topology.

E.g., whenever I add another node to the network in the office, I have to run a whole new wire from the switch, FOLLOWING the existing wire bundle as it makes its way around the room to the location of the new node. 20 years ago, I'd just unplug the coax feeding the device "downstream", insert the new node with an appropriate length of coax to that downstream node and be done with it.

[Of course, the network is disturbed *while* this is happening (as the downstream PORTION would be in my daisy-chain scheme) but that's an infrequent event and one in which I participate]

Nowadays, I have to track the lengths of all of the cables leaving the switch(es) -- so I have an idea how long of a cable I will need to service a new addition: "I'm going to be locating this new node between the color printer and the VM server. The cable to the printer is 13 ft long and the VM server is 18 ft. So, aim for about 15 ft and hope for the best (cuz there's no place to "store" any service loops and coming up short will mean removing the cable from the bundle and starting over)"

OTOH, I can move the color printer to another *room* and not have to change any configuration tables! (wouldn't have had to with 10Base2, either, but WOULD have with this daisy-chain scheme)
>> The same power/ground could be distributed to all of the "smart hammer drivers" sited adjacent to each of the solenoids/actuators on the playfield. That *one* "hammer bus" signal would allow the processor to turn any solenoid on or off (including supporting a pull-in coil, maximum duration timer, shorted/open coil detector, etc.) regardless of the number of drivers on the playfield.
>>
>> [The lamp driver and hammer driver could be similar save for current handling capacities and other "use related" issues in their software; you probably don't care if a lamp stays lit indefinitely -- but, a coil often has a duty cycle limit!]
>>
>> Likewise, you could have a "switch driver" that is distributed around the playfield to sense contact closures arising from the ball's motions.
>>
>> You could do this with *other* busses -- including those that require explicit addressing -- but with considerably more pain.
>
> Right.
Reply by Les Cargill May 7, 2017
Don Y wrote:
> On 5/4/2017 4:47 PM, Les Cargill wrote:
>> Don Y wrote:
>>> I want to push a bit stream through a series of *identical* devices and have (identical) software running on each determine if that particular device is being addressed, or not.
>>>
>>> It seems like the simplest way to do so is to require each device to examine an address field in the packet and, if '0', assume the data that follows is intended for its consumption. If not, decrement the address and pass the packet along.
>>>
>>> (I.e., the protocol would require a SERIAL_IN and SERIAL_OUT signals)
>>>
>>> Of course, this means a failure of any device renders those that follow it inaccessible. That's acceptable.
>>
>> <snip>
>>
>> It's way simpler to just have a 422/485 bus.
>
> Then you have to have a way of making each device "unique". This typically means a configuration activity that is probably non-trivial. And/or stocking different "part numbers" for what are essentially the same component but with different addresses, etc.
You have to have some way of managing this anyway.

In your example -- do you somehow know that the third device in the chain is always the "bluralizer"? I suppose that makes sense.

But these things are a truck roll anyhow, and 2.5 minutes sending a node address isn't that much more work.

I wrote a proposal once for a MODBUS system to automate addressing, but it required a priori knowledge of the order of devices on the bus, powering them up in order, and nobody could guarantee that. Stuff was wound every which way, so it was decided that explicit addressing was preferred.
> And, protocols to handle the case where two devices are NOT unique (i.e., duplicate address).
In reality, that cannot be done for ostensibly "serial" stuff. CAN is different[1]; but it's still a CM fail.

[1] I don't really recall what happens there.
> Plus all the bus contention/arbitration possibilities, etc. (if you want to get data *back* from the devices)
>
> For example, with the cascaded scheme, you can wire all of the lamps on the playfield of a pinball machine to power and ground; then daisy chain a data line through a "smart lamp driver" located adjacent to each individual lamp.
>
> That *one* "lamp bus" signal would allow the processor to turn any lamp on or off (dim, flash, whatever) regardless of the number of lamps on the playfield. The system software would only need to know the lamp "order" of the prototype as all future playfields (for that model) would be wired in the same daisy-chain order.
>
> When a lamp driver fails, it could be replaced with a new component without any need for reconfiguration, testing for duplicate bus addresses, etc.
That's true. That's a sort of "degenerate" case where your proposal makes sense.

It still smacks of "look! If we're real clever, we can eliminate a whole wire!" which makes me eye-roll :)
> The same power/ground could be distributed to all of the "smart hammer drivers" sited adjacent to each of the solenoids/actuators on the playfield. That *one* "hammer bus" signal would allow the processor to turn any solenoid on or off (including supporting a pull-in coil, maximum duration timer, shorted/open coil detector, etc.) regardless of the number of drivers on the playfield.
>
> [The lamp driver and hammer driver could be similar save for current handling capacities and other "use related" issues in their software; you probably don't care if a lamp stays lit indefinitely -- but, a coil often has a duty cycle limit!]
>
> Likewise, you could have a "switch driver" that is distributed around the playfield to sense contact closures arising from the ball's motions.
>
> You could do this with *other* busses -- including those that require explicit addressing -- but with considerably more pain.
Right.

--
Les Cargill
Reply by Don Y May 6, 2017
On 5/6/2017 9:14 AM, Martin Riddle wrote:
> On Wed, 3 May 2017 23:14:09 -0700, Don Y <blockedofcourse@foo.invalid> wrote:
>
>> I want to push a bit stream through a series of *identical* devices and have (identical) software running on each determine if that particular device is being addressed, or not.
>>
>> It seems like the simplest way to do so is to require each device to examine an address field in the packet and, if '0', assume the data that follows is intended for its consumption. If not, decrement the address and pass the packet along.
>>
>> (I.e., the protocol would require a SERIAL_IN and SERIAL_OUT signals)
>>
>> Of course, this means a failure of any device renders those that follow it inaccessible. That's acceptable.
>>
>> It also means there is a latency proportional to the device address inherent in the protocol. Also acceptable (the data source can take that into consideration when creating the message stream). And, the incremental latency can be made pretty small.
>>
>> Another downside is the cost of updating data on *all* (multiple) devices as this requires N messages.
>>
>> An alternate message format (or, a cleverer encoding of the above) could be used to allow a single COMPOSITE message to be propagated in which a node encountering a '0' address field strips the data that is intended for its use and then reconstructs a message bearing a '0' address field with the BALANCE of the data that it then passes to the next device, in series. In effect, letting each subsequent device strip off "its" data before passing the remainder along.
>>
>> [When no data remains, the message dissipates]
>>
>> As such, you could "address" any set of contiguous addresses by creating an initial message of the form:
>>   <first_target_ID> <first_datum> ... <last_datum>
>> where the number of data present in the message implicitly defines the *last_target_id* relative to the first_target_id.
>>
>> Any problems with this sort of kludged encapsulation?
>
> You would need two fields: one the 'Target Address' and an 'Address Counter'.
No. The "address counter" serves a dual purpose: when the count attains '0', the message is intended for the current device. I.e., it indicates the number of devices *upstream* from the targeted device that should be skipped.

[Of course, you can define other semantics]
> The Target Address is the device to be controlled.
> The Counter address is the Address that each device increments and passes to the next device.
The only advantage, there, is that it then *tells* the device what its position (address) happens to be. If such a capability were needed, you could kludge it into this structure -- incurring that overhead only when needed.
> But I don't see why an 8-pin address header is a bad idea. You can still do a pass-thru serial chain. Plus, the devices don't need to enumerate.
Depends on the number of devices, the distance between them, the nominal data rates, etc. E.g., if you have to handle a network spanning hundreds of feet, you'd want to minimize the number of conductors (and associated line drivers/receivers). If you wanted to handle a network with hundreds of "devices" (note that a device is an arbitrary construct -- a physical device could, potentially, consist of 30 devices, if you opted to treat them as such!), then the width of the address field could exceed a fixed size.

The logical idea of increasing the width of the address field then penalizes smaller networks as well as impacting maximum data rate (more symbols to xmit with "no" information content!). So, you could encode the "address"/ID in a manner that allows for it to expand to suit the needs of the particular network/message -- without unduly burdening short messages/nearby targets or long messages/distant ones.
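One common way to let an address field "expand to suit" is a base-128 varint-style encoding: 7 payload bits per byte, with the high bit flagging "more bytes follow". This is my illustration of the idea, not a format anyone in the thread specified -- small skip counts (nearby targets) cost one byte, while large networks can still be addressed without widening every packet:

```python
def encode_addr(n):
    """Variable-length address: 7 bits per byte, MSB = continuation."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)  # more significant bits follow
        else:
            out.append(b)         # final byte: continuation bit clear
            return bytes(out)

def decode_addr(buf):
    """Inverse of encode_addr; returns (value, bytes_consumed)."""
    n = shift = i = 0
    while True:
        b = buf[i]
        n |= (b & 0x7F) << shift
        i += 1
        if not (b & 0x80):
            return n, i
        shift += 7

print(len(encode_addr(5)))     # 1 byte for a nearby target
print(len(encode_addr(300)))   # 2 bytes for a distant one
```

The same trick applies per message rather than per network, so a short chain never pays for the address width a long chain might need.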
> I've done a 2 device serial pass thru chain, but they were hard coded addresses.
So far, the problems that I've identified are related to error analysis (and recovery) along with distributed clock synchronization. Hard failures are (relatively) easy to address: fix the damn thing! :>
Reply by Martin Riddle May 6, 2017
On Wed, 3 May 2017 23:14:09 -0700, Don Y <blockedofcourse@foo.invalid>
wrote:

> I want to push a bit stream through a series of *identical* devices and have (identical) software running on each determine if that particular device is being addressed, or not.
>
> It seems like the simplest way to do so is to require each device to examine an address field in the packet and, if '0', assume the data that follows is intended for its consumption. If not, decrement the address and pass the packet along.
>
> (I.e., the protocol would require a SERIAL_IN and SERIAL_OUT signals)
>
> Of course, this means a failure of any device renders those that follow it inaccessible. That's acceptable.
>
> It also means there is a latency proportional to the device address inherent in the protocol. Also acceptable (the data source can take that into consideration when creating the message stream). And, the incremental latency can be made pretty small.
>
> Another downside is the cost of updating data on *all* (multiple) devices as this requires N messages.
>
> An alternate message format (or, a cleverer encoding of the above) could be used to allow a single COMPOSITE message to be propagated in which a node encountering a '0' address field strips the data that is intended for its use and then reconstructs a message bearing a '0' address field with the BALANCE of the data that it then passes to the next device, in series. In effect, letting each subsequent device strip off "its" data before passing the remainder along.
>
> [When no data remains, the message dissipates]
>
> As such, you could "address" any set of contiguous addresses by creating an initial message of the form:
>   <first_target_ID> <first_datum> ... <last_datum>
> where the number of data present in the message implicitly defines the *last_target_id* relative to the first_target_id.
>
> Any problems with this sort of kludged encapsulation?
You would need two fields: one the 'Target Address' and an 'Address Counter'.

The Target Address is the device to be controlled. The Counter address is the address that each device increments and passes to the next device.

But I don't see why an 8-pin address header is a bad idea. You can still do a pass-thru serial chain. Plus, the devices don't need to enumerate.

I've done a 2-device serial pass-thru chain, but they were hard-coded addresses.

Cheers
Reply by Don Y May 5, 2017
On 5/4/2017 10:30 PM, Robert Wessel wrote:
> On Thu, 4 May 2017 13:17:18 -0700, Don Y <blockedofcourse@foo.invalid> wrote:
>
>> On 5/4/2017 12:53 PM, Robert Wessel wrote:
>>> On Thu, 4 May 2017 10:52:40 -0700, Don Y <blockedofcourse@foo.invalid> wrote:
>>>
>>>> I'd like to come up with a scheme that I could apply to a variety of similarly "open ended" network configurations -- that have very different physical characteristics (e.g., a 10 ft long chain and a 300 ft long chain -- trading data rate for network size)
>>>
>>> At some point you need to worry about reinventing Ethernet. And many MCU's are available with Ethernet for a very small incremental cost. Although perhaps not at the very low end. A huge advantage is that you can punt much of the infrastructure to the customer ("It's Ethernet, deal with it.").
>>
>> Depends on the nature of the "messages" being passed -- their content and frequency.
>>
>> E.g., would you spring for the cost of a NIC, MCU capable of supporting a TCP (or even just UDP) "stack" -- just to command individual hammer drivers on/off at some sub-sonic rate?
>
> I'm not sure I can answer that definitively, even for myself, but as a first order approximation, if it's a separate box, the first choice of interconnect should be Ethernet or one of its wireless cousins (WiFi, Bluetooth), if a wireless connection is required. And UDP+IP does not impose much of a burden, even on an OS-less MCU.
Why the "separate box" criterion? Should car manufacturers replace CAN with Ethernet? Should PC manufacturers use ethernet to connect their floppy disk drive to the CPU (on the other end of the motherboard)?

See my pinball machine example, elsewhere, this thread. Should the pushbutton controls for my garage door opener use ethernet frames to pass "open" and "close" commands to the actuator head? Or, individual keystrokes from the external keypad to the controller in the actuator?
> Which is not to say that there aren't going to be exceptions. But I'm pretty sure things are at the point where you need to justify *not* using Ethernet on an external connection.
Should my doorbell use an ethernet connection? My thermostat to talk to the furnace? As I said, it depends on the nature of the data ("messages") being exchanged and their frequency.

My CCTV cameras are IP based -- because they move a fair bit of data, continuously, and I need (choose) a bit of processing power AT the camera. It's easier to run CAT5 with a PoE injector than to run a separate power connection and coaxial video cable back to <something>. Esp if I want to be able to add more cameras without having to upgrade an N-port "camera card" in a PC... (N = 4 or 8)
>> OTOH, running a bunch of wires from *one* set of hammer drivers
>> (accessed "in parallel" from an MCU port) out to a bunch of
>> solenoids dramatically changes how you wire them, how you
>> deal with expansion, etc.
>
> Direct drive stuff is one obvious exception.  But you're asking about
> devices you're sending packets to.
Why can't I send packets to solenoids?  Again, see the pinball example.

One wall of the house is glass.  Blinds on each window.  Do I run
Ethernet to each window?  Or, do I install a "blind controller"
<somewhere> and pass messages over a low speed serial interface to
tiny motes mounted on the motors for each of the individual blinds:

    3OPEN

causes the 4th motor "from here" -- "external" to "here" -- to open,
while:

    2CLOSE

causes the 3rd motor to close.
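A node's handling of such a message -- act on it when the address field
has counted down to 0, otherwise decrement and forward -- fits in a few
lines of C.  This is only a sketch; the message layout and names are
invented for illustration, not taken from any real firmware:

```c
/* Sketch of the decrement-and-forward addressing scheme.
   msg_t, node_handle() and the callback names are illustrative only. */
#include <stdint.h>
#include <string.h>

#define CMD_LEN 8  /* fixed-size command field, e.g. "OPEN", "CLOSE" */

typedef struct {
    uint8_t addr;          /* hops remaining; 0 means "for me" */
    char    cmd[CMD_LEN];  /* command payload                  */
} msg_t;

/* Returns 1 if this node consumed the message, 0 if it was
   forwarded (with the address decremented) to the next node. */
int node_handle(msg_t *m, void (*act)(const char *cmd),
                void (*fwd)(const msg_t *m))
{
    if (m->addr == 0) {   /* e.g., "3OPEN" has reached the 4th node */
        act(m->cmd);
        return 1;
    }
    m->addr--;            /* one hop closer to the target */
    fwd(m);
    return 0;
}
```

Note that no node ever knows its own address; "3OPEN" simply arrives at
the 4th node as "0OPEN", which is the whole point of the scheme.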
>> With Ethernet, you'd have to ensure each node had a unique address
>> so you could distribute the message to all and know that the
>> right node acted on it -- AND, that no OTHER node acted on it!
>
> Any device claiming to be Ethernet should have the process for
> attaching a unique MAC address well established.  Use the MAC number
> as the device serial number (pre-pend some of your own digits for
> sanity, on the sticker you attach to the outside of the device).
But, then they aren't IDENTICAL devices!  I can rephrase my problem
and claim I'm using MCUs with serial numbers laser etched into their
silicon at the factory.  Now, I have a means of uniquely identifying
each device -- even if I talk to them using CAN or any other transport
layer.  No need to rely on wiring "position" to identify a device!

Remove the MAC address and Ethernet can't solve the problem, either.
I.e., you'd have to ADD the MAC address back in -- or some other
"unique identifier".  Apples to apples.
>> By interposing each node in a serial chain, a node need not
>> care about its "address" -- it's *implied* (when the ID byte
>> is 0, the message is for you.  Otherwise, it's for someone
>> DOWNSTREAM from you).  So, you can produce identical devices
>> and still treat them as individually unique.
>>
>> Of course, this makes binding addresses to physical nodes
>> a separate issue: you either carefully document the
>> physical (electrical?) order of the nodes at deployment
>> *or* introduce a "configuration step" in which you
>> empirically derive the (address,node) mapping.
>>
>>   "I just activated node 1.  What do you see as having
>>   changed in the field?  OK, I'll write that down.  Now
>>   proceeding to node 2..."
>>
>> This would be tedious *if* the field configuration changed
>> often.  But, if it is largely static and only updated as
>> needs evolve (or, in response to failures), then it's almost
>> assumed that some degree of manual intervention will be
>> required (i.e., someone has to swap out the defective node
>> or splice a new node into the chain <somewhere> -- which
>> may not be at the *end*)
>
> The implied address approach doesn't really fix the problem.  Sure,
> someone attached *something* to port 7 of controller 13, but did they
> do that correctly?
Sure, someone bought a box of 50 model 123 Ethernet valve controllers. And, I can now "see" the controller that they just cabled to my system. Did they attach the correct device? Did they wire it to the correct valve? Or, is it wired to a light bulb, instead? Using a different transport protocol doesn't tell you anything about the outside world.
> IOW, is that water valve on 13/7 actually attached
> to the cold supply to the motor cooling system, or did someone attach
> it to the hot water line to the reactor heater?
Is the water valve on 12:34:56:78:9A:BC port 7 actually attached to the cold supply to the motor cooling system, or did someone attach it to the hot water line to the reactor heater? Or, to the hand towel dispenser in the men's room?
> Or is what they plugged into 13/7 actually attached to the electrical
> power switch for the ammonia pump motor?  Or heck, maybe it *is* an
> actual accelerometer, of the type you wanted, but did they plug the X
> axis one into 13/7 like they should have, or did they get the Y or Z
> axis one?
How is any of this made BETTER or more robust/reliable by using
Ethernet?
> There are also diagnostic advantages to being able to see the entire
> network from one place.
Why can't I see the daisy-chained devices?
> It also offers some opportunities for
> redundancy (controller 13 appears to be unresponsive, have the
> monitoring system send a predefined safing command to all the devices
> 13 was supposed to be controlling).
How do you *talk* to those devices if the controller that handles them
is unresponsive?  Why does Ethernet to an unresponsive controller give
you that ability, but daisy-chain to an unresponsive controller
doesn't?  That's a separate design issue.  And, you have to decide how
much you want to invest in your solution to address those conditions.

If I sense a water leak in the house, I have no way of knowing WHERE
it is.  Perhaps a toilet is "running on".  Or, maybe a hose to the
washing machine has ruptured.  Or, someone left a faucet partially
open.  (all have happened, here)

In my case, I can shut off water TO THE HOUSE.  But, not to the
toilet, individually.  Or, the faucet.  So, the magnitude of the leak
weighs into how I attempt to protect the house.

If it's a slow leak, chances are it's a toilet running on or a faucet
that wasn't turned off completely.  Send a message to the occupants --
even if they are away on vacation.  If no response, maybe let the leak
continue (running up the water bill for this month) *or* shut off the
house supply -- until you *need* to turn it back on (e.g., to water
the yard).

If I wanted more options in handling this "failure", I'd spend more
resources on better isolating the source of the leak AND limiting the
damage that it could potentially cause.

If I want to be able to turn off ALL the solenoids because I suspect
one solenoid controller may be crashed with its solenoid engaged
(which could cause it to burn up), then I'd add a disconnect in the
power feed to the "solenoid bus".  Or, a special disconnect for just
that one solenoid.
> If you need higher levels of
> redundancy, having two Ethernet ports on devices is also easy, and
> having two parallel sets of Ethernet switches, with a few
> interconnects, leaves you with two links to any device.
So, the lights on the pinball machine's playfield should have *two* ethernet connections? The irrigation valves should similarly have two?
> Certainly the setup and verification procedures will be *different*.
> I see modest-to-large advantages in a fair number of situations for
> the Ethernet approach, and small-to-modest advantages for the custom
> approach in fewer others, and a majority where it's a wash.
I'd be interested in the modest-to-large advantages you see in
controlling individual irrigation valves.  Or, lights/actuators on a
pinball machine playfield.  Or, the lighted pushbuttons on a large
broadcast studio video mixer:

<https://www.shutterstock.com/video/clip-12972671-stock-footage-broadcast-tv-studio-production-vision-switcher-broadcast-video-mixer-pan-right.html>

(Do you really think there is one *giant* PCB hiding behind all of
those LPBs and a gazillion "output drivers" wired, in parallel, to
service all those lamps/buttons?  Or, do you think, perhaps, they are
organized in IDENTICAL subassemblies hiding behind the top cover and
wired together using some sort of message passing bus?  Do you think
it is Ethernet based?  Think about what the messages being passed
around, there, are likely to contain.)
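For concreteness, the composite-message variant described at the start
of this thread -- a node seeing address 0 strips the FIRST datum for
itself and forwards the remainder, still tagged 0, until the message
dissipates -- could look something like this.  The struct layout and
names are invented for illustration:

```c
/* Sketch of the composite-message variant: one message carries data
   for a contiguous run of nodes.  Layout and names are illustrative. */
#include <stdint.h>
#include <string.h>

typedef struct {
    uint8_t first_id;   /* address of the first datum's target */
    uint8_t count;      /* data remaining in the message       */
    uint8_t data[32];   /* one datum per addressed node        */
} cmsg_t;

/* Returns 1 if a datum was taken for this node.  The message is
   forwarded via fwd() unless it has dissipated (count == 0). */
int node_strip(cmsg_t *m, uint8_t *mine, void (*fwd)(const cmsg_t *))
{
    int took = 0;
    if (m->first_id == 0 && m->count > 0) {
        *mine = m->data[0];                        /* my datum      */
        memmove(m->data, m->data + 1, --m->count); /* strip it off  */
        took = 1;
    } else if (m->first_id > 0) {
        m->first_id--;   /* not yet reached the first target */
    }
    if (m->count > 0)    /* message dissipates when empty */
        fwd(m);
    return took;
}
```

So `<first_target_ID> <first_datum> ... <last_datum>` addresses a
contiguous run of nodes in one message, with the last target implied
by the datum count, exactly as described above.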
Reply by Robert Wessel May 5, 2017
On Thu, 4 May 2017 13:17:18 -0700, Don Y <blockedofcourse@foo.invalid>
wrote:

>On 5/4/2017 12:53 PM, Robert Wessel wrote:
>> On Thu, 4 May 2017 10:52:40 -0700, Don Y <blockedofcourse@foo.invalid>
>> wrote:
>>
>>> I'd like to come up with a scheme that I could apply to a variety of
>>> similarly "open ended" network configurations -- that have very
>>> different physical characteristics (e.g., a 10 ft long chain
>>> and a 300 ft long chain -- trading data rate for network size)
>>
>> At some point you need to worry about reinventing Ethernet.  And many
>> MCU's are available with Ethernet for a very small incremental cost.
>> Although perhaps not at the very low end.  A huge advantage is that
>> you can punt much of the infrastructure to the customer ("It's
>> Ethernet, deal with it.").
>
>Depends on the nature of the "messages" being passed -- their
>content and frequency.
>
>E.g., would you spring for the cost of a NIC, MCU capable
>of supporting a TCP (or even just UDP) "stack" -- just to
>command individual hammer drivers on/off at some sub-sonic rate?
I'm not sure I can answer that definitively, even for myself, but as a
first order approximation, if it's a separate box, the first choice of
interconnect should be Ethernet or one of its wireless cousins (WiFi,
Bluetooth), if a wireless connection is required.  And UDP+IP does not
impose much of a burden, even on an OS-less MCU.

Which is not to say that there aren't going to be exceptions.  But I'm
pretty sure things are at the point where you need to justify *not*
using Ethernet on an external connection.
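As a rough measure of that burden: on a hosted system, a complete UDP
datagram round trip over loopback takes only the standard BSD sockets
calls.  This is a sketch (the port number and payload below are
arbitrary examples, and an MCU stack would substitute its own socket
API), but the protocol work it represents is the same:

```c
/* Sketch: send a datagram to ourselves on the loopback interface
   and read it back.  Port and payload are arbitrary examples. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

/* Send `msg` to 127.0.0.1:port and receive it on the same socket.
   Returns bytes received, or -1 on error. */
ssize_t udp_echo_self(uint16_t port, const char *msg,
                      char *buf, size_t buflen)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) return -1;

    struct sockaddr_in addr = { 0 };
    addr.sin_family      = AF_INET;
    addr.sin_port        = htons(port);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    ssize_t n = -1;
    if (bind(s, (struct sockaddr *)&addr, sizeof addr) == 0 &&
        sendto(s, msg, strlen(msg), 0,
               (struct sockaddr *)&addr, sizeof addr) >= 0)
        n = recvfrom(s, buf, buflen, 0, NULL, NULL);
    close(s);
    return n;
}
```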
>OTOH, running a bunch of wires from *one* set of hammer drivers
>(accessed "in parallel" from an MCU port) out to a bunch of
>solenoids dramatically changes how you wire them, how you
>deal with expansion, etc.
Direct drive stuff is one obvious exception. But you're asking about devices you're sending packets to.
>With Ethernet, you'd have to ensure each node had a unique address
>so you could distribute the message to all and know that the
>right node acted on it -- AND, that no OTHER node acted on it!
Any device claiming to be Ethernet should have the process for attaching a unique MAC address well established. Use the MAC number as the device serial number (pre-pend some of your own digits for sanity, on the sticker you attach to the outside of the device).
>By interposing each node in a serial chain, a node need not
>care about its "address" -- it's *implied* (when the ID byte
>is 0, the message is for you.  Otherwise, it's for someone
>DOWNSTREAM from you).  So, you can produce identical devices
>and still treat them as individually unique.
>
>Of course, this makes binding addresses to physical nodes
>a separate issue: you either carefully document the
>physical (electrical?) order of the nodes at deployment
>*or* introduce a "configuration step" in which you
>empirically derive the (address,node) mapping.
>
>  "I just activated node 1.  What do you see as having
>  changed in the field?  OK, I'll write that down.  Now
>  proceeding to node 2..."
>
>This would be tedious *if* the field configuration changed
>often.  But, if it is largely static and only updated as
>needs evolve (or, in response to failures), then it's almost
>assumed that some degree of manual intervention will be
>required (i.e., someone has to swap out the defective node
>or splice a new node into the chain <somewhere> -- which
>may not be at the *end*)
The implied address approach doesn't really fix the problem.  Sure,
someone attached *something* to port 7 of controller 13, but did they
do that correctly?  IOW, is that water valve on 13/7 actually attached
to the cold supply to the motor cooling system, or did someone attach
it to the hot water line to the reactor heater?  Or is what they
plugged into 13/7 actually attached to the electrical power switch for
the ammonia pump motor?  Or heck, maybe it *is* an actual
accelerometer, of the type you wanted, but did they plug the X axis
one into 13/7 like they should have, or did they get the Y or Z axis
one?

There are also diagnostic advantages to being able to see the entire
network from one place.  It also offers some opportunities for
redundancy (controller 13 appears to be unresponsive, have the
monitoring system send a predefined safing command to all the devices
13 was supposed to be controlling).  If you need higher levels of
redundancy, having two Ethernet ports on devices is also easy, and
having two parallel sets of Ethernet switches, with a few
interconnects, leaves you with two links to any device.

Certainly the setup and verification procedures will be *different*.
I see modest-to-large advantages in a fair number of situations for
the Ethernet approach, small-to-modest advantages for the custom
approach in fewer others, and a majority where it's a wash.