
Binary protocol design: TLV, LTV, or else?

Started by Aleksandar Kuktin January 8, 2014
On 2014-01-12, Don Y <this@isnotme.com> wrote:
> On 1/12/2014 3:55 AM, Simon Clubley wrote:
>>
>> If either of those were accessible in the way a 10Base2 cable could be,
>> then the answer is probably yes. :-)
>
> Well, I suppose you could take out a 15 foot ladder and climb up
> onto a deployed device *IN USE* and start tugging on cables.
> Of course, if you did so, "disrupting the network" would be
> the least of your concerns (I'd worry more about breaking your
> neck from the fall or getting electrocuted as you climb over a
> device that isn't intended to be "walked on" :> )
It wasn't physically possible to do that in all environments unfortunately.

Consider, for example, some possible office environments from the 1990s.
These days, if someone disrupts their own connection, it's only their own
device which is affected, but in that timeframe you might have had a 10Base2
connection going from device to device within a region of a building.

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
[Note: email address not currently working as the system is physically moving]
Microsoft: Bringing you 1980s technology to a 21st century world
Hi Simon,

On 1/12/2014 12:47 PM, Simon Clubley wrote:
> On 2014-01-12, Don Y<this@isnotme.com> wrote:
>> On 1/12/2014 3:55 AM, Simon Clubley wrote:
>>>
>>> If either of those were accessible in the way a 10Base2 cable could be,
>>> then the answer is probably yes. :-)
>>
>> Well, I suppose you could take out a 15 foot ladder and climb up
>> onto a deployed device *IN USE* and start tugging on cables.
>> Of course, if you did so, "disrupting the network" would be
>> the least of your concerns (I'd worry more about breaking your
>> neck from the fall or getting electrocuted as you climb over a
>> device that isn't intended to be "walked on" :> )
>
> It wasn't physically possible to do that in all environments unfortunately.
Of course! Nor is it likely that you'll have a dozen or more nodes for a single individual (or, an entire subnet, for that matter)!

Being able to use a (bus) network *in* a product instead of having to run control cables to a central "electronics cabinet" (star) makes a *huge* difference in installation and maintenance costs!

E.g., a licensed electrician is required to "run cable" in most facilities. You want to run sense leads from thermocouples, dew point sensors, anemometers, etc. to a "controller" and you spend several days of that electrician's time routing each cable to the equipment cabinet. And, those costs vary depending on how easy it is to get from points A,B,C... to that cabinet. It also determines where you can *locate* that cabinet (without "optional" supplemental signal conditioning).

OTOH, if you can wire all the field devices at the manufacturing facility and just have *one* cable that the electrician has to route (besides "utilities"), then installation costs drop by several kilobucks!
> Consider, for example, some possible office environments from the 1990s.
> These days, if someone disrupts their own connection, it's only their own
> device which is affected, but in that timeframe you might have had a 10Base2
> connection going from device to device within a region of a building.
Of course! But, in my case, they're *all* "my" connections. And, I'd be aware of what sort of traffic is live on the network when I opted to disconnect a host (which can be done without interrupting the rest of the segment provided you aren't *moving* that host and necessitating a "cable adjustment").

I see more issues with twisted pair wiring because it "looks innocent"; people aren't "intimidated" by it. And, the connectors are total crap. Worse yet, they *almost* work when the locking tab snaps off -- until the connector works its way loose (because someone moved the piece of equipment into which it was plugged).

Then, we have all the home-made cables to contend with (it seems much easier to build a robust BNC-terminated cable than a twisted pair... for one thing, you don't need a magnifying glass to inspect your work!)

[I received an accusatory message the other day claiming that *I* "broke the printer". I replied: "Your handyman was there drilling holes in the counters. Wanna bet he moved the printer to do that? Wanna bet there's a cable to/from the printer that is now not seated properly in its jack?" Long silence. "Um, next time you're here, could you please fix the printer cable for us?"]

(And, we'll ignore the unfortunate "compatibility" with RJ11's...)

One thing that was great about orange hose was that **nobody** messed with it! :>
On Sun, 12 Jan 2014 19:47:02 +0000 (UTC), Simon Clubley
<clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:

>On 2014-01-12, Don Y <this@isnotme.com> wrote:
>> On 1/12/2014 3:55 AM, Simon Clubley wrote:
>>>
>>> If either of those were accessible in the way a 10Base2 cable could be,
>>> then the answer is probably yes. :-)
>>
>> Well, I suppose you could take out a 15 foot ladder and climb up
>> onto a deployed device *IN USE* and start tugging on cables.
>> Of course, if you did so, "disrupting the network" would be
>> the least of your concerns (I'd worry more about breaking your
>> neck from the fall or getting electrocuted as you climb over a
>> device that isn't intended to be "walked on" :> )
>
>It wasn't physically possible to do that in all environments unfortunately.
>
>Consider, for example, some possible office environments from the 1990s.
>These days, if someone disrupts their own connection, it's only their own
>device which is affected, but in that timeframe you might have had a 10Base2
>connection going from device to device within a region of a building.
The nasty thing about 10Base2 is that the cable shield should be grounded at _exactly_ one point, usually at one of the terminator resistors. Thus, if a BNC connector touched a grounded metallic cable duct, the network failed. So you had to cover the connectors with some insulating material and also make sure that any T-connector disconnected from a device did not make contact with any grounded objects.
On Sun, 12 Jan 2014 13:47:07 -0700, Don Y <this@isnotme.com> wrote:

>Being able to use a (bus) network *in* a product instead of having
>to run control cables to a central "electronics cabinet" (star)
>makes a *huge* difference in installation and maintenance costs!
Since branches are not allowed in 10Base2, you have to run the bus via _all_ devices, one cable to the T-connector and another cable back, quickly extending past the 200 m limit.

In the 10Base5 days, the thick cable was run the shortest way around the building and long AUI cables were run from each computer to the vampire tap transceiver sitting on the RG-8 bus cable.

Later on, external 10Base2 transceivers with 15-pin AUI connectors could be placed optimally along the shortest bus path and again connect the device via the AUI cable to the transceiver.

With the use of integrated transceivers and T-connectors, you had to route the Ethernet traffic back and forth, losing most of the benefits of a bus structure.
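To put rough numbers on that back-and-forth cost, here is a small back-of-the-envelope sketch in Python. Every distance in it is a made-up, illustrative figure, and the 185 m value is the usual thin-coax segment limit that the "200 m" above rounds up from; none of it comes from the thread itself.

# Back-of-the-envelope: how routing a thin-coax bus out to every device's
# T-connector and back eats into the segment budget, versus keeping the bus
# on the shortest path and reaching devices over AUI drop cables (10Base5
# vampire taps or external 10Base2 transceivers).

SEGMENT_LIMIT_M = 185                          # thin-coax segment limit ("200 m" rounded up)

shortest_path_m = 120                          # assumed shortest loop around the floor
detours_m = [4, 7, 3, 10, 6, 8, 5, 9, 4, 7]    # assumed one-way detour to each device

# Integrated transceiver + T-connector: the coax itself must detour to each
# device and come back, so every detour costs twice its one-way length.
coax_with_detours = shortest_path_m + 2 * sum(detours_m)

# External transceiver on the shortest path: the coax stays short and the
# detours ride on AUI cables, which don't count against the coax segment.
coax_on_path = shortest_path_m
aui_total = sum(detours_m)

over = "OVER" if coax_with_detours > SEGMENT_LIMIT_M else "within"
print(f"coax via every T-connector: {coax_with_detours} m ({over} the {SEGMENT_LIMIT_M} m limit)")
print(f"coax on shortest path     : {coax_on_path} m, plus {aui_total} m of AUI drops")

With those assumed numbers the detoured coax comes to 246 m, already past the segment limit, while the shortest-path bus stays at 120 m.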
On 1/12/2014 9:09 PM, upsidedown@downunder.com wrote:
> On Sun, 12 Jan 2014 13:47:07 -0700, Don Y<this@isnotme.com> wrote:
>
>> Being able to use a (bus) network *in* a product instead of having
>> to run control cables to a central "electronics cabinet" (star)
>> makes a *huge* difference in installation and maintenance costs!
>
> Since branches are not allowed in 10Base2, you have to run the bus via
> _all_ devices, one cable to the T-connector and another cable back,
> quickly extending past the 200 m limit.
I don't design aircraft carriers! :> 10m is more than enough to run from one end of a piece of equipment to the other -- stopping at each device along the way.

10Base2 was a win when you had lots of devices "lined up in a row" where it was intuitive to just "daisy chain" them together. E.g., imagine what a CAN bus deployment would look like if it had to adhere to a physical star topology (all those "nodes" sitting within inches of each other yet unable to take advantage of their proximity for cabling economies -- instead, having to run individual drops off to some central "hub/switch").

[As we were rolling our own hardware, no need for T's -- two BNC's on each device: upstream + downstream.]
> In the 10Base5 days, the thick cable was run the shortest way around
> the building and long AUI cables were run from each computer to the
> vampire tap transceiver sitting on the RG-8 bus cable.
But AUI cables were *long*, of necessity. You simply couldn't route (as in "bend") the coax to get everywhere the bus wanted to *be*!
> Later on, external 10Base2 transceivers with 15-pin AUI connectors could be
> placed optimally along the shortest bus path and again connect the
> device via the AUI cable to the transceiver.
>
> With the use of integrated transceivers and T-connectors, you had to
> route the Ethernet traffic back and forth, losing most of the
> benefits of a bus structure.
You could create a "spoked wheel" distribution pattern -- each spoke being a network segment. E.g., when I ran 10Base2 here, I ran a cable into each room to service just the nodes within that room. No need to "return" from the (electrically) far end of the spoke... just let the segment end, there!

In a typical office environment, you don't have the same sort of "high node density" that I have (simply because I have less space to cram everything into! :< ) So, the ability to run a cable from one device to the next device SITTING RIGHT BESIDE IT was a huge win -- instead of having to run wires from each of these to a *third* point that tied everything together.

For example, I just wired a "computer lab" where the machines sit next to each other (~4 ft apart). Almost exactly 200 ft of cable despite the fact that the two machines farthest apart are less than 15 ft as the crow flies -- and could easily have been tethered together with ~40ft of coax. <shrug>
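A tiny sketch of that arithmetic, for the curious. The machine positions, hub location and slack factor below are assumptions picked only to land in the same ballpark as the figures in the post above; they are not measurements of the actual lab.

# Illustrative comparison of daisy-chain (bus) vs. home-run (star) cabling
# for a row of machines sitting next to each other.

positions_ft = [0, 4, 8, 12, 16]   # five machines in a row, ~4 ft apart (assumed)
hub_ft = (30, 8)                   # assumed location of the central hub/jack (x, y)
slack = 2.0                        # assumed routing/service-loop multiplier

# Daisy chain: one short hop between each pair of adjacent machines.
daisy_chain = slack * sum(b - a for a, b in zip(positions_ft, positions_ft[1:]))

# Home run: every machine gets its own cable back to the hub.
home_run = slack * sum(((x - hub_ft[0]) ** 2 + hub_ft[1] ** 2) ** 0.5
                       for x in positions_ft)

print(f"daisy-chain total: ~{daisy_chain:.0f} ft")   # a few tens of feet of coax
print(f"home-run total   : ~{home_run:.0f} ft")      # a couple hundred feet of drops

With those assumptions the daisy chain comes to roughly 30 ft of cable and the home-run star to roughly 230 ft -- the same order-of-magnitude gap as the ~40 ft vs. ~200 ft quoted above.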
On Sun, 12 Jan 2014 10:11:06 -0600, Robert Wessel
<robertwessel2@yahoo.com> wrote:

>*Token-Ring was in many ways a PITA to work with, and not really all
>that reliable, but it made 10base2 look like a complete joke in terms
>of reliability.
Wiring TR was a PITA and the NICs initially were too complex to be reliable ... but that got fixed and TR's predictable timing made analyzing systems and programming reliably timed delivery - particularly across repeaters - easier even than on CAN.

FDDI rings had the same good features (and, of course, the same bad ones).

YMMV,
George
Hi George,

On 1/13/2014 2:05 PM, George Neuner wrote:
> On Sun, 12 Jan 2014 10:11:06 -0600, Robert Wessel
> <robertwessel2@yahoo.com> wrote:
>
>> *Token-Ring was in many ways a PITA to work with, and not really all
>> that reliable, but it made 10base2 look like a complete joke in terms
>> of reliability.
>
> Wiring TR was a PITA and the NICs initially were too complex to be
Connectors were expensive. But, with a centralized MAU/hub/switch, the same sort of "star topology" related issues prevail.
> reliable ... but that got fixed and TR's predictable timing made
> analyzing systems and programming reliably timed delivery -
> particularly across repeaters - easier even than on CAN.
At one time, I did an analysis that suggested even 4Mb TR would outperform 10Mb ethernet when you were concerned with temporal guarantees.

Of course, you can develop a token passing protocol atop ethernet. But, kind of defeats most of the reasons for *using* ethernet! (esp if you don't want to constrain the network size/topology ahead of time)
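A rough sketch of the kind of bound that comparison rests on. The station count, frame size and per-station latency below are illustrative assumptions, not figures from Don's analysis.

# On a token ring, the worst-case wait before a station may transmit is
# bounded: every other station holds the token at most long enough to send
# one maximum-length frame (plus per-station ring latency).

stations        = 32               # assumed ring size
tr_rate_bps     = 4_000_000        # 4 Mb/s Token Ring
max_frame_bits  = 4500 * 8         # assumed ~4.5 KB maximum frame
station_latency = 10e-6            # assumed per-station adapter/ring delay (s)

frame_time = max_frame_bits / tr_rate_bps
worst_case_wait = stations * (frame_time + station_latency)

print(f"one max frame at 4 Mb/s: {frame_time * 1e3:.1f} ms")
print(f"worst-case token wait  : {worst_case_wait * 1e3:.1f} ms for {stations} stations")

# Shared 10 Mb/s Ethernet moves a single frame ~2.5x faster, but CSMA/CD's
# random exponential backoff (and the 16-collision give-up) means there is
# no comparable hard bound to compute for a loaded segment.

With these assumptions the ring's worst-case wait is a hard ~290 ms no matter the load, which is exactly the sort of guarantee a faster-on-average shared Ethernet segment cannot give.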
> FDDI rings had the same good features (and, of course, the same bad
> ones).
Not fond of optical "switches"? :>
On Mon, 13 Jan 2014 15:14:52 -0700, Don Y <this@isnotme.com> wrote:

>Hi George,
>
>On 1/13/2014 2:05 PM, George Neuner wrote:
>> On Sun, 12 Jan 2014 10:11:06 -0600, Robert Wessel
>> <robertwessel2@yahoo.com> wrote:
>>
>>> *Token-Ring was in many ways a PITA to work with, and not really all
>>> that reliable, but it made 10base2 look like a complete joke in terms
>>> of reliability.
>>
>> Wiring TR was a PITA and the NICs initially were too complex to be
>
>Connectors were expensive. But, with a centralized MAU/hub/switch,
>the same sort of "star topology" related issues prevail.
>
>> reliable ... but that got fixed and TR's predictable timing made
>> analyzing systems and programming reliably timed delivery -
>> particularly across repeaters - easier even than on CAN.
>
>At one time, I did an analysis that suggested even 4Mb TR would
>outperform 10Mb ethernet when you were concerned with temporal
>guarantees.
Which was one of the touted features of TRN. Unfortunately for TRN, approximately zero users actually cared about that.
Hi Robert,

On 1/13/2014 4:21 PM, Robert Wessel wrote:

>>> reliable ... but that got fixed and TR's predictable timing made
>>> analyzing systems and programming reliably timed delivery -
>>> particularly across repeaters - easier even than on CAN.
>>
>> At one time, I did an analysis that suggested even 4Mb TR would
>> outperform 10Mb ethernet when you were concerned with temporal
>> guarantees.
>
> Which was one of the touted features of TRN. Unfortunately for TRN,
> approximately zero users actually cared about that.
It's too bad that "fast" has won out over "predictable" (in many things -- not just network technology).

IIRC, SMC was the only firm making TR silicon. (maybe TI had some offerings?) Not sure if they even offer any, currently.

[I think I still have some TR connectors, NICs and even a "hub" stashed... somewhere]
On Mon, 13 Jan 2014 17:35:02 -0700, Don Y <this@isnotme.com> wrote:

>Hi Robert,
>
>On 1/13/2014 4:21 PM, Robert Wessel wrote:
>
>>>> reliable ... but that got fixed and TR's predictable timing made
>>>> analyzing systems and programming reliably timed delivery -
>>>> particularly across repeaters - easier even than on CAN.
>>>
>>> At one time, I did an analysis that suggested even 4Mb TR would
>>> outperform 10Mb ethernet when you were concerned with temporal
>>> guarantees.
>>
>> Which was one of the touted features of TRN. Unfortunately for TRN,
>> approximately zero users actually cared about that.
>
>It's too bad that "fast" has won out over "predictable" (in
>many things -- not just network technology).
>
>IIRC, SMC was the only firm making TR silicon. (maybe TI had some
>offerings?) Not sure if they even offer any, currently.
>
>[I think I still have some TR connectors, NICs and even a "hub"
>stashed... somewhere]
Heck, I've still got a ring running...

I'm not sure how much presence SMC had in TRN; Thomas Conrad and Madge were the big non-IBM players. IBM, of course, had its own chipsets, and they did sell them to other vendors.