Reply by Grant Edwards January 17, 2014
On 2014-01-14, Don Y <This.is@not.Me> wrote:
> On 1/14/2014 12:04 AM, upsidedown@downunder.com wrote:
>> On Mon, 13 Jan 2014 18:00:09 -0700, Don Y<this@isnotme.com> wrote:
>
>>> The problem is you have to layer a *different* protocol onto those
>>> media if you want deterministic behavior. AND, prevent any
>>> "noncompliant" traffic from using the medium at the same time.
>>>
>>> E.g., you could have "office equipment" and "process control
>>> equipment" sharing a token-passing network and *still* have
>>> guarantees for the process control subsystems. Not the case
>>> with things like ethernet (unless you create a special
>>> protocol stack for those devices and/or interpose some bit of
>>> kit that forces them to "behave" properly).
>>
>> Horrible idea of putting office Windows machines in the same network
>> as some real process control. These days firewalls are used between
>> the networks and often even special gateways in a DMZ.
>
> I don't see a reference to "Windows" anywhere in the above...
Everywhere I've ever been on four different continents, "office
equipment" means "Windows".

-- 
Grant Edwards            grant.b.edwards        Yow! I selected E5 ... but
                               at               I didn't hear "Sam the Sham
                            gmail.com           and the Pharoahs"!
Reply by Aleksandar Kuktin January 15, 2014
I was busy for a few days and unable to be present for the discussion.

To my shock, you people have produced way more content than I expected
and than I can consume right now, so it'll take me a few days to catch
up with everything. (Especially since, just like my device, I am also
resource constrained - trying to run a full time job and two
non-trivial projects at the same time (this being one of those two) is
quite taxing, to say the least.)
Reply by January 15, 2014
On Tue, 14 Jan 2014 21:59:35 -0500, George Neuner
<gneuner2@comcast.net> wrote:

>On Mon, 13 Jan 2014 22:40:40 -0600, Les Cargill
><lcargill99@comcast.com> wrote:
>
>>As you are no doubt aware, what happened there was Ethernet
>>switching and it well and truly solved the collision problem.
>>
>>Most, if not all packets live on a collision domain with exactly
>>two NICs on it - except for 802.11>x< , where just about
>>any concept smuggled from the other old standards doubtless lives in
>>the air link.
>
>The collision domain is now the destination port(s) in the switch.
>Switch buffering can impose (bounded but) arbitrary latency or drop
>packets entirely if buffer capacity is exceeded.
>
>The trend has been to place more and more memory into switches so that
>vendors can claim "no dropped packets". But that has resulted in a
>pervasive latency problem commonly known as "bufferbloat".
>http://cacm.acm.org/magazines/2012/1/144810-bufferbloat/fulltext
>
>George
For hard real time applications, the value is useless if it arrives
after the deadline. On the other hand, losing some sample values now
and then is usually not a big deal, as long as the loss is detected
(serial numbers etc.).

A lot of buffer space (either in the TCP/IP stack or in the
transmission queue in an Ethernet switch) can be quite harmful if a
message has been obsoleted while in queue. When the frame has finally
been forwarded, it is obsolete, since no one is interested in it any
more, but still it floats around the network, potentially causing
congestion in another switch along the path.

So when designing a large real-time system, you have to think about
what traffic is transported in which way, such as TCP/IP pipes vs.
MAC/UDP frames, various QoS assignments in switches etc., so that the
whole system behaves gracefully even when approaching an overload
situation.
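The deadline-vs-loss tradeoff described above can be sketched in a few
lines. This is an illustrative receiver filter, not any real protocol:
samples older than a freshness budget are discarded (a late value is
worthless to a hard real-time consumer), while serial numbers are used
only to *detect* gaps, never to request retransmission. The class and
parameter names are my own.

```python
import time

class SampleFilter:
    """Drop samples that outlived their deadline; detect (don't fix) loss."""

    def __init__(self, max_age_s):
        self.max_age_s = max_age_s
        self.last_seq = None
        self.lost = 0          # samples that were never consumed in time

    def accept(self, seq, timestamp_s, now_s=None):
        now_s = time.monotonic() if now_s is None else now_s
        if now_s - timestamp_s > self.max_age_s:
            return None        # missed its deadline: useless, discard
        if self.last_seq is not None:
            if seq <= self.last_seq:
                return None    # duplicate or reordered: already obsolete
            self.lost += seq - self.last_seq - 1   # record the gap
        self.last_seq = seq
        return seq

f = SampleFilter(max_age_s=0.010)                          # 10 ms budget
assert f.accept(1, timestamp_s=0.0, now_s=0.005) == 1      # fresh: consumed
assert f.accept(2, timestamp_s=0.0, now_s=0.020) is None   # stale: dropped
assert f.accept(5, timestamp_s=0.030, now_s=0.032) == 5    # fresh again
assert f.lost == 3     # samples 2..4 never reached the consumer in time
```

Note that the stale sample still counts as "lost": it arrived, but after
its deadline, which for this purpose is the same thing.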
Reply by George Neuner January 14, 2014
Hi Don,

On Mon, 13 Jan 2014 15:14:52 -0700, Don Y <this@isnotme.com> wrote:

>> FDDI rings had the same good features (and, of course, the same bad
>> ones).
>
>Not fond of optical "switches"? :>
CDDI ran over copper wire 8-) At up to 200Mbps - until GbEthernet came
along, it was the fastest (standard) copper in town.

George
Reply by George Neuner January 14, 2014
On Mon, 13 Jan 2014 22:40:40 -0600, Les Cargill
<lcargill99@comcast.com> wrote:

>As you are no doubt aware, what happened there was Ethernet
>switching and it well and truly solved the collision problem.
>
>Most, if not all packets live on a collision domain with exactly
>two NICs on it - except for 802.11>x< , where just about
>any concept smuggled from the other old standards doubtless lives in
>the air link.
The collision domain is now the destination port(s) in the switch.
Switch buffering can impose (bounded but) arbitrary latency or drop
packets entirely if buffer capacity is exceeded.

The trend has been to place more and more memory into switches so that
vendors can claim "no dropped packets". But that has resulted in a
pervasive latency problem commonly known as "bufferbloat".
http://cacm.acm.org/magazines/2012/1/144810-bufferbloat/fulltext

George
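George's point - deep buffers trade drops for latency - can be shown
with a toy FIFO model of one switch output port under overload. All the
numbers here are made up for illustration; no real switch is this
simple:

```python
def run_switch(buffer_pkts, arrivals_per_tick, served_per_tick, ticks):
    """Toy model: packets arrive at an output port faster than the port
    can send them. A deep buffer avoids drops but lets the standing
    queue (and hence queueing delay) grow; a shallow buffer drops
    packets but keeps the worst-case delay bounded."""
    queue = 0
    dropped = 0
    worst_delay_ticks = 0.0
    for _ in range(ticks):
        for _ in range(arrivals_per_tick):
            if queue < buffer_pkts:
                queue += 1          # packet queued for the output port
            else:
                dropped += 1        # buffer full: tail drop
        queue = max(0, queue - served_per_tick)
        # delay a packet arriving now would see behind the standing queue
        worst_delay_ticks = max(worst_delay_ticks, queue / served_per_tick)
    return dropped, worst_delay_ticks

# Offer 3 packets/tick into a port that can send only 2/tick:
deep = run_switch(10_000, 3, 2, 1_000)   # "no dropped packets", huge delay
shallow = run_switch(16, 3, 2, 1_000)    # some drops, bounded delay
assert deep[0] == 0 and shallow[0] > 0
assert deep[1] > shallow[1]
```

The deep-buffer run is exactly the "bufferbloat" pathology from the CACM
article: the vendor's "no dropped packets" claim holds, but every packet
pays for it in queueing delay.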
Reply by Robert Wessel January 14, 2014
On Tue, 14 Jan 2014 17:47:59 -0600, Les Cargill
<lcargill99@comcast.com> wrote:

>Robert Wessel wrote:
>> On Tue, 14 Jan 2014 13:04:24 -0600, Les Cargill
>> <lcargill99@comcast.com> wrote:
>>
>>> Don Y wrote:
>>>> By contrast, token passing networks gave assigned timeslots.
>>>
>>> NO, they do not. TDM has timeslots; token
>>> passing { ARCnet, Token Ring } work differently
>>> for different switch topologies. Classic coax*
>>> Token Ring is a ring, and each NIC forwards on behalf
>>> of its neighbor unless the destination address is
>>> the NIC's address.
>>
>>
>> Just to pick a nit... "Token-Ring" is both a general name for a
>> networking scheme, and a particular networking technology, popularized
>> by IBM and standardized as 802.5.
>>
>
>Yep.
>
>> In the case of the latter, there never was any coax support for TRN,
>> although other token passing systems did support coax. The old thick
>> cables were *shielded* twisted pair, but definitely not coax.
>
>I believe that there was 802.5 over coax in some form. We
>used to have to connect it for a regression test
>before each release.
>
>Although this:
>http://interfacecom.blogspot.com/2011/08/network-interface-controller.html
>
>"Madge 4 / 16 Mbit / s Token Ring ISA-16 NIC"
>
>Hopefully, that NIC is not an Arcnet NIC masquerading as Token
>Ring :)
The top (long) card is a Madge TRN adapter (which is where the poorly
placed caption belongs); the half-high/short card below that is a
generic Ethernet NIC with a thinnet and 10baseT connection, probably an
NE2000 clone of some sort. I can't quite read the back of the chip,
which might offer more specifics.

https://en.wikipedia.org/wiki/File:EISA_TokenRing_NIC.JPG
https://en.wikipedia.org/wiki/File:Network_card.jpg
>> IBM
>> made provisions for using the "IBM Cabling System" (mainly all the STP
>> from wall ports to the patch panels) to transport the "Category A" (aka
>> coax) 3270 terminal connections over STP (you needed appropriate
>> baluns*) and twinax (5250) stuff**, as well as a bunch of other things
>> (blue for loops, LSCs for 8100 MCL loops, adapters for WE-404
>> connectors for the store systems, adapters for async/serial devices).
>> It was quite a menagerie.
>>
>
>Sounds like it! I never dealt with actual IBM stuff; I worked
>for people who competed with them.
>
>> IBM did offer a networking option for 3174 (3270 terminal
>> controllers), that allowed you to use a 3270 coax card in a PC as a
>> network adapter - the 3174 bridged that onto the TRN it was connected
>> to, but that was pretty removed from actual TRN.
>>
>>
>> *These were the "red" baluns (you could do "Category B" connections
>> too, those needed the yellow baluns). The standard baluns were
>> integrated into adapter cables (data connector on one end, balun in
>> the middle - with the color code - and the coax connector on the
>> other).
>>
>
>Wow, that's kind of a mess! Realistically, by the time I got to dealing
>with Token Ring (mid-90s) it was largely UTP or STP into RJ45.
>
>We had some doohickey that had a BNC connector that we had to
>run the regression test with, but I can't really remember what it was.
Was it networking? Perhaps the 3174 thing I mentioned ("3174 Peer
Communications"). From the PC's perspective, it looked much like a
Token Ring card once past the device driver. IBM provided DOS and OS/2
("LAN Support") drivers that made it pretty transparent.

There was a different, and earlier, IBM networking product, PC-Net (or
"IBM PC Network" or something like that), that used coax in one of its
two forms (the broadband version), and needed a head unit to translate
between the send and receive frequencies. It was more a shared bus,
but the broadband version would be (semi) star wired. While pretty
much obsolete at that point, IBM was still supporting it.
Reply by Les Cargill January 14, 2014
Robert Wessel wrote:
> On Tue, 14 Jan 2014 13:04:24 -0600, Les Cargill
> <lcargill99@comcast.com> wrote:
>
>> Don Y wrote:
>>> By contrast, token passing networks gave assigned timeslots.
>>
>> NO, they do not. TDM has timeslots; token
>> passing { ARCnet, Token Ring } work differently
>> for different switch topologies. Classic coax*
>> Token Ring is a ring, and each NIC forwards on behalf
>> of its neighbor unless the destination address is
>> the NIC's address.
>
>
> Just to pick a nit... "Token-Ring" is both a general name for a
> networking scheme, and a particular networking technology, popularized
> by IBM and standardized as 802.5.
>
Yep.
> In the case of the latter, there never was any coax support for TRN,
> although other token passing systems did support coax. The old thick
> cables were *shielded* twisted pair, but definitely not coax.
I believe that there was 802.5 over coax in some form. We
used to have to connect it for a regression test
before each release.

Although this:
http://interfacecom.blogspot.com/2011/08/network-interface-controller.html

"Madge 4 / 16 Mbit / s Token Ring ISA-16 NIC"

Hopefully, that NIC is not an Arcnet NIC masquerading as Token
Ring :)
> IBM
> made provisions for using the "IBM Cabling System" (mainly all the STP
> from wall ports to the patch panels) to transport the "Category A" (aka
> coax) 3270 terminal connections over STP (you needed appropriate
> baluns*) and twinax (5250) stuff**, as well as a bunch of other things
> (blue for loops, LSCs for 8100 MCL loops, adapters for WE-404
> connectors for the store systems, adapters for async/serial devices).
> It was quite a menagerie.
>
Sounds like it! I never dealt with actual IBM stuff; I worked for people who competed with them.
> IBM did offer a networking option for 3174 (3270 terminal
> controllers), that allowed you to use a 3270 coax card in a PC as a
> network adapter - the 3174 bridged that onto the TRN it was connected
> to, but that was pretty removed from actual TRN.
>
>
> *These were the "red" baluns (you could do "Category B" connections
> too, those needed the yellow baluns). The standard baluns were
> integrated into adapter cables (data connector on one end, balun in
> the middle - with the color code - and the coax connector on the
> other).
>
Wow, that's kind of a mess! Realistically, by the time I got to dealing
with Token Ring (mid-90s) it was largely UTP or STP into RJ45.

We had some doohickey that had a BNC connector that we had to
run the regression test with, but I can't really remember what it was.
> **Green "twinax impedance matching device". >
-- 
Les Cargill
Reply by Les Cargill January 14, 2014
upsidedown@downunder.com wrote:
> On Tue, 14 Jan 2014 12:44:38 -0600, Les Cargill
> <lcargill99@comcast.com> wrote:
>
>> Don Y wrote:
>>> Hi Robert,
>>>
>>> On 1/13/2014 9:55 PM, Robert Wessel wrote:
>>>
>>> [attrs elided]
>>>
>>>>>> The problem is you have to layer a *different* protocol onto those
>>>>>> media if you want deterministic behavior. AND, prevent any
>>>>>> "noncompliant" traffic from using the medium at the same time.
>>>>>>
>>>>>> E.g., you could have "office equipment" and "process control
>>>>>> equipment" sharing a token-passing network and *still* have
>>>>>> guarantees for the process control subsystems. Not the case
>>>>>> with things like ethernet (unless you create a special
>>>>>> protocol stack for those devices and/or interpose some bit of
>>>>>> kit that forces them to "behave" properly).
>>>>>
>>>>> As you are no doubt aware, what happened there was Ethernet
>>>>> switching and it well and truly solved the collision problem.
>>>>>
>>>>> Most, if not all packets live on a collision domain with exactly
>>>>> two NICs on it - except for 802.11>x< , where just about
>>>>> any concept smuggled from the other old standards doubtless lives in
>>>>> the air link.
>>>>
>>>> And given that most 100Mb and faster (wired) Ethernet links are full
>>>> duplex these days, there's effectively no collision domain at all.
>>>>
>>>> OTOH, that doesn't prevent the switch from dropping packets if the
>>>> destination port is sufficiently busy.
>>>
>>> Or for the actions of one node influencing the delivery of traffic
>>> from *another* node! ("Betty in accounting is printing a lengthy
>>> report -- the CNC machines have stopped as their input buffers are
>>> now empty...")
>>
>>
>> So VLANs... and 802.1Q or other CoS/QoS ....
>
> As strange as it may sound, Ethernet is used on Airbus A350 and A380
> planes.
There's nothing strange about it.
> Of course the devices have strict throughput control
> mechanisms in the form of the AFDX protocol
> http://en.wikipedia.org/wiki/Avionics_Full-Duplex_Switched_Ethernet
>
Yep.

-- 
Les Cargill
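The AFDX throughput control mentioned above amounts to per-virtual-link
rate limiting at the sending end system. A minimal sketch of the core
idea, one frame per Bandwidth Allocation Gap (BAG); the class and
method names are my own, not from any real AFDX stack, and real end
systems also police frame size and bound per-link jitter:

```python
class VirtualLink:
    """One AFDX-style virtual link: at most one frame per BAG interval,
    so no single device can flood the shared switched network."""

    def __init__(self, bag_ms):
        self.bag_ms = bag_ms        # AFDX BAGs are powers of two, 1..128 ms
        self.next_allowed_ms = 0

    def try_send(self, now_ms):
        if now_ms < self.next_allowed_ms:
            return False            # inside the gap: hold the frame back
        self.next_allowed_ms = now_ms + self.bag_ms
        return True

vl = VirtualLink(bag_ms=8)
assert vl.try_send(0) is True       # first frame goes out
assert vl.try_send(5) is False      # 5 ms later: still inside the BAG
assert vl.try_send(8) is True       # gap elapsed: next frame allowed
```

Because every sender is shaped this way, the switches can be
provisioned so queues stay bounded - which is what makes Ethernet
acceptable on an airliner in the first place.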
Reply by Robert Wessel January 14, 2014
On Tue, 14 Jan 2014 13:04:24 -0600, Les Cargill
<lcargill99@comcast.com> wrote:

>Don Y wrote:
>> By contrast, token passing networks gave assigned timeslots.
>
>NO, they do not. TDM has timeslots; token
>passing { ARCnet, Token Ring } work differently
>for different switch topologies. Classic coax*
>Token Ring is a ring, and each NIC forwards on behalf
>of its neighbor unless the destination address is
>the NIC's address.
Just to pick a nit... "Token-Ring" is both a general name for a
networking scheme, and a particular networking technology, popularized
by IBM and standardized as 802.5.

In the case of the latter, there never was any coax support for TRN,
although other token passing systems did support coax. The old thick
cables were *shielded* twisted pair, but definitely not coax. IBM
made provisions for using the "IBM Cabling System" (mainly all the STP
from wall ports to the patch panels) to transport the "Category A" (aka
coax) 3270 terminal connections over STP (you needed appropriate
baluns*) and twinax (5250) stuff**, as well as a bunch of other things
(blue for loops, LSCs for 8100 MCL loops, adapters for WE-404
connectors for the store systems, adapters for async/serial devices).
It was quite a menagerie.

IBM did offer a networking option for 3174 (3270 terminal
controllers), that allowed you to use a 3270 coax card in a PC as a
network adapter - the 3174 bridged that onto the TRN it was connected
to, but that was pretty removed from actual TRN.

*These were the "red" baluns (you could do "Category B" connections
too, those needed the yellow baluns). The standard baluns were
integrated into adapter cables (data connector on one end, balun in
the middle - with the color code - and the coax connector on the
other).

**Green "twinax impedance matching device".
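The distinction being picked at above - token passing is bounded but
not slotted - is worth making concrete. A station has no fixed
timeslot, but its worst-case wait for the token is still computable:
you just missed the token, and every other station then uses its full
holding time before passing it on. The formula below is an
illustrative upper bound only; real 802.5/ARCnet timing adds more terms
(propagation delay, priority reservations, ring latency):

```python
def worst_case_token_wait_us(stations, token_hold_us, token_pass_us):
    """Upper bound on the wait for transmission rights on a
    token-passing ring: (N-1) full holding times plus N token passes
    for the token to come all the way back around."""
    return (stations - 1) * token_hold_us + stations * token_pass_us

# 32 stations, 1 ms token-holding time, 10 us to pass the token along:
bound = worst_case_token_wait_us(32, 1_000, 10)
assert bound == 31_320   # ~31.3 ms worst case - long, but a hard bound
```

That hard bound, however loose, is the "determinism" claim made for
token passing earlier in the thread; classic shared Ethernet offered no
such figure at all.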
Reply by January 14, 2014
On Tue, 14 Jan 2014 13:04:24 -0600, Les Cargill
<lcargill99@comcast.com> wrote:


>TDM is *a way*, but it's not *THE* way. If you
>can deal with retransmission, then Ethernet provides a lot
>more bandwidth for a lot less money.
I was once looking at using the 68360 and PPC QUICC communication
processor TSA for TDMA multiplexing a low number of bits from a large
number of nodes, but it did not materialize.

One interesting alternative using at least some Ethernet hardware is
Ethernet Powerlink
http://en.wikipedia.org/wiki/Ethernet_Powerlink
which can also efficiently transfer a few bits from each node.