EmbeddedRelated.com
Forums

RS485 CSMA/CD protocol

Started by therion September 22, 2005
> If you used Ethernet in a hard RT system and
> did not design it properly, that's your fault, not
> the protocol's.....
Ethernet is used in hard RT vital safety-critical systems all the time.
It's a matter of system design to make sure that collisions or a
network failure don't result in unsafe conditions. Of course, if a
Token advocate designed a CSMA/CD system you would end up with a
disaster, because they would take the simplicity of collision
detection/backoff algorithms that work and layer on all sorts of crap
to guarantee timeslots etc. and end up with a system that doesn't work.

In fact I would be very critical of a safety-critical system where a
network failure (here I'm lumping "excessive collisions" in with
"network failure", which isn't too far off for Ethernet, but of course
the Token advocates will jump all over me for this) results in unsafe
conditions.

Of course, I work in an industry where we take pride in our vital
relays and vital processors having to meet more stringent failsafe
standards than the detonation systems in thermonuclear weapons :-).
But unlike, say, aerospace or road vehicle systems, my industry has
the advantage that setting all signals to stop, dropping speed
commands, and applying full-service brakes is the failsafe condition.
It's hard to claim that shutting down an airplane's engines or
applying full braking on a road vehicle is a failsafe condition...!

Tim.
"Tim Shoppa" <shoppa@trailing-edge.com> writes:
> The token ring advocates always pointed out how certain circumstances
> could lead to collision detection not working nicely. They were also
> extremely uncomfortable with the lack of guaranteed bandwidth for each
> master etc.
>
> But the token ring vs collision detection wars for general-purpose
> networking were fought and token ring lost. In real life the concerns
> that the token ring advocates had about collisions just don't happen,
> even on highly saturated ethernets.
the big transition for ethernet was adding listen before transmit (and
adapting the t/r cat5 hub/spoke wiring). my vague recollection was that
early ethernet was 3mbit/sec, didn't do listen before transmit, and had
these big thick cables ... looked a lot like the pcnet cables (which i
believe did 1mbit/sec but used a tv head-end type implementation).

when we had come up with 3-tier architecture and were out pitching it
in executive presentations
http://www.garlic.com/~lynn/subtopic.html#3tier
we were getting a lot of push-back from the saa and token-ring folks.
some characterized the saa effort as trying to put the client/server
genie back into the bottle ... aka maintain the terminal emulation
operation
http://www.garlic.com/~lynn/subnetwork.html#emulation
and since we were pitching enet ... the token-ring people were also
getting really upset.

some t/r person from the dallas engineering & science center had done
a report that showed enet typically only got 1mbit/sec thruput (we
conjectured that they based the numbers on the old 3mbit/sec
implementation before listen before transmit).

research had done a new bldg. up on the hill ... and it was completely
wired with cat5, supposedly for t/r ... but they found that they were
getting higher thruput and lower latency using it for star-wired
10mbit ethernet (even compared to 16mbit t/r). adapting the t/r
hub&spoke cat5 configurations to ethernet tended to reduce the worst
case latency on listen before transmit. this improved further by
making the hub active ... so the worst case was the longest leg to the
hub rather than the latency across the hub between the two longest
legs.

then a paper came out in 88 acm sigcomm showing that a typical 10mbit
ethernet star-wired hub configuration, with all stations doing a worst
case, low-level device driver loop transmitting minimum sized packets,
was getting aggregate effective thruput of 85 percent of media
capacity.

misc.
past refs:
http://www.garlic.com/~lynn/2000b.html#11 "Mainframe" Usage
http://www.garlic.com/~lynn/2000f.html#38 Ethernet efficiency (was Re: Ms employees begging for food)
http://www.garlic.com/~lynn/2000f.html#39 Ethernet efficiency (was Re: Ms employees begging for food)
http://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
http://www.garlic.com/~lynn/2002.html#38 Buffer overflow
http://www.garlic.com/~lynn/2002b.html#4 Microcode? (& index searching)
http://www.garlic.com/~lynn/2002b.html#9 Microcode? (& index searching)
http://www.garlic.com/~lynn/2002q.html#40 ibm time machine in new york times?
http://www.garlic.com/~lynn/2002q.html#41 ibm time machine in new york times?
http://www.garlic.com/~lynn/2003g.html#54 Rewrite TCP/IP
http://www.garlic.com/~lynn/2003j.html#46 Fast TCP
http://www.garlic.com/~lynn/2003k.html#57 Window field in TCP header goes small
http://www.garlic.com/~lynn/2003p.html#13 packetloss bad for sliding window protocol ?
http://www.garlic.com/~lynn/2004e.html#13 were dumb terminals actually so dumb???
http://www.garlic.com/~lynn/2004e.html#17 were dumb terminals actually so dumb???
http://www.garlic.com/~lynn/2004k.html#8 FAST TCP makes dialup faster than broadband?
http://www.garlic.com/~lynn/2004p.html#55 IBM 3614 and 3624 ATM's
http://www.garlic.com/~lynn/2005g.html#4 Successful remote AES key extraction
http://www.garlic.com/~lynn/2005h.html#12 practical applications for synchronous and asynchronous communication
http://www.garlic.com/~lynn/2005i.html#43 Development as Configuration

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
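For reference, the collision/backoff algorithm the thread keeps arguing over is a very small piece of code. A minimal sketch of the classic truncated binary exponential backoff used by shared-media CSMA/CD Ethernet (my illustration of the textbook scheme, not code from any post):

```c
#include <assert.h>
#include <stdlib.h>

/* Truncated binary exponential backoff, as in classic shared-media
 * CSMA/CD Ethernet: after the nth collision on a frame, wait a
 * uniformly random number of slot times in [0, 2^min(n,10) - 1];
 * the frame is abandoned after 16 collisions. */
int backoff_slots(int collisions) {
    assert(collisions >= 0);
    if (collisions > 16)
        return -1;                              /* give up on the frame */
    int e = collisions < 10 ? collisions : 10;  /* window truncated at 2^10 */
    return rand() % (1 << e);                   /* slots to defer */
}
```

Each slot is 512 bit times (51.2 microseconds at 10mbit), which is why even a heavily collided frame usually gets out quickly in practice.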
On 2005-09-23, Rene Tschaggelar <none@none.net> wrote:

>>> IMO, this collision protocol was one of the biggest
>>> mistakes in history of IT.
You _must_ be trolling...
>> Why? It works brilliantly for Ethernet. In my experience,
>> token passing is horribly complex.
>
> No it doesn't work brilliantly. It is crap. It doesn't have a
> deterministic response time.
So what? The vast, vast majority of applications don't _need_ a
deterministic response time. What is needed is simple, cheap, and high
peak transfer rates. If you're doing some sort of hard-realtime stuff,
then use an appropriate protocol. Use CAN or ARCnet or whatever. The
choice of inappropriate tools isn't the tool's fault.
> The token protocol such as in Arcnet is much better. Each node
> gets a slot time every 150ms or so. At the time the two
> battled, Ethernet was 10MBit and ARCNet was 2.5MBit. But
> under load Arcnet performed much better. While Arcnet stayed
> at somewhat below 2.5MBit over the bus, independent of
> the number of nodes and traffic, Ethernet went right down
> to zero with increasing load.
For the past 15 years, Ethernet has worked great for everything I've ever used it for. What more can a guy ask?
> But the marketing guys just saw the 10MBit vs 2.5MBit.
> Too bad. Ethernet improved the bandwidth but the response time
> is still not deterministic, unless all nodes are connected
> to a switch. The switch avoids collisions, of course.
Besides, everybody uses switches these days. Ethernet is, for all
practical purposes, a point-to-point protocol used between two
endpoints.

--
Grant Edwards                          grante at visi.com
Yow! Somewhere in DOWNTOWN BURBANK a prostitute is OVERCOOKING a LAMB CHOP!!
On Fri, 23 Sep 2005 17:06:18 +0200, Rene Tschaggelar <none@none.net>
wrote:

> Grant Edwards wrote:
>
>> On 2005-09-23, Rene Tschaggelar <none@none.net> wrote:
>>
>>> therion wrote:
>>>
>>>> Hi, I would need to implement a CSMA/CD L1 protocol code for RS485
>>>> in my new AVR project for home automation.
>>>> Any link, hint or help for similar source code?
>>>
>>> This collision protocol only makes sense with multiple
>>> masters, not in a master-slave setup. And if you have
>>> multiple masters, then better implement a token protocol.
>>> IMO, this collision protocol was one of the biggest
>>> mistakes in history of IT.
>>
>> Why? It works brilliantly for Ethernet. In my experience,
>> token passing is horribly complex.
>
> No it doesn't work brilliantly. It is crap. It doesn't have a
> deterministic response time. The token protocol such as in
> Arcnet is much better. Each node gets a slot time every 150ms
> or so.
> At the time the two battled, Ethernet was 10MBit and
> ARCNet was 2.5MBit. But under load Arcnet performed much
> better. While Arcnet stayed at somewhat below 2.5MBit
> over the bus, independent of the number of nodes and traffic,
> Ethernet went right down to zero with increasing load.
>
> But the marketing guys just saw the 10MBit vs 2.5MBit.
> Too bad. Ethernet improved the bandwidth but the response time
> is still not deterministic, unless all nodes are connected
> to a switch. The switch avoids collisions, of course.
A protocol that can be used with RS485 buffers at high bit rates is
SDLC (HDLC is a superset of it). It also supports a token-like loop
mode: each node transmits, one bit time later, what it receives.

Regards
Anton Erasmus
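Anton's one-bit repeat rule can be modelled directly. A toy simulation (my own sketch; real SDLC loop mode carries framing and a go-ahead sequence on top of this) showing that a bit injected at node 0 reaches the far end of the loop after one bit time per intervening repeater:

```c
#include <assert.h>
#include <string.h>

/* Each node in the loop re-drives, one bit time later, the bit it
 * received from its upstream neighbour (SDLC loop-mode style).
 * Returns how many bit times a bit injected at node 0 needs to reach
 * the last node: one per intervening repeater. */
int loop_latency(int nodes) {
    assert(nodes >= 2 && nodes <= 64);
    int out[64];                     /* bit each node is currently driving */
    memset(out, 0, sizeof out);
    out[0] = 1;                      /* node 0 injects a 1 bit */
    int tick = 0;
    while (!out[nodes - 1]) {
        ++tick;
        for (int i = nodes - 1; i > 0; --i)
            out[i] = out[i - 1];     /* repeat upstream bit, 1 tick late */
        out[0] = 0;                  /* node 0 goes idle again */
    }
    return tick;                     /* nodes - 1 bit times */
}
```

So on an 8-node loop the worst-case repeater delay is 7 bit times, which at high bit rates is negligible compared to frame transmission time.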
On Fri, 23 Sep 2005 17:16:42 +0200, Rene Tschaggelar <none@none.net>
wrote:

> It is less that the tokenring advocates were uncomfortable.
> A realtime system requires a defined response time that suits
> the physical installation this system should control.
When well-defined response times are needed, I would not mess with any
tokens but use a traditional fixed single-master multiple-slave system.
> A car control system requires response times in the millisecond
> region and you wouldn't want your car's cpu to retry some bullshit
> while you want the car to stop. Realtime response has nothing
> to do with being comfortable, it has to do with lives.
For applications in which a few high-priority emergency messages mix
with ordinary messages, I would use CAN and not mess with tokens,
which would require lost-token handling etc.

Paul
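The reason CAN suits Paul's mix of emergency and ordinary traffic is its non-destructive bitwise arbitration, where the numerically lowest message ID always wins the bus. A simulation sketch (my illustration; real controllers do this in hardware on a wired-AND bus):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch of CAN's non-destructive bitwise arbitration: IDs go out
 * MSB-first; the wired-AND bus makes 0 (dominant) beat 1 (recessive),
 * and a node that sends recessive but sees dominant stops competing.
 * With distinct IDs, the node with the numerically lowest ID survives
 * and its frame goes through undamaged. */
uint16_t arbitrate(const uint16_t *ids, size_t n) {
    assert(n >= 1 && n <= 32);
    uint8_t alive[32];
    for (size_t i = 0; i < n; i++) alive[i] = 1;
    for (int bit = 10; bit >= 0; bit--) {       /* 11-bit base frame IDs */
        int bus = 1;                            /* recessive unless pulled */
        for (size_t i = 0; i < n; i++)
            if (alive[i] && !((ids[i] >> bit) & 1))
                bus = 0;                        /* someone drove dominant */
        for (size_t i = 0; i < n; i++)
            if (alive[i] && ((ids[i] >> bit) & 1) && bus == 0)
                alive[i] = 0;                   /* lost arbitration */
    }
    for (size_t i = 0; i < n; i++)
        if (alive[i]) return ids[i];
    return 0xFFFF;                              /* unreachable for n >= 1 */
}
```

Because the winning frame is never destroyed, the worst-case latency for the top-priority message is bounded by one maximum-length frame already on the wire, with no retries and no token recovery.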
ref:
http://www.garlic.com/~lynn/2005q.html#17 Ethernet, Aloha and CSMA/CD
http://www.garlic.com/~lynn/2005q.html#18 Ethernet, Aloha and CSMA/CD

the whole saa & terminal emulation forever
http://www.garlic.com/~lynn/subnetwork.html#emulation

overflowed into a number of areas.

romp/pcrt
http://www.garlic.com/~lynn/subtopic.html#801

had done a custom 16bit 4mbit/sec t/r card ... and then the
group was mandated to use the PS2 microchannel
16mbit/sec t/r card for RIOS/6000.

the problem was that the PS2 card had the SAA and terminal emulation
design point, where configurations had 300 PCs per t/r lan; bridged,
sharing a common theoretical 16mbit (but actually much less), no
routers, no gateways, etc. SNA didn't have a network layer ... just a
table of physical mac addresses ... modulo when APPN was introduced.
We used to kid the person responsible for APPN that he should stop
wasting his time trying to further kludge up SNA (the SNA group had
non-concurred with even announcing APPN, there was a several week
escalation process and the APPN announcement letter was rewritten to
not imply any relationship between APPN and SNA).

In any case, the pc/rt & rios market segment was supposedly
high-performance workstations, client/server, and distributed
environment. The custom pcrt 16bit 4mbit/sec t/r card actually had
higher per card thruput than the PS2 32bit 16mbit/sec t/r card (again,
the saa terminal emulation paradigm).

the pcrt/rios market segment required high per card thruput for high
performance workstations and servers (in client/server environments
where traffic is quite asymmetrical).

in this period, a new generation of hub/spoke enet cards were
appearing (with new generation of enet controller chips like the 16bit
amd lance), where each card was capable of sustaining full 10mbit (aka
a server could transmit 10mbit/secs serving a client base having
aggregate 10mbit/sec requirements).

by comparison, the microchannel 16mbit t/r environment actually had
lower aggregate thruput and longer latencies ... AND the available
cards had per card thruput designed to meet the terminal emulation
market requirements (and one could say that the lack of high thruput
per card also inhibited the evolution of client/server ... as well as
the 3-tier middle-layer/middleware paradigm that we were out pushing).

my wife had co-authored and presented a response to a gov. request for
high integrity and operational campus-like distributed environment ...
in which she had originally formulated a lot of the 3-tier principles.
http://www.garlic.com/~lynn/subtopic.html#3tier

we then expanded on the concepts and were making 3-tier and "middle
layer" presentations at customer executive seminars ... heavly laced
with high-performance routers aggregating large number of enet
segments. instead of having 300 machines bridged, sharing single
16mbit t/r, you had 300 "clients" spread across ten or more enet
segments ...  with servers having dedicated connectivity to the
high-speed routers. other components then were used to stage and
complete the 3-tier architecture. a couple of past posts in answer
to question on the origins of middleware
http://www.garlic.com/~lynn/96.html#16 middle layer
http://www.garlic.com/~lynn/96.html#17 middle layer

this also contributed to the work that we did coming up with
the ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp

-- 
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Anne & Lynn Wheeler <lynn@garlic.com> writes:
> in this period, a new generation of hub/spoke enet cards were
> appearing (with new generation of enet controller chips like the
> 16bit amd lance), where each card was capable of sustaining full
> 10mbit (aka a server could transmit 10mbit/secs serving a client
> base having aggregate 10mbit/sec requirements).
... and the street price of the new generation of 16bit enet cards
capable of sustaining 10mbit/sec/card was heading towards $49 ...
while the ps2 microchannel 16mbit t/r cards (where you were lucky to
get much more than 1mbit/sec/card, aka the per card sustained thruput
was less than the pc/rt 16bit 4mbit/sec t/r card) were holding in at
over $900.

--
Anne & Lynn Wheeler | http://www.garlic.com/~lynn/
Rene Tschaggelar wrote:
> It is less that the tokenring advocates were uncomfortable.
> A realtime system requires a defined response time that suits
> the physical installation this system should control.
> A car control system requires response times in the millisecond
> region and you wouldn't want your car's cpu to retry some bullshit
> while you want the car to stop. Realtime response has nothing
> to do with being comfortable, it has to do with lives.
Oops, sorry officer. I was tuning my radio and I guess I caused the brakes to stop working for a while...
therion wrote:
> Hi, I would need to implement a CSMA/CD L1 protocol code for RS485
> in my new AVR project for home automation.
> Any link, hint or help for similar source code?
I have implemented a multi-master packet oriented framework over RS485
using HDLC-style framing and byte stuffing, checksums and positive
acknowledgement.

It didn't use receive monitoring during transmit; instead it waited
for the line to become idle (inactivity in the receiver) for a short
sustained period, then transmitted into the wilderness and used
timers, random backoffs and retries upon non-receipt of an ACK.
Collisions were actually quite rare, and when they did happen the
checksums and ACKs took care of them.

I doubt it would win any awards, but it was trivial to implement and
worked well enough for the environment it was in.

Regards,
Paul.
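The HDLC-style byte stuffing Paul mentions is the fiddliest part of such a framework. A minimal encode/decode pair as a sketch; the 0x7E/0x7D/0x20 values are the usual HDLC constants, assumed here since the post doesn't give its actual framing bytes:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define FLAG    0x7E   /* frame delimiter */
#define ESC     0x7D   /* escape byte */
#define ESC_XOR 0x20   /* escaped bytes are XORed with this */

/* Encode payload into out[] with flag delimiters and byte stuffing;
 * returns the encoded length. out must hold the worst case,
 * 2*len + 2 bytes. */
size_t stuff(const uint8_t *in, size_t len, uint8_t *out) {
    assert(in != NULL && out != NULL);
    size_t n = 0;
    out[n++] = FLAG;
    for (size_t i = 0; i < len; i++) {
        if (in[i] == FLAG || in[i] == ESC) {
            out[n++] = ESC;                 /* escape reserved byte ... */
            out[n++] = in[i] ^ ESC_XOR;     /* ... and restore-on-decode */
        } else {
            out[n++] = in[i];
        }
    }
    out[n++] = FLAG;
    return n;
}

/* Decode one well-formed flag-delimited frame back into out[];
 * returns the payload length. */
size_t unstuff(const uint8_t *in, size_t len, uint8_t *out) {
    assert(in != NULL && out != NULL);
    size_t n = 0;
    for (size_t i = 1; i + 1 < len; i++) {  /* skip the two FLAG bytes */
        if (in[i] == ESC)
            out[n++] = in[++i] ^ ESC_XOR;   /* undo the escape */
        else
            out[n++] = in[i];
    }
    return n;
}
```

In a real link layer a checksum (and Paul's ACK/backoff machinery) would sit on top of this; stuffing only guarantees that FLAG never appears inside a frame, so an idle-line or frame-boundary detector can't be fooled by payload bytes.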
Grant Edwards wrote:
> On 2005-09-23, Rene Tschaggelar <none@none.net> wrote:
>>>> IMO, this collision protocol was one of the biggest
>>>> mistakes in history of IT.
>
> You _must_ be trolling...
Sure. Always.
>>> Why? It works brilliantly for Ethernet. In my experience,
>>> token passing is horribly complex.
>>
>> No it doesn't work brilliantly. It is crap. It doesn't have a
>> deterministic response time.
>
> So what? The vast, vast majority of applications don't _need_
> a deterministic response time. What is needed is simple,
> cheap, and high peak transfer rates.
Synchronizing databases, transferring audio and video, the usual.
>> But the marketing guys just saw the 10MBit vs 2.5MBit.
>> Too bad. Ethernet improved the bandwidth but the response time
>> is still not deterministic, unless all nodes are connected
>> to a switch. The switch avoids collisions, of course.
>
> Besides, everybody uses switches these days. Ethernet is, for
> all practical purposes, a point-to-point protocol used between
> two endpoints.
Guess why the more expensive switch technology overtook the cheap
hubs? Simply because it is pretty useless without them: 100MBit
delivering close to zero with 10 simultaneous nodes. We experienced
that once. Smalltalk was writing huge files at that time. To avoid
congestion, we had a flag in the lab, and only the one with the flag
was allowed to start this particular process.

Rene

--
Ing.Buero R.Tschaggelar - http://www.ibrtses.com
& commercial newsgroups - http://www.talkto.net