EmbeddedRelated.com
Forums

Smallest Ethernet (no TCP/IP) implementation, even 10Mbps

Started by Unknown January 22, 2013
On 2013-01-23, David Brown <david.brown@removethis.hesbynett.no> wrote:

> If you want to monitor your data, stick Wireshark on the network (as
> always with Wireshark, you need to use a switch with mirroring on a
> port, or find a good old fashioned hub rather than a switch).
>
> If you want to write PC code to communicate with your system, it's easy
> with Linux - just open a raw socket and away you go.
You've always got to run those programs as root, which gets rather old after a while.
> You can happily use languages like Python for this to do it quickly
> and simply.
I know. I was the one who added raw socket support to Python because I had to deal with the headache of troubleshooting stuff that uses raw Ethernet. :)
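[For readers following along: a minimal sketch of what working with raw Ethernet from Python looks like. The MAC addresses and interface name below are illustrative placeholders, not from the thread; 0x88B5 is the IEEE local experimental ethertype. As discussed, actually sending needs a Linux AF_PACKET socket and root or CAP_NET_RAW.]

```python
import struct

def build_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Build an Ethernet II frame: 6-byte dst MAC, 6-byte src MAC, 2-byte ethertype."""
    header = struct.pack("!6s6sH", dst_mac, src_mac, ethertype)
    # Hardware pads short frames to the 60-byte minimum (FCS excluded).
    return (header + payload).ljust(60, b"\x00")

frame = build_frame(b"\xff" * 6,                  # broadcast destination
                    b"\x02\x00\x00\x00\x00\x01",  # locally administered source (made up)
                    0x88B5,                       # IEEE local experimental ethertype
                    b"hello")
print(len(frame))  # 60

# Actually sending it is the part that needs privileges - AF_PACKET is
# Linux-only and wants root or CAP_NET_RAW, which is the complaint above:
#
#   import socket
#   s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
#   s.bind(("eth0", 0))  # "eth0" is a placeholder interface name
#   s.send(frame)
```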
> With Windows, life is more complicated - but it is still possible.
From what I've gathered it's rather painful and is difficult to make work across different Windows versions.
> Raw Ethernet is faster, more predictable and has lower overheads than IP
> - even UDP. That is why it is used for protocols like ATA over Ethernet.
It does eliminate the overhead of address resolution using ARP. The overhead due to UDP and IP is pretty small.
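[As a rough sanity check of that "pretty small" claim, the fixed per-packet header sizes can be tallied; the 100-byte payload is an arbitrary example, and preamble, FCS, VLAN tags and IP options are ignored.]

```python
ETH_HDR = 14  # dst MAC + src MAC + ethertype
IP_HDR = 20   # IPv4 header, no options
UDP_HDR = 8

payload = 100  # bytes, an arbitrary example

raw_eth = ETH_HDR + payload
udp_ip = ETH_HDR + IP_HDR + UDP_HDR + payload
extra = udp_ip - raw_eth

print(extra)                        # 28 bytes of extra UDP/IP overhead
print(round(100 * extra / udp_ip))  # 20 - about 20% for a 100-byte payload
```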
> What you can't do, of course, is use standard programs that work with
> UDP, TCP/IP, or other IP-based protocols. You can't use a DHCP server
> to organise your network, or ftp to update software, or telnet to test
> your application.
If you're using UDP/IP then DHCP is nice, and so is tftp for firmware updates. All that "nice" stuff does start to add up. OTOH, writing equivalent functionality for raw Ethernet also takes a bit of work.
> Having said that, I have no doubts that CAN is a better choice in this
> particular application (based on the info we have been given so far, of
> course).
I'd have to agree with that. It looked to me like he was going to have
to put a 3-port Ethernet switch in every one of his widgets so he could
daisy-chain them (unless he can live with a star topology with a switch
at the center). The 10base2 thinnet solution using 50 Ohm coax and BNC
"T" connectors would be cute, but good luck getting parts. Can you even
buy PHY chips and magnetics for 10base2 these days?

CAN is dead simple as long as you don't try to do something like
DeviceNet or its brethren.

--
Grant Edwards               grant.b.edwards        Yow! I'm a fuschia bowling
                                  at               ball somewhere in Brittany
                              gmail.com
On Wed, 23 Jan 2013 23:56:48 +0000 (UTC), Grant Edwards
<invalid@invalid.invalid> wrote:

>On 2013-01-23, David Brown <david.brown@removethis.hesbynett.no> wrote:
>> What you can't do, of course, is use standard programs that work with
>> UDP, TCP/IP, or other IP-based protocols. You can't use a DHCP server
>> to organise your network, or ftp to update software, or telnet to test
>> your application.
>
>If you're using UDP/IP then DHCP is nice, and so is tftp for firmware
>updates. All that "nice" stuff does start to add up. OTOH, writing
>equivalent functionality for raw Ethernet also takes a bit of work.
Or manage it (a simple SNMP server is easy), or monitor it (ICMP), or
let it set its clock accurately, or... And inevitably someone is going
to want to access the widget from across a router...

The packet headers are trivial, and the difficulty of implementing ARP
and DHCP (and possibly auto-config for the no-DHCP scenario) is darn
small given the benefits. In most cases, of course.
On Wed, 23 Jan 2013 13:22:05 +0100, David Brown
<david@westcontrol.removethisbit.com> wrote:

>On 22/01/13 18:18, Oliver Betz wrote:
>> David Brown wrote:
>>
>> [...]
>>
>>>>> km). With CAN, you only drive dominant - recessive is by termination
>>>>> resistors, and is thus much slower on long high-capacitance lines. In
>>>>
>>>> since it is a transmission line, the dominant -> recessive transition
>>>> is also fast on long lines.
>>>>
>>>> The cable capacitance is less a problem than the lumped capacitance of
>>>> transceivers etc.
>>>
>>> That's true - but the dominant to recessive transition by terminating
>>> resistor is still slower than a driven transition. It is the limiting
>>> factor in the speed vs. length trade-off.
>>
>> Do you have numbers?
>>
>> I made tests only with approx. 30m twisted pair cable and it's more
>> than one year ago, but IIRC there was no big difference in rise/fall
>> time.
>>
>> Of course, you shouldn't measure with a standard 10:1 scope probe
>> because it adds too much capacitance to the node.
>>
>> Oliver
>>
>
>No, I don't have numbers - and it's been a while since I have viewed a
>CAN bus on a scope. But I remember a distinct difference in the edge rates.
>
>For some rough numbers, high quality twisted pair (Cat 5 Ethernet) is
>about 50 pF/m, so at 30 m that is 1.5 nF. With 55 ohm termination,
>that's an RC-constant of around 80 ns. If we assume one RC time is good
>enough for a solid transition, that's still 8% of your 1 Mbit time slot.
>Add in the propagation delay on the cable, extra capacitance for the
>nodes, and greater impedance on typical CAN cables (compared to Cat 5),
>and you can see why it is this transition that is often the limiting
>factor in speed*length for a CAN bus.
>
><http://www.softing.com/home/en/industrial-automation/products/can-bus/more-can-bus/bit-timing/practical-bus-length.php?navanchor=3010538>
>
>With CAN at 1 Mbps, 30 m is the limit for a reliable bus (depending on
>the number of nodes and the cable type, of course). With RS-485, where
>both transitions are driving, 500 m should not be a problem at 1 Mbps.
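[The quoted back-of-the-envelope figures check out arithmetically. Same assumptions as the post - 50 pF/m Cat 5, 30 m, 55 ohm effective termination, 1 Mbit/s - and the same lumped-capacitance simplification, which Oliver objects to later in the thread.]

```python
C_per_m = 50e-12  # F/m, Cat 5 twisted pair (figure from the post)
length = 30.0     # m
R_term = 55.0     # ohms, effective termination used in the post

C_lumped = C_per_m * length  # 1.5 nF, treating the whole cable as a lump
tau = R_term * C_lumped      # the quoted RC constant
bit_time = 1e-6              # 1 Mbit/s

print(round(tau * 1e9, 1))          # 82.5 ns - "around 80 ns"
print(round(100 * tau / bit_time))  # 8 - i.e. ~8% of the bit slot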
RS-485 specifies 100 kbit/s at 4000 ft, thus 115k2 at 1 km should be
OK. But 1 Mbit/s at 500 m, really? Do you have a reference?

The reason for the quite limited range with CAN, such as 250 kbit/s at
250 m, is mainly the propagation delay issue. For proper arbitration,
the total propagation delay must be less than 1/10 of the bit time,
thus limiting the network size.

To create larger networks, you need to create smaller networks with
gateway stations in each network. The distance between each gateway can
be huge (even satellite links), but each gateway performs the normal
CAN arbitration within that CAN subnet, as any true Ethernet switch
would do.
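[This propagation-delay constraint is easy to turn into a length estimate. The sketch below uses the round-trip-within-one-bit-time form of the rule (the exact fraction is debated later in the thread); the 2e8 m/s cable velocity and 150 ns transceiver delay are assumed figures, not from the posts.]

```python
def max_can_length(bit_rate, v=2e8, t_xcvr=150e-9):
    """Rough maximum CAN bus length: the full round trip (cable both ways,
    plus a TX and an RX transceiver delay in each direction) must fit
    within one bit time for non-destructive arbitration to work."""
    bit_time = 1.0 / bit_rate
    # 2 * (L / v + 2 * t_xcvr) <= bit_time  =>  solve for L:
    return v * (bit_time / 2 - 2 * t_xcvr)

for rate in (1e6, 500e3, 250e3, 125e3):
    print(int(rate / 1e3), "kbit/s ->", int(max_can_length(rate)), "m")
# Gives roughly 40 m at 1 Mbit/s and ~340 m at 250 kbit/s - the same
# ballpark as the commonly published figures.
```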
On 24/01/13 00:56, Grant Edwards wrote:
> On 2013-01-23, David Brown <david.brown@removethis.hesbynett.no> wrote:
>
>> If you want to monitor your data, stick Wireshark on the network (as
>> always with Wireshark, you need to use a switch with mirroring on a
>> port, or find a good old fashioned hub rather than a switch).
>>
>> If you want to write PC code to communicate with your system, it's easy
>> with Linux - just open a raw socket and away you go.
>
> You've always got to run those programs as root, which gets rather old
> after a while.
I didn't think about that - I've only used such programs during testing, and it's not a big problem to use root on a development machine. But I agree that using root for something you run often is not nice - even with setuid or CAP_NET_RAW (if I've got that one right).
>
>> You can happily use languages like Python for this to do it quickly
>> and simply.
>
> I know. I was the one who added raw socket support to Python because I
> had to deal with the headache of troubleshooting stuff that uses raw
> Ethernet. :)
>
Well, I owe you my thanks here. It made some development work I did much easier. I haven't used raw Ethernet much in practice - I only used it as a stepping stone to getting LWIP working on an MPC microcontroller. But it was easy to set up, and easy to work with on the PC (using Python, as root...), and let me troubleshoot my MAC setup.
>> With Windows, life is more complicated - but it is still possible.
>
> From what I've gathered it's rather painful and is difficult to make
> work across different Windows versions.
>
I believe the common method is to use WinPCap - the library that Wireshark uses. But I'd hate to have to try to get something like that working on the latest "telephone" windows!
>> Raw Ethernet is faster, more predictable and has lower overheads than IP
>> - even UDP. That is why it is used for protocols like ATA over Ethernet.
>
> It does eliminate the overhead of address resolution using ARP. The
> overhead due to UDP and IP is pretty small.
>
>> What you can't do, of course, is use standard programs that work with
>> UDP, TCP/IP, or other IP-based protocols. You can't use a DHCP server
>> to organise your network, or ftp to update software, or telnet to test
>> your application.
>
> If you're using UDP/IP then DHCP is nice, and so is tftp for firmware
> updates. All that "nice" stuff does start to add up. OTOH, writing
> equivalent functionality for raw Ethernet also takes a bit of work.
>
Basically, I think that if you are communicating over Ethernet with a PC (Windows, Linux, whatever) or other standard equipment, then IP with UDP and/or TCP is definitely the right choice. But raw Ethernet is quite a realistic option for a closed network between nodes that you have full control over, and the low and predictable latencies can fit far better in many industrial applications than UDP.
>> Having said that, I have no doubts that CAN is a better choice in this
>> particular application (based on the info we have been given so far, of
>> course).
>
> I'd have to agree with that. It looked to me like he was going to
> have to put a 3-port Ethernet switch in every one of his widgets so he
> could daisy-chain them (unless he can live with a star topology with a
> switch at the center). The 10base2 thinnet solution using 50 Ohm coax
> and BNC "T" connectors would be cute, but good luck getting parts.
> Can you even buy PHY chips and magnetics for 10base2 these days?
I don't think coax Ethernet used magnetics - it certainly did not have good isolation. I remember getting buzzed when connecting coax Ethernet cables to computers on different mains circuits.
>
> CAN is dead simple as long as you don't try to do something like
> DeviceNet or its brethren.
>
Absolutely.

mvh.,

David
In article <kdptc0$4ss$1@reader1.panix.com>, invalid@invalid.invalid 
says...
> > On 2013-01-23, David Brown <david.brown@removethis.hesbynett.no> wrote:
.....
> > With Windows, life is more complicated - but it is still possible.
>
> From what I've gathered it's rather painful and is difficult to make
> work across different Windows versions.
What isn't difficult across Windows versions?

--
Paul Carpenter          | paul@pcserviceselectronics.co.uk
<http://www.pcserviceselectronics.co.uk/>        PC Services
<http://www.pcserviceselectronics.co.uk/fonts/>  Timing Diagram Font
<http://www.gnuh8.org.uk/>  GNU H8 - compiler & Renesas H8/H8S/H8 Tiny
<http://www.badweb.org.uk/>  For those web sites you hate
On 24/01/13 08:16, upsidedown@downunder.com wrote:
> On Wed, 23 Jan 2013 13:22:05 +0100, David Brown
> <david@westcontrol.removethisbit.com> wrote:
>
>> On 22/01/13 18:18, Oliver Betz wrote:
>>> David Brown wrote:
>>>
>>> [...]
>>>
>>>>>> km). With CAN, you only drive dominant - recessive is by termination
>>>>>> resistors, and is thus much slower on long high-capacitance lines. In
>>>>>
>>>>> since it is a transmission line, the dominant -> recessive transition
>>>>> is also fast on long lines.
>>>>>
>>>>> The cable capacitance is less a problem than the lumped capacitance of
>>>>> transceivers etc.
>>>>
>>>> That's true - but the dominant to recessive transition by terminating
>>>> resistor is still slower than a driven transition. It is the limiting
>>>> factor in the speed vs. length trade-off.
>>>
>>> Do you have numbers?
>>>
>>> I made tests only with approx. 30m twisted pair cable and it's more
>>> than one year ago, but IIRC there was no big difference in rise/fall
>>> time.
>>>
>>> Of course, you shouldn't measure with a standard 10:1 scope probe
>>> because it adds too much capacitance to the node.
>>>
>>> Oliver
>>>
>>
>> No, I don't have numbers - and it's been a while since I have viewed a
>> CAN bus on a scope. But I remember a distinct difference in the edge rates.
>>
>> For some rough numbers, high quality twisted pair (Cat 5 Ethernet) is
>> about 50 pF/m, so at 30 m that is 1.5 nF. With 55 ohm termination,
>> that's an RC-constant of around 80 ns. If we assume one RC time is good
>> enough for a solid transition, that's still 8% of your 1 Mbit time slot.
>> Add in the propagation delay on the cable, extra capacitance for the
>> nodes, and greater impedance on typical CAN cables (compared to Cat 5),
>> and you can see why it is this transition that is often the limiting
>> factor in speed*length for a CAN bus.
>>
>> <http://www.softing.com/home/en/industrial-automation/products/can-bus/more-can-bus/bit-timing/practical-bus-length.php?navanchor=3010538>
>>
>> With CAN at 1 Mbps, 30 m is the limit for a reliable bus (depending on
>> the number of nodes and the cable type, of course). With RS-485, where
>> both transitions are driving, 500 m should not be a problem at 1 Mbps.
>
> RS-485 specifies 100 kbit/s at 4000 ft, thus 115k2 at 1 km should
> be OK. But 1 Mbit/s at 500 m, really? Do you have a reference?
>
I ran a quick google, and found a useful note from Maxim:

<http://www.maximintegrated.com/app-notes/index.mvp/id/3884>

They show 1 Mbps at 500 m for a "conventional" RS-485 driver, and double
that for more powerful drivers with pre-emphasis.

Remember, the standards say one thing - but the chip manufacturers go
above and beyond that in search of a sale. I doubt the RS-485 standard
says anything about the 52 Mbps rates supported by current high-speed
drivers.
> The reason for the quite limited range with CAN such as 250 kbit/s at
> 250 m is mainly due to the propagation delay issue.
>
> For proper arbitration, the total propagation delay must be less than
> 1/10 bit time, thus limiting the network size.
My understanding was that you had closer to 1/2 bit time for
stabilisation - the bus needs to be stable by the time you get to the
bit sample point. But certainly the propagation delay is a key factor
here, whatever the fraction of the bit time you need.

I don't really have the numbers to argue more about whether it is the
dominant-to-recessive transition or the propagation delay that is the
final limiting factor, or how much that depends on the bus load, cable
type, etc. When you have many nodes, I expect that the termination will
be a bigger factor (due to the higher capacitance), while on longer
lines or special cables (such as fibre), the propagation delay will be
the biggest factor.

It is sometimes possible to cheat - I have run CAN buses longer and
faster than theoretically possible, but only in very specific
situations: there were only two nodes, one at each end of the bus, and
arbitration never occurred because the two nodes never wanted to send
at the same time.
>
> To create larger networks, you need to create smaller networks with
> gateway stations in each network. The distance between each gateway
> can be huge (even satellite links), but each gateway performs the
> normal CAN arbitration within that CAN subnet, as any true Ethernet
> switch would do.
>
On Jan 24, 10:27 am, David Brown <da...@westcontrol.removethisbit.com>
wrote:
> ...
> I don't think coax Ethernet used magnetics
> ...
Oh, it certainly did; those I used came in a DIP-14 (16?). The PHY was
behind the transformer, so it also took a DC-DC converter to power the
DP8392 (-9V).

Dimiter
In comp.arch.embedded,
David Brown <david@westcontrol.removethisbit.com> wrote:
> On 24/01/13 00:56, Grant Edwards wrote:
>>
>> I'd have to agree with that. It looked to me like he was going to
>> have to put a 3-port Ethernet switch in every one of his widgets so he
>> could daisy-chain them (unless he can live with a star topology with a
>> switch at the center). The 10base2 thinnet solution using 50 Ohm coax
>> and BNC "T" connectors would be cute, but good luck getting parts.
>> Can you even buy PHY chips and magnetics for 10base2 these days?
>
> I don't think coax Ethernet used magnetics - it certainly did not have
> good isolation. I remember getting buzzed when connecting coax Ethernet
> cables to computers on different mains circuits.
10base2 does use magnetics and is isolated. One problem with that is
that the entire network cable is floating and can get statically
charged, and that can zap you. I don't recall if there were cards with
bleed resistors, but each card had some over-voltage protection device
across the magnetics, somewhere in the region of 1500V. And if I think
about it, I seem to remember termination resistors with a little metal
chain to ground the cable at one end. But that memory is a bit vague. ;-)

That chain could explain your experience: if the cable was grounded
somewhere remote and you touched the cable BNC and the PC case, you
could see a potential difference. You would not notice this if you
touched only the BNC connector on the PC, as that was isolated.

The main cause of voltage on a PC case, in my experience, is ungrounded
PCs, not differences between mains circuits (those would have to be
quite substantial for you to feel them). The case of an ungrounded PC
sits at half the mains voltage due to the capacitive voltage divider in
the mains filter.

--
Stef    (remove caps, dashes and .invalid from e-mail address to reply by mail)

Populus vult decipi.  [The people like to be deceived.]
David Brown wrote:

[...]

>>> That's true - but the dominant to recessive transition by terminating
>>> resistor is still slower than a driven transition. It is the limiting
>>> factor in the speed vs. length trade-off.
>>
>> Do you have numbers?
[...]
>No, I don't have numbers - and it's been a while since I have viewed a
>CAN bus on a scope. But I remember a distinct difference in the edge rates.
>
>For some rough numbers, high quality twisted pair (Cat 5 Ethernet) is
>about 50 pF/m, so at 30 m that is 1.5 nF. With 55 ohm termination,
>that's an RC-constant of around 80 ns. If we assume one RC time is good
Again: it's a transmission line, so the RC time constant doesn't matter!

http://en.wikipedia.org/wiki/Transmission_line

A wave travels along the line; we don't calculate with lumped
capacitance or inductance.

BTW: the termination is likely not 55 Ohms - a typical Ethernet TP cable
has 100 Ohms impedance, CAN cables rather 120 Ohms. 50 pF/m and 100 Ohms
Z result in 500 nH/m inductance and a propagation velocity of 2E8 m/s
(0.67c).

The driver should see a resistive 50 Ohms load (two directions of 100
Ohms in parallel), no capacitance, no inductance, as long as the cable
is terminated with its characteristic impedance. Losses in the
dielectric and resistive losses distort and damp the signal, but that's
a much smaller effect than the cable capacitance times the termination
resistance.

[...]
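[Those transmission-line figures are mutually consistent and easy to verify from Z0 = sqrt(L'/C') and v = 1/sqrt(L'C'):]

```python
import math

C_per_m = 50e-12  # F/m  (from the post)
L_per_m = 500e-9  # H/m  (from the post)

Z0 = math.sqrt(L_per_m / C_per_m)       # characteristic impedance
v = 1.0 / math.sqrt(L_per_m * C_per_m)  # propagation velocity

print(round(Z0))  # 100 ohms
print(v / 3e8)    # ~0.67 - i.e. v is about 2e8 m/s, or 0.67c
```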
>With CAN at 1 Mbps, 30 m is the limit for a reliable bus (depending on
>the number of nodes and the cable type, of course). With RS-485, where
That's a CAN-specific issue - the round trip time including transceiver (and maybe isolation circuitry) delay must be shorter than a bit time to enable the non-destructive bitwise arbitration.
>both transitions are driving, 500 m should not be a problem at 1 Mbps.
UART signalling over CAN transceivers can also run at 1 Mbit/s for much
more than 30 m.

Oliver
--
Oliver Betz, Munich
despammed.com is broken, use Reply-To:
On Thu, 24 Jan 2013 11:43:16 +0100, Stef
<stef33d@yahooI-N-V-A-L-I-D.com.invalid> wrote:

>In comp.arch.embedded,
>David Brown <david@westcontrol.removethisbit.com> wrote:
>> On 24/01/13 00:56, Grant Edwards wrote:
>>>
>>> I'd have to agree with that. It looked to me like he was going to
>>> have to put a 3-port Ethernet switch in every one of his widgets so he
>>> could daisy-chain them (unless he can live with a star topology with a
>>> switch at the center). The 10base2 thinnet solution using 50 Ohm coax
>>> and BNC "T" connectors would be cute, but good luck getting parts.
>>> Can you even buy PHY chips and magnetics for 10base2 these days?
>>
>> I don't think coax Ethernet used magnetics - it certainly did not have
>> good isolation. I remember getting buzzed when connecting coax Ethernet
>> cables to computers on different mains circuits.
>
>10base2 does use magnetics and is isolated. One problem with that is that
>the entire network cable is floating and can get statically charged, and
>that can zap you. I don't recall if there were cards with bleed resistors,
>but each card had some over-voltage protection device across the magnetics,
>somewhere in the region of 1500V.
>And if I think about it, I seem to remember termination resistors with a
>little metal chain to ground the cable at one end. But that memory is a
>bit vague. ;-)
It did, but in my experience it was almost never used correctly. Most
commonly, neither end got grounded; second place went to the folks who
grounded *both* ends. It was rare to see a correctly grounded 10base2
segment.

And if you had a properly grounded segment, it would usually get "fixed"
in fairly short order, as the ground connection was invariably in
someone's way... To this day I'm still surprised that Ethernet survived
the 10base2 debacle.