EmbeddedRelated.com

Shared Communications Bus - RS-422 or RS-485

Started by Rick C November 2, 2022
Rick C <gnuarm.deletethisbit@gmail.com> writes:
> Cablestogo has 6 inch cables for $2.99 each. I'd like to keep them > a bit shorter, but that's probably not an issue.
I thought there was a minimum length for Ethernet cables because they have to have certain RF characteristics at 100 MHz or 1 GHz frequencies. I didn't realize they even came as short as 6 inches. Either way though, it shouldn't be an issue for your purposes.
Rick C <gnuarm.deletethisbit@gmail.com> wrote:
> On Friday, November 4, 2022 at 6:53:34 PM UTC-4, anti...@math.uni.wroc.pl wrote: > > Rick C <gnuarm.del...@gmail.com> wrote: > > > > > > How long is a piece of string? By keeping the interconnecting cables short, 4" or so, and a 5 foot cable from the PC, I don't expect problems with reflections. But it is prudent to allow for them anyway. The FTDI RS-422 cable seems to have a terminator on the receiver, but not the driver and no provision to add a terminator to the driver. > > It is pointless to add terminator to driver, there will be mismatch > > anyway and resistor would just waste transmit power. Mismatch > > at driver does not case trouble as long as ends are properly > > terminated. And when driver is at the near end and there are no > > other drivers, then it is enough to put termination only at the > > far end. So FTDI cable seem to be doing exactly what is needed. > > Yes, that's true for a single driver and multiple receivers. The point is that with multiple drivers, a terminator is needed at both ends of the cable. You have two ends to terminate, because drivers can be in the middle.
With a 100 Ohm line, a driver in the middle sees the two halves in parallel, so effectively 50 Ohm. Typical driver impedance is about 40 Ohm, so while mismatched, the mismatch is not too bad. Also, with multiple devices on the line there will be undesirable signals even if you have termination at both ends.

An unterminated line still has some loss, so after each reflection the reflected signal is weaker - in rough approximation, multiplied by some number a < 1 (say 0.8). After n reflections the signal has been multiplied by a^n, and for large enough n it becomes negligible. Termination at a given end with a 1% resistor means that about 2% will be reflected (due to the imperfection). This 2% is likely to be negligible.

If the transmitter is in the middle, there is still a reflection at the end opposite the termination and at the transmitter. But the mismatch at the transmitter is not bad, and the corresponding parameter a is much smaller than in the unterminated case. So termination at one end reduces the number of problematic reflections by a factor of roughly 2-4, which means you can increase the transfer rate by a similar factor. Of course, termination at both ends is better, but in the multidrop case speed will be lower than on a point-to-point link.
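To put rough numbers on this (a minimal sketch in Python; the 100 Ohm line, 40 Ohm driver, and a = 0.8 are just the figures used above):

def gamma(z_load, z0):
    # Reflection coefficient at a discontinuity: (ZL - Z0) / (ZL + Z0)
    return (z_load - z0) / (z_load + z0)

# A driver in the middle of a 100 Ohm line sees the two halves in parallel:
z_line = 100.0
print(gamma(40.0, z_line / 2))     # about -0.11: mismatched, but mild

# An unterminated end reflects almost everything, so only line loss
# (a < 1) kills the bounce; the residue after n round trips is ~a**n:
a = 0.8
print([round(a**n, 2) for n in range(1, 6)])   # 0.8, 0.64, 0.51, 0.41, 0.33

# A 1% resistor on a nominal 100 Ohm line reflects only ~0.5% by itself;
# the ~2% figure above presumably allows for cable impedance tolerance too.
print(gamma(101.0, z_line))        # ~0.005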
> You could not use FTDI RS-422 cables in the arrangement I am implementing. Every receiver would add a 120 ohm load to the line. Good thing I only need one!
Well, multiple receivers on RS-422 have limited usefulness (AFAIK your use case is called 4-wire RS-485), so it is no wonder that FTDI does not support it. Maybe they have something more expensive that does what you want.
> > > Oddly enough, the RS-485 cable has a terminator that can be connected by the user, but that would be running through the cable separately from the transceiver signals, so essentially stubbed! I guess at 1 Mbps, 5 feet is less than the rise time, so not an issue. Since the interconnections between cards will be about five feet as well, it's unlikely to be an issue. The entire network will look like a lumped load, with the propagation time on the order of the rise/fall time. Even adding in a second chassis, makes the round trip twice the typical rise/fall time and unlikely to create any issues. > > > > > > They sell cables that have 5 m of cable, with a round trip of 30 ns or so. > > Closer to 50 ns due to lower speed in cable. > > > I think that would still not be significant in this application. The driver rise/fall times are 15 ns typ, 25 ns max. > > Termination is also to kill _multiple_ reflections. In low loss line > > you can have bunch of reflection creating jitter. When jitter is > > more than 10% of bit time serial communication tends to have significant > > number of errors. At 9600 or at 100000 bits/s with short line bit > > time is long enough that jitter due to reflections in untermined > > line does not matter. Also multidrop RS-485 is far from low loss, > > each extra drop weakens signal, so reflections die faster than > > in quality point-to-point line. > > How do RS-485 drops "weaken" the signal? The load of an RS-485 device is very slight. The same result will happen with multiple receivers on RS-422.
That is a general thing, not specific to RS-485. If an RS-485 receiver puts a 24 kOhm load on the line, that is about 0.4% of the line impedance. When the signal passes the receiver, there is a corresponding power loss. There is also a second effect: the receiver creates a discontinuity, so there is a reflection. And besides the resistive part, the receiver impedance also has a reactive part, which means the discontinuity and reflection are bigger than the receiver resistance alone implies. With a lighter load the receiver's effect is smaller, but there is still a fraction of a percent lost or reflected. A single loss is "very slight", but they add up and increase the effective line loss: with a single receiver reflecting/losing 0.5%, after 40 receivers about 20% of the signal is gone. This 20% effectively adds to the normal line loss.
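The per-tap estimate is easy to reproduce (a sketch with the values above: 100 Ohm line, a purely resistive 24 kOhm receiver; the reactive part discussed above only makes it worse):

# The line beyond the tap appears in parallel with the receiver input.
z0, zp = 100.0, 24000.0
z_tap = (zp * z0) / (zp + z0)           # ~99.6 Ohm seen at the tap
gamma = (z_tap - z0) / (z_tap + z0)     # ~ -0.2%, i.e. roughly -Z0/(2*Zp)
print(f"reflected per tap: {abs(gamma):.2%}")

# If each tap scatters/absorbs ~0.5% (resistive plus reactive), 40 taps
# multiply the through signal by (1 - 0.005)**40:
print(f"left after 40 taps: {(1 - 0.005)**40:.0%}")    # ~82%, i.e. ~20% gone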
> I expect to be running at least 1 Mbps, possibly as high as 3 Mbps.
You probably should check whether you can get such a rate with short messages. I did a little experiment using a CH340 and a CP2104. That was a bi-directional TTL-level serial connection using 15 cm wires. The slave echoed each received character after mangling it a little (so I knew it really came from the slave and not from some echo in the software stack).

I had trouble running the CH340 above 460800 (that could be a limit of the program that I used). But using 1-character messages, 10000 round trips took about 7 s, with little influence from serial speed (almost the same result at 115200 and 230400). Also, increasing the message to 5 bytes gave essentially the same number of _messages_.

The CP2104 was better; here I could go up to 2000000. Using 5-byte messages, 10000 round trips needed 2.5 s at anything up to 1500000; at 2000000 the time dropped to about 1.9 s. When I increased the message to 10 bytes it was back to about 2.5 s.

I must admit that ATM I am not sure what this means. But this 2.5 s looks significant: it means 4000 round trips per second, which is 8000 messages, which in turn is the number of USB cycles. So it seems that normally smallish messages need a USB cycle (125 us) to get through the USB bus. It seems that sometimes more than one message may go through in a cycle (giving the smaller times that I observed), but it is not clear whether one can do significantly better. And the CH340 shows that it may be much worse. FTDI is claimed to be very good, so maybe it is better, but I would not count on that without checking. Actually, I remember folks complaining that they needed more than a millisecond to get a message through USB-serial.

OTOH, your description suggests that you should be able to do what you want with much less message traffic, so maybe USB-serial speed is enough for you.
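For anyone who wants to repeat the experiment, it amounts to something like this (a sketch using pyserial; the port name, baud rate, and message content are assumptions, and the far end must echo everything back):

import time
import serial                     # pip install pyserial

PORT, BAUD = "/dev/ttyUSB0", 2000000
MSG, TRIPS = b"\x55" * 5, 10000

with serial.Serial(PORT, BAUD, timeout=1) as port:
    t0 = time.perf_counter()
    for _ in range(TRIPS):
        port.write(MSG)
        if len(port.read(len(MSG))) != len(MSG):
            raise RuntimeError("echo timed out")
    elapsed = time.perf_counter() - t0

print(f"{TRIPS} round trips in {elapsed:.2f} s = {TRIPS / elapsed:.0f}/s")
# A plateau near 4000 round trips/s is what suggests one message per
# 125 us USB cycle, as described above.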
> One thing I'm a bit confused about, is the wiring of the EIA/TIA 568B or 568A cables. Both standards are used, but as far as I can tell, the only difference is the colors! The green and orange twisted pairs are reversed on both ends, making the cables electrically identical, other than the colors used for a given pair. The only difference is, the different pairs have different twist pitch, to help reduce crosstalk. But the numbers are not specified in the spec, so I don't see how this could matter. > > Why would the color be an issue, to the point of creating two different specs??? > > Obviously I'm missing something. I will need to check a cable before I design the boards, lol.
You may be missing the fact that most folks installing network cabling do not know about transmission lines or the reasons for matching pairs. And even for folks who understand the theory, it is easier to check that the colors are in the positions prescribed by the norm than to check the pairs. So colors matter because, using colors, folks can get a correct connection without too much thinking.

Why two specs? I think that is an artifact of history and of the way standards bodies work. When half the industry is using one way, and the other half is using a different but equally good way, the standards body cannot say that one half is wrong; it must allow both ways. -- Waldek Hebisch
On Friday, November 4, 2022 at 11:46:16 PM UTC-4, anti...@math.uni.wroc.pl wrote:
> Rick C <gnuarm.del...@gmail.com> wrote: > > On Friday, November 4, 2022 at 6:53:34 PM UTC-4, anti...@math.uni.wroc.pl wrote: > > > Rick C <gnuarm.del...@gmail.com> wrote: > > > > > > > > How long is a piece of string? By keeping the interconnecting cables short, 4" or so, and a 5 foot cable from the PC, I don't expect problems with reflections. But it is prudent to allow for them anyway. The FTDI RS-422 cable seems to have a terminator on the receiver, but not the driver and no provision to add a terminator to the driver. > > > It is pointless to add terminator to driver, there will be mismatch > > > anyway and resistor would just waste transmit power. Mismatch > > > at driver does not case trouble as long as ends are properly > > > terminated. And when driver is at the near end and there are no > > > other drivers, then it is enough to put termination only at the > > > far end. So FTDI cable seem to be doing exactly what is needed. > > > > Yes, that's true for a single driver and multiple receivers. The point is that with multiple drivers, a terminator is needed at both ends of the cable. You have two ends to terminate, because drivers can be in the middle. > With 100 Ohm line driver in the middle sees two parts in parallel, so > effectively 50 Ohm. Typical driver impedance is about 40 Ohm, so > while mismatched, mismath is not too bad. Also, with multiple > devices on the line there will be undesirable signals even if you > have termination at both ends.
I don't want to get into a big discussion on termination, but any time a driver is in the middle of the line, it will see two loads, one for each direction of the cable. The termination only affects the behavior of the reflection. So every driver that is not at the end of the line will see the characteristic impedance divided by two. However, since the driver is not impedance matched to the line anyway, that should not matter. But each end needs to be terminated, to prevent reflections from that end. The disruptions from the driver/receiver connections of intermediate chips will be small, since they are high impedance and minimal capacitance compared to the transmission line. These signals have rise and fall times of multiple ns, so even with no terminations it is unlikely we would see effects from reflections off the ends of the line, much less from the individual connections.
> In unterminated line there will be some loss, so after each reflection > reflected signal will be weaker, in rough approximation multiplied > by some number a < 1 (say 0.8). After n reflections signal will > be multiplied by a^n and for large enough n will become negligible. > Termination at given end with 1% resistor means that about 2% will > be reflected (due to imperfection). This 2% is likely to be negligible. > If transmitter is in the middle, there is still reflection at the > end opposite to termination and at the transmitter. But mismatch > at transmitter is not bad and the corresponding parameter a is > much smaller than in unterminated case. So termination at one > end reduces number of problematic reflections probably about 2-4 > times. Which means that you can increase transfer rate by > similar factor. Of course, termintion at both ends is better, > but in multidrop case speed will be lower than in point-to-point > link.
Multidrop is a single driver and multiple receivers. Multipoint is multiple drivers and receivers. One line will be multidrop (from the PC) and the other multipoint (to the PC). The multidrop line will be singly terminated, since the driver needs no termination; its impedance is well below the line impedance. The multipoint line has a termination in the FTDI device on the receiver. Another termination will be added to the far end of the run. This is mostly insurance. I would not expect trouble if I used no terminators. I could probably use a TTL-level serial cable and no RS-422 interface chips, but that's going a bit far, I think. Using RS-422 is enough insurance to make the system work reliably.
> > You could not use FTDI RS-422 cables in the arrangement I am implementing. Every receiver would add a 120 ohm load to the line. Good thing I only need one! > Well, multiple receivers on RS-422 have limited usefulness (AFAIK your > use case is called 4-wire RS-485), so no wonder that FTDI does not > support it. Maybe they have something more expensive that is > doing what you want.
??? Who said FTDI does not support multiple receivers? Oh, you mean their cables only. I'm not sure why you say this has limited usefulness, but whatever; it's not really worth dwelling on. I'm not using FTDI anyplace other than the PC, so their device does exactly what I want. The only other differential cable they offer is RS-485, which I don't want to use, as you have to pay more attention to the timing of the driver enables.
> > > > Oddly enough, the RS-485 cable has a terminator that can be connected by the user, but that would be running through the cable separately from the transceiver signals, so essentially stubbed! I guess at 1 Mbps, 5 feet is less than the rise time, so not an issue. Since the interconnections between cards will be about five feet as well, it's unlikely to be an issue. The entire network will look like a lumped load, with the propagation time on the order of the rise/fall time. Even adding in a second chassis, makes the round trip twice the typical rise/fall time and unlikely to create any issues. > > > > > > > > They sell cables that have 5 m of cable, with a round trip of 30 ns or so. > > > Closer to 50 ns due to lower speed in cable. > > > > I think that would still not be significant in this application. The driver rise/fall times are 15 ns typ, 25 ns max. > > > Termination is also to kill _multiple_ reflections. In low loss line > > > you can have bunch of reflection creating jitter. When jitter is > > > more than 10% of bit time serial communication tends to have significant > > > number of errors. At 9600 or at 100000 bits/s with short line bit > > > time is long enough that jitter due to reflections in untermined > > > line does not matter. Also multidrop RS-485 is far from low loss, > > > each extra drop weakens signal, so reflections die faster than > > > in quality point-to-point line. > > > > How do RS-485 drops "weaken" the signal? The load of an RS-485 device is very slight. The same result will happen with multiple receivers on RS-422. > That is general thing, not specific to RS-485. If RS-485 receiver > puts 24 kOhm load on line, that is about 0.4% of line impedance. > When signal passes past receiver there is corresponding power loss.
If you are talking about the load resistance, that is trivial enough to be ignored for signal loss. The basic RS-422 devices are rated for 32 loads, and the numbers in the FTDI data sheet (54 ohms load) are with a pair of 120 ohm resistors and 32 loads.
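If it helps, the 54 ohm figure looks like exactly that arithmetic (a sketch; the 12 kohm per RS-485 unit load is my assumption, the rest is from the data sheet description):

term  = 120.0 / 2                  # two 120 Ohm terminators in parallel
loads = 12000.0 / 32               # 32 unit loads in parallel
print(1 / (1/term + 1/loads))      # ~51.7 Ohm, close to the 54 Ohm test load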
> There is also second effect: receiver created discontinuity, so > there is reflection. And beside resitive part receiver impedance > has also reactive part which means that discontinuity and reflection > is bigger than implied by receiver resistance. With lower load > recevier effect is smaller, but still there is fraction of percent > lost or reflected. Single loss is "very slight", but they add up > and increase effective line loss: with single receiver reflecting/losing > 0.5 after 40 receivers 20% of signal is gone. This 20% effectively > adds to normal line loss.
The "reactive" part of the receiver/driver load is capacitive. That does not change with the load value. It's mostly from the packaging is my understanding, but they don't give a value in the part data sheet. I expect there's more capacitance in the 6 foot cable than the device. I don't know how you come up with the loss number.
> > I expect to be running at least 1 Mbps, possibly as high as 3 Mbps. > You probably should check if you can get such rate with short messages. > If did little experiment using CH340 and CP2104. That was bi-drectional > TTL level serial connection using 15 cm wires. Slave echoed each > received character after mangling it a little (so I knew that it > really came from the slave and not from some echo in software stack). > I had trouble running CH340 above 460800 (that could be limit of program > that I used). But using 1 character messages 10000 round trips took > about 7s, with small influence from serial speed (almost the same > result at 115200 and 230400). Also increasing message to 5 bytes > gave essentially the same number of _messages_.
I ran the numbers in one of my posts (here or in another group). My messages are around 10 characters, echoed back in full, or with three more characters for a read reply. Assuming 8 kHz for the polling rate, an exchange would happen at 4 kHz. A total of 25 characters per exchange gives 100 kchar/s, which is 800 kbps on USB or 1,000 kbps on the RS-422/RS-485 interface. So I would probably want to use something a bit faster than 1 Mbps.

I think 4k messages per second will be plenty fast enough. With 128 UUTs in the system, that's 32 commands per second per UUT. I may want to streamline the protocol a bit to incorporate the slave selection in every command. That means more characters per message, but it is more efficient overall, with fewer messages. The process can be to send the same command to every UUT at the same time.

Mostly this is just not an issue, until the audio tests. Those take some noticeable time to execute, as they collect some amount of audio data. I might add a test for spurs, since some UUT failures clip the sine waves due to DC bias faults, and harmonic distortion would be a way to check for this. I want the testing to diagnose as much as possible. This would add another slow test. So these should be done on all UUTs in parallel.
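Spelling out the arithmetic from the first paragraph (all figures as above):

poll_hz   = 8000                # USB polling rate
exchanges = poll_hz // 2        # command + reply each take a poll: 4000/s
chars     = 25                  # command plus reply, per the estimate above
char_rate = exchanges * chars   # 100,000 char/s
print(char_rate * 8)            # 800,000 bit/s over USB (8 bits per char)
print(char_rate * 10)           # 1,000,000 bit/s on the wire (start + 8 + stop)
print(exchanges / 128)          # ~31 commands per second per UUT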
> CP2104 was better, here I could go up to 2000000. Using 5 byte > messages 10000 round trips needed 2.5s up to 1500000, at > 2000000 time dropped to about 1.9. When I increased message > to 10 bytes it was back about 2.5s. > > I must admit that ATM I am not sure what this means. But this 2.5s > looks significant: this means 4000 round trips per second, which > is 8000 messages, which in turn is number of USB cycles. So, > it seems that normally smallish messages need USB cycle (125 uS) > to get trough USB bus. It seems that sometimes more than one > message may go trough in a cycle (giving smaller times that I > observed), but it is not clear if one can do significantly better. > And CH340 shows that it may be much worse.
I used to use CH340 cables with my test fixture, but they would stop working after some time - hours, I think. The cable had to be unplugged to get it working again. Once I realized it was the CH340 cable/drivers, I got FTDI devices and never looked back. They are triple the price, but much, much cheaper in the long run.
> FTDI is claimed to be very good, so maybe it is better, but I would > not count on this without checking. Actually, I remember folks > complaining that they needed more than millisecond to get message > trough USB-serial.
It's too early to be testing, but I will get to that. I suppose I could do loopback testing with the RS-232 cable I have now.
> OTOH, your description suggest that you should be able to do what > you want with much smaller message traffic, so maybe USB-serial > speed is enough for you.
If it doesn't run at the speed I'm thinking of, it's not a big loss. There's no testing at all done with the current burn-in chassis; the UUTs are tested one at a time. You can't get much slower than that. Even if it takes a minute to run a full test, that's on all 128 UUTs in parallel, and it will be around 1000 times faster than what we have now! The slow part will be getting all the UUTs loaded on the test fixtures and getting the process started. Any bad UUTs will need to be pulled out and tested/debugged separately. Once they are pulled out, the testing runs until the next day, when the units are labeled with a serial number and ready to ship!
> > One thing I'm a bit confused about, is the wiring of the EIA/TIA 568B or 568A cables. Both standards are used, but as far as I can tell, the only difference is the colors! The green and orange twisted pairs are reversed on both ends, making the cables electrically identical, other than the colors used for a given pair. The only difference is, the different pairs have different twist pitch, to help reduce crosstalk. But the numbers are not specified in the spec, so I don't see how this could matter. > > > > Why would the color be an issue, to the point of creating two different specs??? > > > > Obviously I'm missing something. I will need to check a cable before I design the boards, lol. > You may be missing fact that most folks installing network cabling > do not know about transmission lines and reasons for matching pairs. > And even for folks that understand theory, it is easier to check > that colors are in position prescribed in the norm, than to check > pairs. So, colors matter because using colors folks can get correct > connetion without too much thinking.
The people using the cables don't see the colors. They just plug them in.
> Why two specs? I think > that this is artifact of history and way that standard bodies work. > When half of industry is using one way and other half is using > different but equally good way standard body can not say that > one half is wrong, they must allow both ways.
But it's not really different. It's just colors, which mean nothing to anyone actually using the cables. They just want to plug them in and make things work. The color of the insulation won't change that at all. If there were something different about the wiring, then I'd say, I get it. But electrically they are identical. It's also odd that the spec doesn't say how many turns per foot/meter are in each twisted pair, even though the twist rate differs from pair to pair to give less crosstalk. -- Rick C. ---+ Get 1,000 miles of free Supercharging ---+ Tesla referral code - https://ts.la/richard11209
On 04/11/2022 18:11, Rick C wrote:
> On Friday, November 4, 2022 at 12:36:51 PM UTC-4, David Brown wrote: >> On 04/11/2022 15:37, pozz wrote: >>> Il 04/11/2022 10:49, David Brown ha scritto: >>>> On 04/11/2022 08:45, pozz wrote: >>>>> Il 03/11/2022 16:26, David Brown ha scritto: >>>>>> On 03/11/2022 14:00, pozz wrote: >>>>>>> Il 03/11/2022 12:42, David Brown ha scritto: >>>>>>>> On 03/11/2022 00:27, Rick C wrote: >>>>>>>>> On Wednesday, November 2, 2022 at 4:49:16 PM UTC-4, David Brown >>>>>>>>> wrote: >>>>>>>>>> On 02/11/2022 20:20, Rick C wrote: >>>>>>>>>>> On Wednesday, November 2, 2022 at 5:28:21 AM UTC-4, David Brown >>>>>>>>>>> wrote: >>>>>>>>>>>> On 02/11/2022 06:28, Rick C wrote: >>>>>>> >>>>>>> >>>>>>>> You are correct that reception is in the middle of the stop bit >>>>>>>> (typically sub-slot 9 of 16). The first transmitter will be >>>>>>>> disabled at the end of the stop bit, and the next transmitter must >>>>>>>> not enable its driver until after that point - it must wait at >>>>>>>> least half a bit time after reception before starting >>>>>>>> transmission. (It can wait longer without trouble, which is why >>>>>>>> faster baud rates are less likely to involve any complications here.) >>>>>>> >>>>>>> Do you mean that RX interrupt triggers in the middle of the stop >>>>>>> bit and not at the end? Interesting, but are you sure this is the >>>>>>> case for every UART implemented in MCUs? >>>>>> >>>>>> Of course I'm not sure - there are a /lot/ of MCU manufacturers! >>>>>> >>>>>> UART receivers usually work in the same way, however. They have a >>>>>> sample clock running at 16 times the baud clock. The start bit is >>>>>> edge triggered to give the start of the character frame. Then each >>>>>> bit is sampled in the middle of its time slot - usually at subbit >>>>>> slots 7, 8, and 9 with majority voting. So the stop bit is >>>>>> recognized by subbit slot 9 of the tenth bit (assuming 8-bit, no >>>>>> parity) - the voltage on the line after that is irrelevant. (Even >>>>>> when you have two stop bits, receivers never check the second stop >>>>>> bit - it affects transmit timing only.) What purpose would there be >>>>>> in waiting another 7 subbits before triggering the interrupt, DMA, >>>>>> or whatever? >>>>> >>>>> There's no real purpose, but it's important to know exactly when the >>>>> RX interrupt is fired from the UART. >>>>> >>>> >>>> I think it is extremely rare that this is important. I can't think of >>>> a single occasion when I have thought it remotely relevant where in >>>> the stop bit the interrupt comes. >>>> >>>>> Usually the next transmitter starts transmitting after receiving the >>>>> last byte of the previous transmitter (for example, the slave starts >>>>> replying to the master after receiving the complete message from it). >>>>> >>>> >>>> No. Usually the next transmitter starts after receiving the last >>>> byte, and /then a pause/. There will always be some handling time in >>>> software, and may also include an explicit pause. Almost always you >>>> will want to do at least a minimum of checking of the incoming data >>>> before deciding on the next telegram to be sent out. But if you have >>>> very fast handling in relation to the baud rate, you will want an >>>> explicit pause too - protocols regularly specify a minimum pause (such >>>> as 3.5 character times for Modbus RTU), and you definitely want it to >>>> be at least one full character time to ensure no listener gets >>>> hopelessly out of sync. 
>>> >>> In theory, if all the nodes on the bus were able to change direction in >>> hardware (exactly at the end of the stop bit), you will not be forced to >>> introduce any delay in the transmission. >> Communication is about /reliably/ transferring data between devices. >> Asynchronous serial communication is about doing that despite slight >> differences in clock rates, differences in synchronisation, differences >> in startup times, etc. If you don't have idle pauses, you have almost >> zero chance of staying in sync across the nodes - and no chance at all >> of recovery when that happens. /Every/ successful serial protocol has >> pauses between frames - long enough pauses that the idle time could not >> possibly be part of a normal full speed frame. That does not just apply >> to UART protocols, or even just to asynchronous protocols. The pause >> does not have to be as long as 3.5 characters, but you need a pause - >> just as you need other error recovery handling. > > The "idle" pauses you talk about are accommodated with the start and stop bits in the async protocol. Every character is sent with a start bit which starts the timing. The stop bit is the "fluff" time for the next character to align to the next start bit. There is no need for the bus to be idle in the sense of no data being sent. If an RS-485 or RS-422 bus is biased for undriven times, there is no need for the driver to be on through the full stop bit. Once the stop bit has driven high, it can be disabled, such as in the middle of the bit. The there is a half bit time for timing skew, which amounts to 5%, between any two devices on the bus. >
There are two levels of framing here, and two types of pauses.

For UART communication, there is the "character frame", and the stop bit acts as a pause between characters. This is to give a minimum time to allow re-synchronisation of the clock timing at the receiver. It also forms, along with the start bit, a guaranteed edge for this re-synchronisation. More sophisticated serial protocols (CAN, Ethernet, etc.) do not need this because they have other methods of guaranteeing transitions and allowing the receiver to re-synchronise regularly - thus they do not need framing or idling at the character or byte level.

But you always want framing and idling between message frames at a higher level. You always have an idle period that is longer than any valid character or part of a message. For example, in CAN communication you have "bit stuffing" any time you have 5 equal-value bits in a row. This ensures that in the message you never have more than 5 bits without a transition, and you don't need a fixed start or stop bit per byte in order to keep the receiver synchronised. But at the end of the CAN frame there are at least 10 bits of recessive (1) value. Any receiver that has got out of synchronisation, due to noise, startup timing, etc., will know it cannot possibly be in the middle of a frame, and will restart its receiver.

In UART communication, this is handled at the protocol level rather than the hardware (though some UART hardware may have "idle detect" signals when more than 11 bits of high level are seen in a row). Some UART-based protocols also use a "break" signal between frames - that is, a string of at least 11 bits of low level.

If you do not have such pauses, and a receiver is out of step, it has no way to get into synchronisation again. Maybe you get lucky, but basically all it is seeing is a stream of high and low bits with no absolute indicator of position - and no way to tell what might be the start bit of a new character (rather than a 1 bit then a 0 bit within a character), never mind the start of a message.

Usually you get enough pauses naturally in the communication, with delays between reception and reply. But if you don't have them, you must add them. Otherwise your communication will be too fragile to use in practice. You /need/ idle gaps to be able to resynchronise reliably in the face of errors (and there is /always/ a risk of errors).
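As a toy illustration of that resynchronisation rule (a bit-level sketch, not real UART code): a receiver that has lost framing ignores everything until it sees an idle run longer than any legal character, and only then trusts the next falling edge as a start bit.

def resync(levels, idle_bits=11):
    # Return the index of the first start-bit edge after a full idle gap.
    run = 0
    for i, level in enumerate(levels):
        if level == 1:
            run += 1
        else:
            if run >= idle_bits:
                return i           # falling edge right after >= 11 high bits
            run = 0
    return None                    # no trustworthy start bit seen yet

line = [1,0,1,1,0,1,0,0,1] + [1]*11 + [0,1,1,0,1]   # garbage, idle, new frame
print(resync(line))                # 20: resyncs only after the idle gap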
> >>> Many times I'm the author of a custom protocol because some nodes on a >>> shared bus, so I'm not forced to follow any specifications. When I >>> didn't introduce any delay in the transmission, I sometimes faced this >>> issue. In my experience, the bus is heterogeneous enough to have a fast >>> replying slave to a slow master. >>> >>> >>>>> Now I think of the issue related to a transmitter that delays a >>>>> little to turn around the direction of its transceiver, from TX to >>>>> RX. Every transmitter on the bus should take into account this delay >>>>> and avoid starting transmission too soon. >>>> >>>> They should, yes. The turnaround delay should be negligible in this >>>> day and age - if not, your software design is screwed or you have >>>> picked the wrong hardware. (Of course, you don't always get the >>>> choice of hardware you want, and programmers are often left to find >>>> ways around hardware design flaws.) >>> >>> Negligible doesn't mean anything. >> Negligible means of no significance in comparison to the delays you have >> anyway - either intentional delays in order to separate telegrams and >> have a reliable communication, or unavoidable delays due to software >> processing. > > The software on the PC is not managing the bus drivers. So software delays are not relevant to bus control timing. > > >>> If thre's a poor 8 bit PIC (previous >>> transmitter) clocked at 8MHz that changes direction in TXC interrupt >>> while other interrupts are active, and there's a Cortex-M4 clocked at >>> 200MHz (next transmitter), you will encounter this issue. >>> >> No, you won't - not unless you are doing something silly in your timing >> such as failing to use appropriate pauses or thinking that 10 &micro;s >> turnarounds are a good idea at 9600 baud. And I did specify picking >> sensible hardware - 8-bit PICs were are terrible choice 20 years ago for >> anything involving high speed, and they have not improved. (Again - >> sometimes you don't have control of the hardware, and sometimes there >> can be other overriding reasons for picking something. But if your >> hardware is limited, you have to take that into account.) >>> This is more evident if, as you are saying, the Cortex-M4 is able to >>> start processing the message from the PIC at the midpoint of last stop >>> bit, while the PIC disables its driver at the *end* of the stop bit plus >>> an additional delay caused by interrupts handling. >>> >>> In this cases the half bit time is not negligible and must be added to >>> the transmission delay. >>> >> Sorry, but I cannot see any situation where that would happen in a >> well-designed communication system. >> >> Oh, and it is actually essential that the receiver considers the >> character finished half-way through the stop bit, and not at the end. >> UART communication is intended to work despite small differences in the >> baud rate - up to nearly 5% total error. By the time the receiver is >> half way through the received stop bit, and has identified it is valid, >> the sender could be finished the stop bit as its clock is almost 5% >> faster (50% bit time over the full 10 bits). The receiver has to be in >> the "watch for falling edge of start bit" state at this point, ready for >> the transmitter to start its next frame. > > Yes, why would it not be? This is why there's no need for additional delays or "gaps" in the protocol for an async interface. >
It will be in the right state at the right time, as long as it enters it when the stop bit is identified (half-way through the stop bit) rather than artificially waiting for the end of the bit time. You need gaps in the character stream at a higher level, for error recovery.
On 04/11/2022 16:40, Rick C wrote:
> On Friday, November 4, 2022 at 5:49:42 AM UTC-4, David Brown wrote: >> On 04/11/2022 08:45, pozz wrote: >>> Il 03/11/2022 16:26, David Brown ha scritto: >>>> On 03/11/2022 14:00, pozz wrote: >>>>> Il 03/11/2022 12:42, David Brown ha scritto: >>>>>> On 03/11/2022 00:27, Rick C wrote: >>>>>>> On Wednesday, November 2, 2022 at 4:49:16 PM UTC-4, David Brown wrote: >>>>>>>> On 02/11/2022 20:20, Rick C wrote: >>>>>>>>> On Wednesday, November 2, 2022 at 5:28:21 AM UTC-4, David Brown >>>>>>>>> wrote: >>>>>>>>>> On 02/11/2022 06:28, Rick C wrote: >>>>> >>>>> >>>>>> You are correct that reception is in the middle of the stop bit >>>>>> (typically sub-slot 9 of 16). The first transmitter will be >>>>>> disabled at the end of the stop bit, and the next transmitter must >>>>>> not enable its driver until after that point - it must wait at least >>>>>> half a bit time after reception before starting transmission. (It >>>>>> can wait longer without trouble, which is why faster baud rates are >>>>>> less likely to involve any complications here.) >>>>> >>>>> Do you mean that RX interrupt triggers in the middle of the stop bit >>>>> and not at the end? Interesting, but are you sure this is the case >>>>> for every UART implemented in MCUs? >>>> >>>> Of course I'm not sure - there are a /lot/ of MCU manufacturers! >>>> >>>> UART receivers usually work in the same way, however. They have a >>>> sample clock running at 16 times the baud clock. The start bit is >>>> edge triggered to give the start of the character frame. Then each >>>> bit is sampled in the middle of its time slot - usually at subbit >>>> slots 7, 8, and 9 with majority voting. So the stop bit is recognized >>>> by subbit slot 9 of the tenth bit (assuming 8-bit, no parity) - the >>>> voltage on the line after that is irrelevant. (Even when you have two >>>> stop bits, receivers never check the second stop bit - it affects >>>> transmit timing only.) What purpose would there be in waiting another >>>> 7 subbits before triggering the interrupt, DMA, or whatever? >>> >>> There's no real purpose, but it's important to know exactly when the RX >>> interrupt is fired from the UART. >>> >> I think it is extremely rare that this is important. I can't think of a >> single occasion when I have thought it remotely relevant where in the >> stop bit the interrupt comes. >>> Usually the next transmitter starts transmitting after receiving the >>> last byte of the previous transmitter (for example, the slave starts >>> replying to the master after receiving the complete message from it). >>> >> No. Usually the next transmitter starts after receiving the last byte, >> and /then a pause/. There will always be some handling time in >> software, and may also include an explicit pause. Almost always you >> will want to do at least a minimum of checking of the incoming data >> before deciding on the next telegram to be sent out. But if you have >> very fast handling in relation to the baud rate, you will want an >> explicit pause too - protocols regularly specify a minimum pause (such >> as 3.5 character times for Modbus RTU), and you definitely want it to be >> at least one full character time to ensure no listener gets hopelessly >> out of sync. >>> Now I think of the issue related to a transmitter that delays a little >>> to turn around the direction of its transceiver, from TX to RX. Every >>> transmitter on the bus should take into account this delay and avoid >>> starting transmission too soon. >> They should, yes. 
The turnaround delay should be negligible in this day >> and age - if not, your software design is screwed or you have picked the >> wrong hardware. (Of course, you don't always get the choice of hardware >> you want, and programmers are often left to find ways around hardware >> design flaws.) >>> >>> So I usually implement a short delay before starting a new message >>> transmission. If the maximum expected delay of moving the direction from >>> TX to RX is 10us, I could think to use a 10us delay, but this is wrong >>> in your assumption. >>> >> Implementing an explicit delay (or being confident that your telegram >> handling code takes long enough) is a good idea. >>> If the RX interrupt is at the middle of the stop bit, I should delay the >>> new transmission of 10us + half of bit time. With 9600 this is 52us that >>> is much higher than 10us. >>> >> I made no such assumptions about timings. The figures I gave were for >> using a USB 2 based interface on a PC, where the USB polling timer is at >> 8 kHz, or 125 &micro;s. That is half a bit time for 4 Kbaud. (I had doubled >> the frequency instead of halving it and said the baud had to be above 16 >> kBaud - that shows it's good to do your own calculations and not trust >> others blindly!). At 1 MBaud (the suggested rate), the absolute fastest >> the PC could turn around the bus would be 12 character times - half a >> stop bit is irrelevant. > > You are making an assumption of implementation. There is a processor in the USB cable that is implementing the UART. The driver enable control is most likely is implemented there. It would be pointless and very subject to failure, to require the main CPU to handle this timing. There's no reason to expect the driver disable to take more than a fraction of a bit time, so the "UART" needs a timing signal to indicate when the stop bit has been completed. >
I'm making the assumption that you are using appropriate hardware. No processor, just a USB device that has a "transmitter enable" signal on its UART. I'm getting the impression that you have never heard of such a UART (either in a USB-to-UART device, or as a UART peripheral elsewhere), and assume software has to be involved in enabling and disabling the transmitter. Please believe me when I say such UARTs /do/ exist - and the FTDI examples I keep giving are a case in point.
> The timing issue is not about loading another character into the transmit FIFO. It's about controlling the driver enable. >
Yes, and it is a /solved/ issue if you pick the right hardware.
> >> If you have a 9600 baud RS-485 receiver and you have a delay of 10 &micro;s >> between reception of the last bit and the start of transmission of the >> next message, your code is wrong - by nearly two orders of magnitude. >> It is that simple. >> >> If we take Modbus RTU as an example, you should be waiting 3.5 * 10 / >> 9600 seconds at a minimum - 3.65 /milli/seconds. If you are concerned >> about exactly where the receive interrupt comes in the last stop bit, >> add another half bit time and you get 3.7 ms. The half bit time is >> negligible. > > Your numbers are only relevant to Modbus. The only requirement is that no two drivers are on the bus at the same time, which requires zero delay from the end of the previous stop bit and the beginning of the next start bit. This is why the timing indication from the UART needs to be the end of the stop bit, not the middle. >
A single transmitter, while sending a multi-character message, does not need any delay between sending the full stop bit and starting the next start bit. That is obvious. And that is why a "transmission complete" signal comes at the end of the stop bit on the transmitter side.

On the receiver side, the "byte received" signal comes in the /middle/ of the stop bit, as seen by the receiver, because that could be at the /end/ of the stop bit as seen by the transmitter due to clock differences. (It could also be at the /start/ of the stop bit as seen by the transmitter.) The receiver has to prepare for the next incoming start bit as soon as it identifies the stop bit.

But you want an extra delay of at least 11 bits (a character frame plus a buffer for clock speed differences) between messages - whether they are from the same transmitter or a different transmitter - to allow resynchronisation if something has gone wrong.

I've explained in other posts why inter-message pauses are needed for reliable UART communication protocols. They don't /need/ to be as long as the 35 bit times Modbus specifies - 11 bit times is the minimum. If you don't understand this by now, then we should drop this point.
> >>> I know the next transmitter should make some processing of the previous >>> received message, prepare and buffer the new message to transmit, so the >>> delay is somewhat automatic, but in many cases I have small 8-bits PICs >>> and full-futured Linux box on the same bus and the Linux could be very >>> fast to start the new transmission. >>> >> So put in a delay. An /appropriate/ delay. > > You are thinking software, like most people do.
It doesn't matter whether things are software, hardware, or something in between.
> The slaves will be in logic, so the UART will have timing information relevant to the end of bits. I don't care how the master does it. The FTDI cable is alleged to "just work". Nonetheless, I will be providing for separate send and receive buses (or call it master/slave buses). Only one slave will be addressed at a time, so no collisions there, and the master can't collide with itself. >
Yes, with the bus you have described, and the command/response protocol you have described, there should be no problems with multiple transmitters on the bus, and you have plenty of inter-message idle periods. However, this Usenet thread has been mixing posts from different people, and discussions of different kinds of buses and protocols - not just the solution you picked (which, as I have said before, should work fine). I think this mixing means that people are sometimes talking at cross-purposes.
>> If you are pushing the limits of a bus, in terms of load, distance, >> speed, cable characteristics, etc., then you need to do such >> calculations carefully and be precise in your specification of >> components, cables, topology, connectors, etc. For many buses in >> practice, they will work fine using whatever resistor you pull out your >> box of random parts. For a testbench, you are going to go for something >> between these extremes. > > How long is a piece of string? By keeping the interconnecting cables short, 4" or so, and a 5 foot cable from the PC, I don't expect problems with reflections. But it is prudent to allow for them anyway. The FTDI RS-422 cable seems to have a terminator on the receiver, but not the driver and no provision to add a terminator to the driver. >
There is no point in having a terminator at a driver (unless you are talking about very high speed signals with serial resistors for slope control). You will want to add a terminator at the far end of both buses. This will give you a single terminator on the PC-to-slave bus, which is fine as it is fixed direction, and two terminators on the slave-to-PC bus, which is appropriate as it has no fixed direction. (I agree that your piece of string is of a size that should work fine without reflections being a concern.)
> Oddly enough, the RS-485 cable has a terminator that can be connected by the user, but that would be running through the cable separately from the transceiver signals, so essentially stubbed! I guess at 1 Mbps, 5 feet is less than the rise time, so not an issue. Since the interconnections between cards will be about five feet as well, it's unlikely to be an issue. The entire network will look like a lumped load, with the propagation time on the order of the rise/fall time. Even adding in a second chassis, makes the round trip twice the typical rise/fall time and unlikely to create any issues. > > They sell cables that have 5 m of cable, with a round trip of 30 ns or so. I think that would still not be significant in this application. The driver rise/fall times are 15 ns typ, 25 ns max. >
The speed of a signal in a copper cable is typically about 70% of the speed of light, giving a minimum round-trip time closer to 45 ns than 30 ns. Not that it makes any difference here.
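The arithmetic, for reference (0.7c is the assumed propagation factor):

c = 3.0e8                          # speed of light, m/s
t_round = 2 * 5.0 / (0.7 * c)      # 5 m of cable, there and back
print(t_round * 1e9)               # ~48 ns - closer to 45 ns than 30 ns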
On 05/11/2022 04:03, Paul Rubin wrote:
> Rick C <gnuarm.deletethisbit@gmail.com> writes: >> Cablestogo has 6 inch cables for $2.99 each. I'd like to keep them >> a bit shorter, but that's probably not an issue. > > I thought there was a minimum length for ethernet cables because they > have to have certain RF characteristics at 100mhz or 1ghz frequencies. > I didn't realize they even came as short as 6 inches. Either way > though, it shouldn't be an issue for your purposes.
There may be issues with minimum total length for Ethernet, but I have not heard of figures myself - usually maximum lengths are the issue. It's common to have racks with the wiring coming into patch panels, and then you need a short Ethernet cable to the switch. These cables should ideally be short - both from a cable management viewpoint, and because you always want to have as few impedance jumps as possible in the total connection between switch and end device and you want the bumps to be as close to the ends as possible. 30 cm patch cables are common, but I've also seen 10 cm cables. For the very short ones, they need to be made of very flexible material - standard cheap Ethernet cables aren't really flexible enough to be convenient to plug in and out unless you have a little more length.
On Saturday, November 5, 2022 at 6:58:24 AM UTC-4, David Brown wrote:
> On 04/11/2022 18:11, Rick C wrote: > > On Friday, November 4, 2022 at 12:36:51 PM UTC-4, David Brown wrote: > >> Communication is about /reliably/ transferring data between devices. > >> Asynchronous serial communication is about doing that despite slight > >> differences in clock rates, differences in synchronisation, differences > >> in startup times, etc. If you don't have idle pauses, you have almost > >> zero chance of staying in sync across the nodes - and no chance at all > >> of recovery when that happens. /Every/ successful serial protocol has > >> pauses between frames - long enough pauses that the idle time could not > >> possibly be part of a normal full speed frame. That does not just apply > >> to UART protocols, or even just to asynchronous protocols. The pause > >> does not have to be as long as 3.5 characters, but you need a pause - > >> just as you need other error recovery handling. > > > > The "idle" pauses you talk about are accommodated with the start and stop bits in the async protocol. Every character is sent with a start bit which starts the timing. The stop bit is the "fluff" time for the next character to align to the next start bit. There is no need for the bus to be idle in the sense of no data being sent. If an RS-485 or RS-422 bus is biased for undriven times, there is no need for the driver to be on through the full stop bit. Once the stop bit has driven high, it can be disabled, such as in the middle of the bit. The there is a half bit time for timing skew, which amounts to 5%, between any two devices on the bus. > > > There are two levels of framing here, and two types of pauses. > > For UART communication, there is the "character frame" and the stop bit > acts as a pause between characters. This is to give a minimum time to > allow re-synchronisation of the clock timing at the receiver. It also > forms, along with the start bit, a guaranteed edge for this > re-synchronisation. More sophisticated serial protocols (CAN, Ethernet, > etc.) do not need this because they have other methods of guaranteeing > transitions and allowing the receiver to re-synchronise regularly - thus > they do not need framing or idling at the character or byte level. > > But you always want framing and idling between message frames at a > higher level. You always have an idle period that is longer than any > valid character or part of a message.
<<< snip >>>
> In UART communication, this is handled at the protocol level rather than > the hardware (though some UART hardware may have "idle detect" signals > when more than 11 bits of high level are seen in a row). Some > UART-based protocols also use a "break" signal between frames - that is > a string of at least 11 bits of low level. > > If you do not have such pauses, and a receiver is out of step,
You have failed to explain how a receiver would get "out of step". The receiver syncs to every character transmitted. If all characters are received, what else do you need? How does it get "out of step"?
> it has no > way to get into synchronisation again. Maybe you get lucky, but > basically all it is seeing is a stream of high and low bits with no > absolute indicator of position - and no way to tell what might be the > start bit of a new character (rather than a 1 bit then a 0 bit within a > character), never mind the start of a message.
I have no idea what you are talking about. You have already explained above how every character is framed with a start and a stop bit. That gives half a bit time of clock misalignment to maintain sync. What would cause getting out of step?

With the protocol involved, the characters for commands are unique. So if a device sees noise on the line and does get out of sync at framing characters, it would simply not respond when spoken to. That would inherently cause a delay, so all data after that would be received correctly.

The reason I'm using RS-422 instead of TTL is the huge improvement in noise tolerance. So if the noise rate is enough to cause any noticeable problems, there's a bad design in the cabling or some fundamental flaw in the design, and it needs to be corrected. Actually, that makes me realize I need to have a mode where the comms are exercised and bit errors counted.
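Something like this would do as a first cut for that mode (a sketch with pyserial; the port, baud rate, and a straight echo from the far end are assumptions):

import serial                      # pip install pyserial

def ber_test(port_name="/dev/ttyUSB0", baud=1000000, frames=10000):
    # Hammer the bus with a known pattern and count mismatched bits.
    pattern = bytes(range(256))
    errors = sent = 0
    with serial.Serial(port_name, baud, timeout=1) as port:
        for _ in range(frames):
            port.write(pattern)
            echo = port.read(len(pattern))
            sent += 8 * len(pattern)
            errors += sum(bin(a ^ b).count("1") for a, b in zip(pattern, echo))
            errors += 8 * (len(pattern) - len(echo))   # lost bytes count too
    return errors, sent

errors, sent = ber_test()
print(f"bit error rate ~ {errors / sent:.2e}")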
> Usually you get enough pauses naturally in the communication, with > delays between reception and reply. But if you don't have them, you > must add them. Otherwise your communication will be too fragile to use > in practice. You /need/ idle gaps to be able to resynchronise reliably > in the face of errors (and there is /always/ a risk of errors).
You haven't made your case. You've not explained how anything gets out of sync. What is your use case? But you finally mention "errors". Are you talking about bit errors in the comms? I've addressed that above. It is inherently handled in a command/response protocol, but since the problem of bit errors should be very, very infrequent, I'm not worried.
> >> Oh, and it is actually essential that the receiver considers the > >> character finished half-way through the stop bit, and not at the end.
That depends entirely on what is being done with the information. Start-bit detection should start as early as possible. Enabling the transmitter driver after the last received character should not happen until the entire character is received, to the end of the stop bit. If the bus has fail-safe provisions, it's actually OK for the previous transmitter to disable its driver at the middle of the stop bit. The line will already be in the idle state, and the passive fail-safe will maintain that. There is then less chance of bus contention if the next driver is enabled slightly before the end of the stop bit.
> >> UART communication is intended to work despite small differences in the > >> baud rate - up to nearly 5% total error. By the time the receiver is > >> half way through the received stop bit, and has identified it is valid, > >> the sender could be finished the stop bit as its clock is almost 5% > >> faster (50% bit time over the full 10 bits). The receiver has to be in > >> the "watch for falling edge of start bit" state at this point, ready for > >> the transmitter to start its next frame. > > > > Yes, why would it not be? This is why there's no need for additional delays or "gaps" in the protocol for an async interface. > > > It will be in the right state at the right time, as long as it enters it > when the stop bit is identified (half-way through the stop bit) rather > than artificially waiting for the end of the bit time. > > You need gaps in the character stream at a higher level, for error recovery.
If you have errors. I like systems without errors. Systems without errors are better in my opinion. I'm just sayin'. But it's handled anyway. -- Rick C. --+- Get 1,000 miles of free Supercharging --+- Tesla referral code - https://ts.la/richard11209
On Saturday, November 5, 2022 at 7:47:59 AM UTC-4, David Brown wrote:
> On 04/11/2022 16:40, Rick C wrote: > > On Friday, November 4, 2022 at 5:49:42 AM UTC-4, David Brown wrote: > >> I made no such assumptions about timings. The figures I gave were for > >> using a USB 2 based interface on a PC, where the USB polling timer is at > >> 8 kHz, or 125 &micro;s. That is half a bit time for 4 Kbaud. (I had doubled > >> the frequency instead of halving it and said the baud had to be above 16 > >> kBaud - that shows it's good to do your own calculations and not trust > >> others blindly!). At 1 MBaud (the suggested rate), the absolute fastest > >> the PC could turn around the bus would be 12 character times - half a > >> stop bit is irrelevant. > > > > You are making an assumption of implementation. There is a processor in the USB cable that is implementing the UART. The driver enable control is most likely is implemented there. It would be pointless and very subject to failure, to require the main CPU to handle this timing. There's no reason to expect the driver disable to take more than a fraction of a bit time, so the "UART" needs a timing signal to indicate when the stop bit has been completed. > > > I'm making the assumption that you are using appropriate hardware. No > processor, just a USB device that has a "transmitter enable" signal on > its UART.
How can there not be a processor? I'm using a split bus, with the PC master driving all the slave receivers and all the slave transmitters sharing the PC receive bus. Is the PC not a processor? The slaves have no USB.
> I'm getting the impression that you have never heard of such a UART > (either in a USB-to-UART device, or as a UART peripheral elsewhere), and > assume software has to be involved in enabling and disabling the > transmitter. Please believe me when I say such UARTs /do/ exist - and > the FTDI examples I keep giving are a case in point.
You are not being clear. I don't know and don't care what is inside the FTDI device. That's just magic to me; it's like something inside a black hole, unknowable. More importantly, there is no transmitter enable on the RS-422 driver in the FTDI device, because it's not tristateable.
> > The timing issue is not about loading another character into the transmit FIFO. It's about controlling the driver enable. > > > Yes, and it is a /solved/ issue if you pick the right hardware. > > > >> If you have a 9600 baud RS-485 receiver and you have a delay of 10 &micro;s > >> between reception of the last bit and the start of transmission of the > >> next message, your code is wrong - by nearly two orders of magnitude. > >> It is that simple. > >> > >> If we take Modbus RTU as an example, you should be waiting 3.5 * 10 / > >> 9600 seconds at a minimum - 3.65 /milli/seconds. If you are concerned > >> about exactly where the receive interrupt comes in the last stop bit, > >> add another half bit time and you get 3.7 ms. The half bit time is > >> negligible. > > > > Your numbers are only relevant to Modbus. The only requirement is that no two drivers are on the bus at the same time, which requires zero delay from the end of the previous stop bit and the beginning of the next start bit. This is why the timing indication from the UART needs to be the end of the stop bit, not the middle. > > > A single transmitter, while sending a multi-character message, does not > need any delay between sending the full stop bit and starting the next > start bit. That is obvious. And that is why a "transmission complete" > signal comes at the end of the start bit sent on the transmitter side.
??? Are you talking about the buffer management signals for the software?
> On the receiver side, the "byte received" signal comes in the /middle/ > of the stop bit, as seen by the receiver, because that could be at the > /end/ of the stop bit as seen by the transmitter due to clock > differences. (It could also be at the /start/ of the stop bit as seen > by the transmitter.) The receiver has to prepare for the next incoming > start bit as soon as it identifies the stop bit.
Again, this depends entirely on what this signal is used for. For entering the state of detecting the next start bit, yes, that is the perceived middle of the stop bit.
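A rough tolerance budget makes the middle-of-stop-bit point concrete. Assuming a 10-bit frame sampled at nominal bit centres, the stop-bit sample falls 9.5 bit times after the start-bit edge, so the accumulated clock error must stay under half a bit: 0.5/9.5 ≈ 5% combined transmitter-plus-receiver mismatch. A few percent of error in each direction is therefore enough for the receiver's perceived middle of the stop bit to coincide with the transmitter's end of it.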
> But you want an extra delay of at least 11 bits (a character frame plus > a buffer for clock speed differences) between messages - whether they > are from the same transmitter or a different transmitter - to allow > resynchronisation if something has gone wrong.
Again, you seem to not understand the use case. The split bus never has messages back to back on the same pair. It gets confusing because so many people have tried to talk up RS-485 using a single pair. In that case, everything is totally different. Slaves need to wait until the driver has stopped driving the bus, which means an additional bit time to account for timing errors. But RS-485 is not being used. Each bus is simplex, implementing a half-duplex protocol on the two buses.
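A minimal sketch of the master's side of that scheme, in C. uart_write(), uart_read_timeout() and TIMEOUT_MS are hypothetical stand-ins for whatever serial API and timing budget the PC software actually has; this illustrates the protocol shape, not a real implementation:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical serial helpers - not a real library API. */
    extern void uart_write(const uint8_t *buf, size_t len);
    extern int  uart_read_timeout(uint8_t *buf, size_t len, unsigned timeout_ms);

    #define TIMEOUT_MS 10   /* assumed budget; generous at 1 MBaud */

    /* One command/response cycle.  The master's driver is permanently
       enabled on the master->slave pair; only the addressed slave enables
       its driver on the slave->master pair, so neither simplex bus can
       ever see two drivers at once. */
    int transact(const uint8_t *cmd, size_t cmd_len,
                 uint8_t *reply, size_t reply_len)
    {
        uart_write(cmd, cmd_len);
        return uart_read_timeout(reply, reply_len, TIMEOUT_MS);
    }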
> I've explained in other posts why inter-message pauses are needed for > reliable UART communication protocols. They don't /need/ to be as long > as 35 bit times as Modbus specifies - 11 bit times is the minimum. If > you don't understand this by now, then we should drop this point.
You are assuming a need for error tolerance. But a munged message is the problem, not resyncing. A protocol to detect an error and retransmit is very messy. I've tried that before and it messes up the protocol badly.
> >> So put in a delay. An /appropriate/ delay. > > > > You are thinking software, like most people do. > It doesn't matter whether things are software, hardware, or something in > between.
Of course it does. Since the slaves are all logic, there is no need for delays, at all. The slave driver can be enabled at any time the message has been received and the reply is ready to go.
> > The slaves will be in logic, so the UART will have timing information relevant to the end of bits. I don't care how the master does it. The FTDI cable is alleged to "just work". Nonetheless, I will be providing for separate send and receive buses (or call it master/slave buses). Only one slave will be addressed at a time, so no collisions there, and the master can't collide with itself. > > > Yes, with the bus you have described, and the command/response protocol > you have described, there should be no problems with multiple > transmitters on the bus, and you have plenty of inter-message idle periods. > > However, this Usenet thread has been mixing posts from different people, > and discussions of different kinds of buses and protocols - not just the > solution you picked (which, as I have said before, should work fine). I > think this mixing means that people are sometimes talking at cross-purposes.
Yes, it gets confusing.
> >> If you are pushing the limits of a bus, in terms of load, distance, > >> speed, cable characteristics, etc., then you need to do such > >> calculations carefully and be precise in your specification of > >> components, cables, topology, connectors, etc. For many buses in > >> practice, they will work fine using whatever resistor you pull out your > >> box of random parts. For a testbench, you are going to go for something > >> between these extremes. > > > > How long is a piece of string? By keeping the interconnecting cables short, 4" or so, and a 5 foot cable from the PC, I don't expect problems with reflections. But it is prudent to allow for them anyway. The FTDI RS-422 cable seems to have a terminator on the receiver, but not the driver and no provision to add a terminator to the driver. > > > There is no point in having a terminator at a driver (unless you are > talking about very high speed signals with serial resistors for slope > control). You will want to add a terminator at the far end of both > buses. This will give you a single terminator on the PC-to-slave bus, > which is fine as it is fixed direction, and two terminators on the > slave-to-PC bus, which is appropriate as it has no fixed direction.
It does if you are using it in a shared bus with multiple drivers. The line should still be organized as linear with minimal stubs and a terminator on each end. This is not my plan, so maybe I should stop discussing it.
> (I agree that your piece of string is of a size that should work fine > without reflections being a concern.) > > Oddly enough, the RS-485 cable has a terminator that can be connected by the user, but that would be running through the cable separately from the transceiver signals, so essentially stubbed! I guess at 1 Mbps, 5 feet is less than the rise time, so not an issue. Since the interconnections between cards will be about five feet as well, it's unlikely to be an issue. The entire network will look like a lumped load, with the propagation time on the order of the rise/fall time. Even adding in a second chassis, makes the round trip twice the typical rise/fall time and unlikely to create any issues. > > > > They sell cables that have 5 m of cable, with a round trip of 30 ns or so. I think that would still not be significant in this application. The driver rise/fall times are 15 ns typ, 25 ns max. > > > The speed of a signal in a copper cable is typically about 70% of the > speed of light, giving a minimum round-trip time closer to 45 ns than 30 > ns. Not that it makes any difference here.
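Using the 70% figure on the 5 m cable: one way is 5 m / (0.7 × 3×10^8 m/s) ≈ 24 ns, so the round trip is ≈ 48 ns. That is only two to three of the driver's 15-25 ns rise/fall times, so any reflections settle within a single transition.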
The problem I have now is finding parts to use for this. These devices seem to be in a category that is hit hard by the shortage. My product uses the SN65C1168EPW, which is very hard to find in quantity. My customer has mentioned 18,000 units next year. I may need to get with the factory and see if they can supply me directly. -- Rick C. --++ Get 1,000 miles of free Supercharging --++ Tesla referral code - https://ts.la/richard11209
On 05/11/2022 18:23, Rick C wrote:
> On Saturday, November 5, 2022 at 7:47:59 AM UTC-4, David Brown wrote: >> On 04/11/2022 16:40, Rick C wrote: >>> On Friday, November 4, 2022 at 5:49:42 AM UTC-4, David Brown wrote: >>>> I made no such assumptions about timings. The figures I gave were for >>>> using a USB 2 based interface on a PC, where the USB polling timer is at >>>> 8 kHz, or 125 µs. That is half a bit time for 4 Kbaud. (I had doubled >>>> the frequency instead of halving it and said the baud had to be above 16 >>>> kBaud - that shows it's good to do your own calculations and not trust >>>> others blindly!). At 1 MBaud (the suggested rate), the absolute fastest >>>> the PC could turn around the bus would be 12 character times - half a >>>> stop bit is irrelevant. >>> >>> You are making an assumption of implementation. There is a processor in the USB cable that is implementing the UART. The driver enable control is most likely implemented there. It would be pointless and very subject to failure, to require the main CPU to handle this timing. There's no reason to expect the driver disable to take more than a fraction of a bit time, so the "UART" needs a timing signal to indicate when the stop bit has been completed. >>> >> I'm making the assumption that you are using appropriate hardware. No >> processor, just a USB device that has a "transmitter enable" signal on >> its UART. > > How can there not be a processor? I'm using a split bus, with the PC master driving all the slave receivers and all the slave transmitters sharing the PC receive bus. > > Is the PC not a processor?
Sure, the PC is a processor. It sends a command to the USB device, saying "send these N bytes of data out on the UART ...". The USB device is /not/ a processor - it is a converter between USB and UART. And it is the USB device that controls the transmit enable signal to the RS-485/RS-422 driver. There is no software on any processor handling the transmit enable signal - the driver is enabled precisely when the USB to UART device is sending data on the UART.
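For illustration, a minimal sketch of that model against FTDI's published D2XX API (error handling trimmed; the 1 MBaud rate is simply the figure discussed in this thread). Note that nothing here touches the transmit enable - on an RS-485-wired chip the TXDEN pin is driven by the chip's hardware for exactly the duration of the UART frame:

    #include <ftd2xx.h>

    /* Send one command frame out of the first FTDI device found. */
    FT_STATUS send_frame(unsigned char *buf, DWORD len)
    {
        FT_HANDLE h;
        DWORD written;
        FT_STATUS st = FT_Open(0, &h);
        if (st != FT_OK)
            return st;
        FT_SetBaudRate(h, 1000000);           /* 1 MBaud */
        FT_SetDataCharacteristics(h, FT_BITS_8, FT_STOP_BITS_1, FT_PARITY_NONE);
        st = FT_Write(h, buf, len, &written); /* chip handles TXDEN timing */
        FT_Close(h);
        return st;
    }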
> > The slaves have no USB. > > >> I'm getting the impression that you have never heard of such a UART >> (either in a USB-to-UART device, or as a UART peripheral elsewhere), and >> assume software has to be involved in enabling and disabling the >> transmitter. Please believe me when I say such UARTs /do/ exist - and >> the FTDI examples I keep giving are a case in point. > > You are not being clear. I don't know and don't care what is inside the FTDI device. That's just magic to me, like something inside a black hole, unknowable. More importantly, there is no transmitter enable on the RS-422 driver in the FTDI device, because it's not tristateable. >
As I mentioned earlier, this thread is getting seriously mixed-up. The transmit enable discussion started with /RS-485/ - long before you decided to use a hybrid bus and an RS-422 cable. You were concerned about how the PC controlled the transmitter enable for the RS-485 driver, and I have been trying to explain how this works when you use a decent UART device. You only confuse yourself when you jump to discussing RS-422 here, in this bit of the conversation. The FTDI USB to UART chip (or chips - they have several) provides a "transmitter enable" signal that is active with exactly the right timing for RS-485. This is provided automatically, in hardware - no software involved. If you connect one of these chips to an RS-485 driver, you immediately have a "perfect" RS-485 interface with automatic direction control. If you connect one of these chips to an RS-422 driver, you don't need direction control as RS-422 has two fixed-direction pairs. If you buy a pre-built cable from FTDI, it will have one of these driver chips connected appropriately.
> >>> The timing issue is not about loading another character into the transmit FIFO. It's about controlling the driver enable. >>> >> Yes, and it is a /solved/ issue if you pick the right hardware. >>> >>>> If you have a 9600 baud RS-485 receiver and you have a delay of 10 µs >>>> between reception of the last bit and the start of transmission of the >>>> next message, your code is wrong - by nearly two orders of magnitude. >>>> It is that simple. >>>> >>>> If we take Modbus RTU as an example, you should be waiting 3.5 * 10 / >>>> 9600 seconds at a minimum - 3.65 /milli/seconds. If you are concerned >>>> about exactly where the receive interrupt comes in the last stop bit, >>>> add another half bit time and you get 3.7 ms. The half bit time is >>>> negligible. >>> >>> Your numbers are only relevant to Modbus. The only requirement is that no two drivers are on the bus at the same time, which requires zero delay between the end of the previous stop bit and the beginning of the next start bit. This is why the timing indication from the UART needs to be the end of the stop bit, not the middle. >>> >> A single transmitter, while sending a multi-character message, does not >> need any delay between sending the full stop bit and starting the next >> start bit. That is obvious. And that is why a "transmission complete" >> signal comes at the end of the stop bit sent on the transmitter side. > > ??? Are you talking about the buffer management signals for the software? >
No.
> >> On the receiver side, the "byte received" signal comes in the /middle/ >> of the stop bit, as seen by the receiver, because that could be at the >> /end/ of the stop bit as seen by the transmitter due to clock >> differences. (It could also be at the /start/ of the stop bit as seen >> by the transmitter.) The receiver has to prepare for the next incoming >> start bit as soon as it identifies the stop bit. > > Again, this depends entirely on what this signal is used for. For entering the state of detecting the next start bit, yes, that is the perceived middle of the stop bit. >
Yes.
> >> But you want an extra delay of at least 11 bits (a character frame plus >> a buffer for clock speed differences) between messages - whether they >> are from the same transmitter or a different transmitter - to allow >> resynchronisation if something has gone wrong. > > Again, you seem to not understand the use case.
Yes, I understand your new use case, as well as the original discussions and the side discussions. I don't think /you/ understand that there had been a change, because you seem to imagine everything in the thread is in reference to your current solution.
> The split bus never has messages back to back on the same pair. It gets confusing because so many people have tried to talk up RS-485 using a single pair. In that case, everything is totally different. Slaves need to wait until the driver has stopped driving the bus, which means an additional bit time to account for timing errors. But RS-485 is not being used. Each bus is simplex, implementing a half-duplex protocol on the two buses. >
I agree. I know how your solution works, and have said many times that I think it sounds quite a good idea for the task in hand.
> >> I've explained in other posts why inter-message pauses are needed for >> reliable UART communication protocols. They don't /need/ to be as long >> as 35 bit times as Modbus specifies - 11 bit times is the minimum. If >> you don't understand this by now, then we should drop this point. > > You are assuming a need for error tolerance. But a munged message is the problem, not resyncing. A protocol to detect an error and retransmit is very messy. I've tried that before and it messes up the protocol badly. >
All communications have failures. Accept that as a principle, and understand how to deal with it. It's not hard to do - it is certainly much easier than trying to imagine and eliminate any possible cause of trouble.
> >>>> So put in a delay. An /appropriate/ delay. >>> >>> You are thinking software, like most people do. >> It doesn't matter whether things are software, hardware, or something in >> between. > > Of course it does. Since the slaves are all logic, there is no need for delays, at all. The slave driver can be enabled at any time the message has been received and the reply is ready to go. >
I'm sorry you don't understand, and I can't see how to explain it better than to say timing and delays are fundamental to the communication, not the implementation.
> >>> The slaves will be in logic, so the UART will have timing information relevant to the end of bits. I don't care how the master does it. The FTDI cable is alleged to "just work". Nonetheless, I will be providing for separate send and receive buses (or call it master/slave buses). Only one slave will be addressed at a time, so no collisions there, and the master can't collide with itself. >>> >> Yes, with the bus you have described, and the command/response protocol >> you have described, there should be no problems with multiple >> transmitters on the bus, and you have plenty of inter-message idle periods. >> >> However, this Usenet thread has been mixing posts from different people, >> and discussions of different kinds of buses and protocols - not just the >> solution you picked (which, as I have said before, should work fine). I >> think this mixing means that people are sometimes talking at cross-purposes. > > Yes, it gets confusing. >
There has, I think, been some interesting discussion despite the confusion. I hope you have got something out of it too - and I am glad that you have a bus solution that looks like it will work well for the purpose.
> >>>> If you are pushing the limits of a bus, in terms of load, distance, >>>> speed, cable characteristics, etc., then you need to do such >>>> calculations carefully and be precise in your specification of >>>> components, cables, topology, connectors, etc. For many buses in >>>> practice, they will work fine using whatever resistor you pull out your >>>> box of random parts. For a testbench, you are going to go for something >>>> between these extremes. >>> >>> How long is a piece of string? By keeping the interconnecting cables short, 4" or so, and a 5 foot cable from the PC, I don't expect problems with reflections. But it is prudent to allow for them anyway. The FTDI RS-422 cable seems to have a terminator on the receiver, but not the driver and no provision to add a terminator to the driver. >>> >> There is no point in having a terminator at a driver (unless you are >> talking about very high speed signals with serial resistors for slope >> control). You will want to add a terminator at the far end of both >> buses. This will give you a single terminator on the PC-to-slave bus, >> which is fine as it is fixed direction, and two terminators on the >> slave-to-PC bus, which is appropriate as it has no fixed direction. > > It does if you are using it in a shared bus with multiple drivers. The line should still be organized as linear with minimal stubs and a terminator on each end. This is not my plan, so maybe I should stop discussing it. >
Ideally, a bus should be (as you say) linear with minimal stubs and a terminator at each end - /except/ if one end is always driven. There is no point in having a terminator at a driver. Think about it in terms of impedance - the driver is either driving a line high, or it is driving it low. At any given time, one of the differential pair lines will have almost 0 ohm resistance to 0V, and the other will have nearly 0 ohm resistance to 5V. When the signal changes, these swap. Connecting a 100 ohm resistor across the lines at that point will make no difference whatsoever. The terminator is completely useless - it's just a waste of power. At the other end of the cable it's a different matter - there's a cable full of resistance, capacitance and inductance between the terminator and the near 0 ohm driver, so the terminator resistor /does/ make a difference. In more sophisticated tristate drivers, you would turn off (disconnect) the local terminator whenever the driver is enabled. This is done in some multi-lane systems as it can significantly reduce power and make slope control and pulse shaping easier. (It's not something you'd be likely to see on RS-485 buses.)
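To put numbers on the impedance argument: the voltage reflection coefficient at any point is rho = (Z - Z0)/(Z + Z0). Taking 120 ohm cable as an example:

    driven end, ~0 ohm driver, no terminator : rho = (0 - 120)/(0 + 120) ≈ -1
    driven end, 120 ohm resistor added       : 0 || 120 ≈ 0 ohm, so still rho ≈ -1
    far end, left open                       : rho = +1 (full reflection)
    far end, 120 ohm terminator              : rho = 0 (no reflection)

The resistor at the driven end changes nothing; the one at the far, undriven end is what absorbs the returning wave.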
> >> (I agree that your piece of string is of a size that should work fine > >> without reflections being a concern.) > >>> Oddly enough, the RS-485 cable has a terminator that can be connected by the user, but that would be running through the cable separately from the transceiver signals, so essentially stubbed! I guess at 1 Mbps, 5 feet is less than the rise time, so not an issue. Since the interconnections between cards will be about five feet as well, it's unlikely to be an issue. The entire network will look like a lumped load, with the propagation time on the order of the rise/fall time. Even adding in a second chassis, makes the round trip twice the typical rise/fall time and unlikely to create any issues. > >>> > >>> They sell cables that have 5 m of cable, with a round trip of 30 ns or so. I think that would still not be significant in this application. The driver rise/fall times are 15 ns typ, 25 ns max. > >>> > >> The speed of a signal in a copper cable is typically about 70% of the > >> speed of light, giving a minimum round-trip time closer to 45 ns than 30 > >> ns. Not that it makes any difference here. > > > > The problem I have now is finding parts to use for this. These devices seem to be in a category that is hit hard by the shortage. My product uses the SN65C1168EPW, which is very hard to find in quantity. My customer has mentioned 18,000 units next year. I may need to get with the factory and see if they can supply me directly. > >
Unfortunately, sourcing components these days is a much harder problem than designing the systems.
On Saturday, November 5, 2022 at 2:57:30 PM UTC-4, David Brown wrote:
> On 05/11/2022 18:23, Rick C wrote: > > On Saturday, November 5, 2022 at 7:47:59 AM UTC-4, David Brown wrote: > >> On 04/11/2022 16:40, Rick C wrote: > >>> On Friday, November 4, 2022 at 5:49:42 AM UTC-4, David Brown wrote: > >>>> I made no such assumptions about timings. The figures I gave were for > >>>> using a USB 2 based interface on a PC, where the USB polling timer is at > >>>> 8 kHz, or 125 µs. That is half a bit time for 4 Kbaud. (I had doubled > >>>> the frequency instead of halving it and said the baud had to be above 16 > >>>> kBaud - that shows it's good to do your own calculations and not trust > >>>> others blindly!). At 1 MBaud (the suggested rate), the absolute fastest > >>>> the PC could turn around the bus would be 12 character times - half a > >>>> stop bit is irrelevant. > >>> > >>> You are making an assumption of implementation. There is a processor in the USB cable that is implementing the UART. The driver enable control is most likely implemented there. It would be pointless and very subject to failure, to require the main CPU to handle this timing. There's no reason to expect the driver disable to take more than a fraction of a bit time, so the "UART" needs a timing signal to indicate when the stop bit has been completed. > >>> > >> I'm making the assumption that you are using appropriate hardware. No > >> processor, just a USB device that has a "transmitter enable" signal on > >> its UART. > > > > How can there not be a processor? I'm using a split bus, with the PC master driving all the slave receivers and all the slave transmitters sharing the PC receive bus. > > > > Is the PC not a processor? > Sure, the PC is a processor. It sends a command to the USB device, > saying "send these N bytes of data out on the UART ...". > > The USB device is /not/ a processor - it is a converter between USB and > UART. And it is the USB device that controls the transmit enable signal > to the RS-485/RS-422 driver. There is no software on any processor > handling the transmit enable signal - the driver is enabled precisely > when the USB to UART device is sending data on the UART.
Actually, the FTDI device is a processor. I expect it actually has no UART, rather the entire thing is done in software. I recall there being code to download for various purposes, such as JTAG, but I forget the details. I'm pretty sure the TxEn is controlled by FTDI software.
> > The slaves have no USB. > > > >> I'm getting the impression that you have never heard of such a UART > >> (either in a USB-to-UART device, or as a UART peripheral elsewhere), and > >> assume software has to be involved in enabling and disabling the > >> transmitter. Please believe me when I say such UARTs /do/ exist - and > >> the FTDI examples I keep giving are a case in point. > > > > You are not being clear. I don't know and don't care what is inside the FTDI device. That's just magic to me, like something inside a black hole, unknowable. More importantly, there is no transmitter enable on the RS-422 driver in the FTDI device, because it's not tristateable. > > > As I mentioned earlier, this thread is getting seriously mixed-up. The > transmit enable discussion started with /RS-485/ - long before you > decided to use a hybrid bus and an RS-422 cable. You were concerned > about how the PC controlled the transmitter enable for the RS-485 > driver, and I have been trying to explain how this works when you use a > decent UART device. You only confuse yourself when you jump to > discussing RS-422 here, in this bit of the conversation.
Ok, I'll stop talking about what I am doing.
> The FTDI USB to UART chip (or chips - they have several) provides a > "transmitter enable" signal that is active with exactly the right timing > for RS-485. This is provided automatically, in hardware - no software > involved. If you connect one of these chips to an RS-485 driver, you > immediately have a "perfect" RS-485 interface with automatic direction > control. If you connect one of these chips to an RS-422 driver, you > don't need direction control as RS-422 has two fixed-direction pairs. > If you buy a pre-built cable from FTDI, it will have one of these driver > chips connected appropriately.
Ok, thanks.
> >>> The timing issue is not about loading another character into the transmit FIFO. It's about controlling the driver enable. > >>> > >> Yes, and it is a /solved/ issue if you pick the right hardware. > >>> > >>>> If you have a 9600 baud RS-485 receiver and you have a delay of 10 µs > >>>> between reception of the last bit and the start of transmission of the > >>>> next message, your code is wrong - by nearly two orders of magnitude. > >>>> It is that simple. > >>>> > >>>> If we take Modbus RTU as an example, you should be waiting 3.5 * 10 / > >>>> 9600 seconds at a minimum - 3.65 /milli/seconds. If you are concerned > >>>> about exactly where the receive interrupt comes in the last stop bit, > >>>> add another half bit time and you get 3.7 ms. The half bit time is > >>>> negligible. > >>> > >>> Your numbers are only relevant to Modbus. The only requirement is that no two drivers are on the bus at the same time, which requires zero delay between the end of the previous stop bit and the beginning of the next start bit. This is why the timing indication from the UART needs to be the end of the stop bit, not the middle. > >>> > >> A single transmitter, while sending a multi-character message, does not > >> need any delay between sending the full stop bit and starting the next > >> start bit. That is obvious. And that is why a "transmission complete" > >> signal comes at the end of the stop bit sent on the transmitter side. > > > > ??? Are you talking about the buffer management signals for the software? > > > No. > > > >> On the receiver side, the "byte received" signal comes in the /middle/ > >> of the stop bit, as seen by the receiver, because that could be at the > >> /end/ of the stop bit as seen by the transmitter due to clock > >> differences. (It could also be at the /start/ of the stop bit as seen > >> by the transmitter.) The receiver has to prepare for the next incoming > >> start bit as soon as it identifies the stop bit. > > > > Again, this depends entirely on what this signal is used for. For entering the state of detecting the next start bit, yes, that is the perceived middle of the stop bit. > > > Yes. > > > >> But you want an extra delay of at least 11 bits (a character frame plus > >> a buffer for clock speed differences) between messages - whether they > >> are from the same transmitter or a different transmitter - to allow > >> resynchronisation if something has gone wrong. > > > > Again, you seem to not understand the use case. > Yes, I understand your new use case, as well as the original discussions > and the side discussions. I don't think /you/ understand that there had > been a change, because you seem to imagine everything in the thread is > in reference to your current solution. > > The split bus never has messages back to back on the same pair. It gets confusing because so many people have tried to talk up RS-485 using a single pair. In that case, everything is totally different. Slaves need to wait until the driver has stopped driving the bus, which means an additional bit time to account for timing errors. But RS-485 is not being used. Each bus is simplex, implementing a half-duplex protocol on the two buses. > > > I agree. I know how your solution works, and have said many times that > I think it sounds quite a good idea for the task in hand.
Ok, then the conversation has reached an end.
> >> I've explained in other posts why inter-message pauses are needed for > >> reliable UART communication protocols. They don't /need/ to be as long > >> as 35 bit times as Modbus specifies - 11 bit times is the minimum. If > >> you don't understand this by now, then we should drop this point. > > > > You are assuming a need for error tolerance. But a munged message is the problem, not resyncing. A protocol to detect an error and retransmit is very messy. I've tried that before and it messes up the protocol badly. > > > All communications have failures. Accept that as a principle, and > understand how to deal with it. It's not hard to do - it is certainly > much easier than trying to imagine and eliminate any possible cause of > trouble.
That's not a premise I have to deal with. I will also die, and I'm not factoring that into the project either. I don't need to eliminate "any possible cause of trouble"; I only have to reach an effective level of reliability. As I've said, error handling protocols are complex and themselves subject to failure. It's much more likely that the error handling protocol would give me trouble than that bit errors on the bus would. So I choose the more reliable solution: no error handling protocol in the software, and nothing further to deal with errors.
> >>>> So put in a delay. An /appropriate/ delay. > >>> > >>> You are thinking software, like most people do. > >> It doesn't matter whether things are software, hardware, or something in > >> between. > > > > Of course it does. Since the slaves are all logic, there is no need for delays, at all. The slave driver can be enabled at any time the message has been received and the reply is ready to go. > > > I'm sorry you don't understand, and I can't see how to explain it better > than to say timing and delays are fundamental to the communication, not > the implementation.
I understand perfectly. I only need to meet the requirements of this project. Not the requirements of some ultra high reliability project. With the RS-422 interface, I expect I could run the entire system continuously, and would not find an error in my lifetime. That's good enough for me.
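As a rough sanity check, with an assumed (not measured) bit error rate of 1e-12: a 1 Mbps link run flat out sees one error per 10^12 / 10^6 = 10^6 seconds, about 11.6 days of saturated traffic. A short, terminated differential link at this speed should do far better than 1e-12, and a test bench is nowhere near 100% bus utilisation, so years between errors is a plausible expectation.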
> >>> The slaves will be in logic, so the UART will have timing information relevant to the end of bits. I don't care how the master does it. The FTDI cable is alleged to "just work". Nonetheless, I will be providing for separate send and receive buses (or call it master/slave buses). Only one slave will be addressed at a time, so no collisions there, and the master can't collide with itself. > >>> > >> Yes, with the bus you have described, and the command/response protocol > >> you have described, there should be no problems with multiple > >> transmitters on the bus, and you have plenty of inter-message idle periods. > >> > >> However, this Usenet thread has been mixing posts from different people, > >> and discussions of different kinds of buses and protocols - not just the > >> solution you picked (which, as I have said before, should work fine). I > >> think this mixing means that people are sometimes talking at cross-purposes. > > > > Yes, it gets confusing. > > > There has, I think, been some interesting discussion despite the > confusion. I hope you have got something out of it too - and I am glad > that you have a bus solution that looks like it will work well for the > purpose. > > > >>>> If you are pushing the limits of a bus, in terms of load, distance, > >>>> speed, cable characteristics, etc., then you need to do such > >>>> calculations carefully and be precise in your specification of > >>>> components, cables, topology, connectors, etc. For many buses in > >>>> practice, they will work fine using whatever resistor you pull out your > >>>> box of random parts. For a testbench, you are going to go for something > >>>> between these extremes. > >>> > >>> How long is a piece of string? By keeping the interconnecting cables short, 4" or so, and a 5 foot cable from the PC, I don't expect problems with reflections. But it is prudent to allow for them anyway. The FTDI RS-422 cable seems to have a terminator on the receiver, but not the driver and no provision to add a terminator to the driver. > >>> > >> There is no point in having a terminator at a driver (unless you are > >> talking about very high speed signals with serial resistors for slope > >> control). You will want to add a terminator at the far end of both > >> buses. This will give you a single terminator on the PC-to-slave bus, > >> which is fine as it is fixed direction, and two terminators on the > >> slave-to-PC bus, which is appropriate as it has no fixed direction. > > > > It does if you are using it in a shared bus with multiple drivers. The line should still be organized as linear with minimal stubs and a terminator on each end. This is not my plan, so maybe I should stop discussing it. > > > Ideally, a bus should be (as you say) linear with minimal stubs and a > terminator at each end - /except/ if one end is always driven. There is > no point in having a terminator at a driver. Think about it in terms of > impedance - the driver is either driving a line high, or it is driving > it low. At any given time, one of the differential pair lines will have > almost 0 ohm resistance to 0V, and the other will have nearly 0 ohm > resistance to 5V. When the signal changes, these swap. Connecting a > 100 ohm resistor across the lines at that point will make no difference > whatsoever. The terminator is completely useless - it's just a waste of > power. 
At the other end of the cable it's a different matter - there's > a cable full of resistance, capacitance and inductance between the > terminator and the near 0 ohm driver, so the terminator resistor /does/ > make a difference. > > In more sophisticated tristate drivers, you would turn off (disconnect) the > local terminator whenever the driver is enabled. This is done in some > multi-lane systems as it can significantly reduce power and make slope > control and pulse shaping easier. (It's not something you'd be likely > to see on RS-485 buses.) > > > >> (I agree that your piece of string is of a size that should work fine > >> without reflections being a concern.) > >>> Oddly enough, the RS-485 cable has a terminator that can be connected by the user, but that would be running through the cable separately from the transceiver signals, so essentially stubbed! I guess at 1 Mbps, 5 feet is less than the rise time, so not an issue. Since the interconnections between cards will be about five feet as well, it's unlikely to be an issue. The entire network will look like a lumped load, with the propagation time on the order of the rise/fall time. Even adding in a second chassis, makes the round trip twice the typical rise/fall time and unlikely to create any issues. > >>> > >>> They sell cables that have 5 m of cable, with a round trip of 30 ns or so. I think that would still not be significant in this application. The driver rise/fall times are 15 ns typ, 25 ns max. > >>> > >> The speed of a signal in a copper cable is typically about 70% of the > >> speed of light, giving a minimum round-trip time closer to 45 ns than 30 > >> ns. Not that it makes any difference here. > > > > The problem I have now is finding parts to use for this. These devices seem to be in a category that is hit hard by the shortage. My product uses the SN65C1168EPW, which is very hard to find in quantity. My customer has mentioned 18,000 units next year. I may need to get with the factory and see if they can supply me directly. > > > Unfortunately, sourcing components these days is a much harder problem > than designing the systems.
Indeed. -- Rick C. -+-- Get 1,000 miles of free Supercharging -+-- Tesla referral code - https://ts.la/richard11209