EmbeddedRelated.com
Forums

Shared Communications Bus - RS-422 or RS-485

Started by Rick C November 2, 2022
On 04/11/2022 05:10, Rick C wrote:

> Yeah, I'm fine with a cable I can make to any length I want in 5 minutes, with most of that spent finding where I put the parts and tool. Oh, and costs less than $1.
A cable you can make in 5 minutes doesn't cost $1, unless you earn less than a hamburger flipper and the parts are free. The cost of a poor connection when making the cable could be huge in downtime of the testbench. It should not be hard to get a bag of pre-made short Ethernet cables for a couple of dollars per cable - it's probably cheaper to buy an effectively unlimited supply than to buy a good quality crimping tool.
On 04/11/2022 10:49, David Brown wrote:
> On 04/11/2022 08:45, pozz wrote:
>> On 03/11/2022 16:26, David Brown wrote:
>>> On 03/11/2022 14:00, pozz wrote:
>>>> On 03/11/2022 12:42, David Brown wrote:
>>>>> On 03/11/2022 00:27, Rick C wrote:
>>>>>> On Wednesday, November 2, 2022 at 4:49:16 PM UTC-4, David Brown wrote:
>>>>>>> On 02/11/2022 20:20, Rick C wrote:
>>>>>>>> On Wednesday, November 2, 2022 at 5:28:21 AM UTC-4, David Brown wrote:
>>>>>>>>> On 02/11/2022 06:28, Rick C wrote:
>>>>
>>>>> You are correct that reception is in the middle of the stop bit (typically sub-slot 9 of 16). The first transmitter will be disabled at the end of the stop bit, and the next transmitter must not enable its driver until after that point - it must wait at least half a bit time after reception before starting transmission. (It can wait longer without trouble, which is why faster baud rates are less likely to involve any complications here.)
>>>>
>>>> Do you mean that RX interrupt triggers in the middle of the stop bit and not at the end? Interesting, but are you sure this is the case for every UART implemented in MCUs?
>>>
>>> Of course I'm not sure - there are a /lot/ of MCU manufacturers!
>>>
>>> UART receivers usually work in the same way, however. They have a sample clock running at 16 times the baud clock. The start bit is edge triggered to give the start of the character frame. Then each bit is sampled in the middle of its time slot - usually at subbit slots 7, 8, and 9 with majority voting. So the stop bit is recognized by subbit slot 9 of the tenth bit (assuming 8-bit, no parity) - the voltage on the line after that is irrelevant. (Even when you have two stop bits, receivers never check the second stop bit - it affects transmit timing only.) What purpose would there be in waiting another 7 subbits before triggering the interrupt, DMA, or whatever?
>> There's no real purpose, but it's important to know exactly when the RX interrupt is fired from the UART.
>
> I think it is extremely rare that this is important. I can't think of a single occasion when I have thought it remotely relevant where in the stop bit the interrupt comes.
>
>> Usually the next transmitter starts transmitting after receiving the last byte of the previous transmitter (for example, the slave starts replying to the master after receiving the complete message from it).
>
> No. Usually the next transmitter starts after receiving the last byte, and /then a pause/. There will always be some handling time in software, and may also include an explicit pause. Almost always you will want to do at least a minimum of checking of the incoming data before deciding on the next telegram to be sent out. But if you have very fast handling in relation to the baud rate, you will want an explicit pause too - protocols regularly specify a minimum pause (such as 3.5 character times for Modbus RTU), and you definitely want it to be at least one full character time to ensure no listener gets hopelessly out of sync.
In theory, if all the nodes on the bus were able to change direction in hardware (exactly at the end of the stop bit), you would not be forced to introduce any delay before transmitting. Many times I'm the author of a custom protocol between some nodes on a shared bus, so I'm not forced to follow any specification. When I didn't introduce any delay before transmitting, I sometimes faced this issue. In my experience, the bus is often heterogeneous enough to have a fast slave replying to a slow master.
>> Now I think of the issue related to a transmitter that delays a little to turn around the direction of its transceiver, from TX to RX. Every transmitter on the bus should take into account this delay and avoid starting transmission too soon.
>
> They should, yes. The turnaround delay should be negligible in this day and age - if not, your software design is screwed or you have picked the wrong hardware. (Of course, you don't always get the choice of hardware you want, and programmers are often left to find ways around hardware design flaws.)
Negligible doesn't mean anything by itself. If there's a humble 8-bit PIC (the previous transmitter) clocked at 8 MHz that changes direction in its TXC interrupt while other interrupts are active, and there's a Cortex-M4 clocked at 200 MHz (the next transmitter), you will encounter this issue. This is more evident if, as you are saying, the Cortex-M4 is able to start processing the message from the PIC at the midpoint of the last stop bit, while the PIC disables its driver at the *end* of the stop bit plus an additional delay caused by interrupt handling. In this case the half bit time is not negligible and must be added to the transmission delay.
>> So I usually implement a short delay before starting a new message transmission. If the maximum expected delay of moving the direction from TX to RX is 10us, I could think to use a 10us delay, but this is wrong in your assumption.
>
> Implementing an explicit delay (or being confident that your telegram handling code takes long enough) is a good idea.
>
>> If the RX interrupt is at the middle of the stop bit, I should delay the new transmission of 10us + half of bit time. With 9600 this is 52us that is much higher than 10us.
>
> I made no such assumptions about timings. The figures I gave were for using a USB 2 based interface on a PC, where the USB polling timer is at 8 kHz, or 125 µs. That is half a bit time for 4 kBaud. (I had doubled the frequency instead of halving it and said the baud had to be above 16 kBaud - that shows it's good to do your own calculations and not trust others blindly!) At 1 MBaud (the suggested rate), the absolute fastest the PC could turn around the bus would be 12 character times - half a stop bit is irrelevant.
>
> If you have a 9600 baud RS-485 receiver and you have a delay of 10 µs between reception of the last bit and the start of transmission of the next message, your code is wrong - by nearly two orders of magnitude. It is that simple.
Not always. If you have only MCUs that are able to control direction in hardware, you don't need any delay before transmission.
> If we take Modbus RTU as an example, you should be waiting 3.5 * 10 / 9600 seconds at a minimum - 3.65 /milli/seconds. If you are concerned about exactly where the receive interrupt comes in the last stop bit, add another half bit time and you get 3.7 ms. The half bit time is negligible.
Oh yes, if you have already implemented a pause of 3.5 char times, it is ok.
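[Editor's note: the silent-interval arithmetic from the exchange above, as a small sketch. It uses the thread's 10-bit character framing; real Modbus RTU characters are 11 bits, which makes the pause slightly longer.]

```python
# Sketch of the Modbus RTU inter-frame pause arithmetic discussed above.
# bits_per_char=10 matches the thread's figure; proper Modbus RTU
# framing is 11 bits per character.

def modbus_silent_interval(baud, bits_per_char=10, chars=3.5):
    """Minimum idle time (seconds) between Modbus RTU frames."""
    return chars * bits_per_char / baud

for baud in (9600, 1_000_000):
    t = modbus_silent_interval(baud)
    print(f"{baud:>8} baud: {t * 1e3:.3f} ms minimum pause")
```

At 9600 baud this gives roughly 3.65 ms, which dwarfs both a 10 µs transceiver turnaround and the half-bit-time ambiguity of the RX interrupt.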
>> I know the next transmitter should make some processing of the previous received message, prepare and buffer the new message to transmit, so the delay is somewhat automatic, but in many cases I have small 8-bit PICs and a full-featured Linux box on the same bus, and the Linux box could be very fast to start the new transmission.
>
> So put in a delay. An /appropriate/ delay.
>
>>>> I wouldn't be surprised if the implementation was different for different manufacturers.
>>>
>>> I've seen a bit of variation, including 8 subbit clocks per baud clock, wider sampling ranges, re-sync of the clock on edges, etc. And of course you don't always get the details of the timings in datasheets (and who bothers measuring them?) But the key principles are the same.
>>>
>>>>>> None of this matters to me really. I'm going to use more wires, and do the multi-drop from the PC to the slaves on one pair and use RS-422 to multi-point from the slaves to the PC. Since the slaves are controlled by the master, they will never collide. The master can't collide with itself, so I can ignore any issues with this. I will use the bias resistors to assure a valid idle state. I may need to select different devices than the ones I use in the product. I think there are differences in the input load and I want to be sure I can chain up to 32 units.
>>>>>
>>>>> OK. I have no idea what such a hybrid bus should technically be called, but I think it should work absolutely fine for the purpose and seems like a solid solution. I would not foresee any issues with 32 nodes on such a bus, especially if it is relatively short and you have terminators at each end.
>>>>
>>>> In my experience, termination resistors at each end of the line could introduce other troubles if they aren't strictly required (because of signal integrity on long lines at high baud rates).
>>> RS-485 requires them - you want to hold the bus at a stable idle state when nothing is driving it.
>>
>> But this is the goal of *bias* resistors, not termination resistors.
>
> Yes - but see below. Bias resistors are part of the termination - it just means that you have terminating resistors to 5V and 0V as well as across the balanced pair.
>
>>> You also want to have a bit of load so that you have some current on the bus, and thereby greater noise immunity.
>>
>> Of course, but termination resistors are usually small (around 100 ohms) because they should match the impedance of the cable. If you want only to introduce "some current" on the bus, you could use resistors in the order of 1k, but this isn't strictly a *termination* resistor.
>
> If you have a cable that is long enough (or speeds fast enough) that it needs to be treated as a transmission line with controlled impedance, then you do need impedance matched terminators to avoid reflections causing trouble. Usually you don't.
>
> A "terminating resistor" is just a "resistor at the terminator" - it does not imply impedance matching, or any other specific purpose. You pick a value (and network) appropriate for the task in hand - maybe you want impedance matching, maybe you'd rather have larger values to reduce power consumption.
>
>>>> The receiver input impedance of all the nodes on the bus are in parallel with the two terminators. If you have many nodes, the equivalent impedance on the bus is much smaller, and the divider formed with the bias resistors could reduce the differential voltage between A and B at idle to less than 200mV.
>>>>
>>>> If you don't use true fail-safe transceivers, a false start bit could be seen by these kinds of receivers.
>>>
>>> Receiver load is very small on modern RS-485 drivers.
>>
>> ST3485 says the input load of the receiver is around 24k. When you connect 32 slaves, the equivalent resistance would be 750 ohms, which should be enough to have "some current" on the bus. If you add *termination* resistors in the order of 100R on both sides, you could drastically reduce the differential voltage between A and B at idle state.
>
> If you are pushing the limits of a bus, in terms of load, distance, speed, cable characteristics, etc., then you need to do such calculations carefully and be precise in your specification of components, cables, topology, connectors, etc. For many buses in practice, they will work fine using whatever resistor you pull out of your box of random parts. For a testbench, you are going to go for something between these extremes.
Ok, I thought you were suggesting adding impedance-matching (low value) resistors as terminators in any case.
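[Editor's note: the loading argument above is easy to check numerically. The sketch below is illustrative, not a specific design: a 5 V supply, 560R bias pull-up/pull-down (an assumed common choice), the 24k receiver input load quoted for the ST3485, and optional 120R terminators.]

```python
# Sketch: idle-state differential voltage on an RS-485 bus, set by the
# bias resistors driving the total differential load.  All component
# values are illustrative assumptions (5 V supply, 560R bias resistors,
# 24k receiver input load per the ST3485 figure quoted above, 120R
# terminators).

def parallel(*resistors):
    return 1.0 / sum(1.0 / r for r in resistors)

def idle_differential(v_supply, r_bias, n_nodes, r_rx=24e3, terminators=()):
    """A-B idle voltage from the divider: bias pull-up, differential
    bus load (receiver loads plus any terminators), bias pull-down."""
    r_bus = parallel(*([r_rx] * n_nodes + list(terminators)))
    return v_supply * r_bus / (2 * r_bias + r_bus)

v_open = idle_differential(5.0, 560, 32)
v_term = idle_differential(5.0, 560, 32, terminators=(120, 120))
print(f"32 nodes, no terminators: {v_open:.2f} V differential at idle")
print(f"32 nodes, 2 x 120R:       {v_term:.2f} V differential at idle")
```

With the terminators fitted, the idle differential lands near the 200 mV receiver threshold, which is exactly the concern raised above about non-fail-safe receivers on a terminated, heavily loaded bus.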
On Friday, November 4, 2022 at 5:49:42 AM UTC-4, David Brown wrote:
> On 04/11/2022 08:45, pozz wrote:
>> On 03/11/2022 16:26, David Brown wrote:
>>> On 03/11/2022 14:00, pozz wrote:
>>>> On 03/11/2022 12:42, David Brown wrote:
>>>>> On 03/11/2022 00:27, Rick C wrote:
>>>>>> On Wednesday, November 2, 2022 at 4:49:16 PM UTC-4, David Brown wrote:
>>>>>>> On 02/11/2022 20:20, Rick C wrote:
>>>>>>>> On Wednesday, November 2, 2022 at 5:28:21 AM UTC-4, David Brown wrote:
>>>>>>>>> On 02/11/2022 06:28, Rick C wrote:
>>>>
>>>>> You are correct that reception is in the middle of the stop bit (typically sub-slot 9 of 16). The first transmitter will be disabled at the end of the stop bit, and the next transmitter must not enable its driver until after that point - it must wait at least half a bit time after reception before starting transmission. (It can wait longer without trouble, which is why faster baud rates are less likely to involve any complications here.)
>>>>
>>>> Do you mean that RX interrupt triggers in the middle of the stop bit and not at the end? Interesting, but are you sure this is the case for every UART implemented in MCUs?
>>>
>>> Of course I'm not sure - there are a /lot/ of MCU manufacturers!
>>>
>>> UART receivers usually work in the same way, however. They have a sample clock running at 16 times the baud clock. The start bit is edge triggered to give the start of the character frame. Then each bit is sampled in the middle of its time slot - usually at subbit slots 7, 8, and 9 with majority voting. So the stop bit is recognized by subbit slot 9 of the tenth bit (assuming 8-bit, no parity) - the voltage on the line after that is irrelevant. (Even when you have two stop bits, receivers never check the second stop bit - it affects transmit timing only.) What purpose would there be in waiting another 7 subbits before triggering the interrupt, DMA, or whatever?
>> There's no real purpose, but it's important to know exactly when the RX interrupt is fired from the UART.
>
> I think it is extremely rare that this is important. I can't think of a single occasion when I have thought it remotely relevant where in the stop bit the interrupt comes.
>
>> Usually the next transmitter starts transmitting after receiving the last byte of the previous transmitter (for example, the slave starts replying to the master after receiving the complete message from it).
>
> No. Usually the next transmitter starts after receiving the last byte, and /then a pause/. There will always be some handling time in software, and may also include an explicit pause. Almost always you will want to do at least a minimum of checking of the incoming data before deciding on the next telegram to be sent out. But if you have very fast handling in relation to the baud rate, you will want an explicit pause too - protocols regularly specify a minimum pause (such as 3.5 character times for Modbus RTU), and you definitely want it to be at least one full character time to ensure no listener gets hopelessly out of sync.
>
>> Now I think of the issue related to a transmitter that delays a little to turn around the direction of its transceiver, from TX to RX. Every transmitter on the bus should take into account this delay and avoid starting transmission too soon.
>
> They should, yes. The turnaround delay should be negligible in this day and age - if not, your software design is screwed or you have picked the wrong hardware. (Of course, you don't always get the choice of hardware you want, and programmers are often left to find ways around hardware design flaws.)
>
>> So I usually implement a short delay before starting a new message transmission. If the maximum expected delay of moving the direction from TX to RX is 10us, I could think to use a 10us delay, but this is wrong in your assumption.
> Implementing an explicit delay (or being confident that your telegram handling code takes long enough) is a good idea.
>
>> If the RX interrupt is at the middle of the stop bit, I should delay the new transmission of 10us + half of bit time. With 9600 this is 52us that is much higher than 10us.
>
> I made no such assumptions about timings. The figures I gave were for using a USB 2 based interface on a PC, where the USB polling timer is at 8 kHz, or 125 µs. That is half a bit time for 4 kBaud. (I had doubled the frequency instead of halving it and said the baud had to be above 16 kBaud - that shows it's good to do your own calculations and not trust others blindly!) At 1 MBaud (the suggested rate), the absolute fastest the PC could turn around the bus would be 12 character times - half a stop bit is irrelevant.
You are making an assumption about the implementation. There is a processor in the USB cable that implements the UART. The driver enable control is most likely implemented there. It would be pointless, and very subject to failure, to require the main CPU to handle this timing. There's no reason to expect the driver disable to take more than a fraction of a bit time, so the "UART" needs a timing signal to indicate when the stop bit has been completed. The timing issue is not about loading another character into the transmit FIFO. It's about controlling the driver enable.
> If you have a 9600 baud RS-485 receiver and you have a delay of 10 µs between reception of the last bit and the start of transmission of the next message, your code is wrong - by nearly two orders of magnitude. It is that simple.
>
> If we take Modbus RTU as an example, you should be waiting 3.5 * 10 / 9600 seconds at a minimum - 3.65 /milli/seconds. If you are concerned about exactly where the receive interrupt comes in the last stop bit, add another half bit time and you get 3.7 ms. The half bit time is negligible.
Your numbers are only relevant to Modbus. The only real requirement is that no two drivers are on the bus at the same time, and that can be satisfied with zero delay between the end of the previous stop bit and the beginning of the next start bit. This is why the timing indication from the UART needs to be at the end of the stop bit, not the middle.
>> I know the next transmitter should make some processing of the previous received message, prepare and buffer the new message to transmit, so the delay is somewhat automatic, but in many cases I have small 8-bit PICs and a full-featured Linux box on the same bus, and the Linux box could be very fast to start the new transmission.
>
> So put in a delay. An /appropriate/ delay.
You are thinking software, like most people do. The slaves will be implemented in logic, so the UART will have timing information relevant to the ends of bits. I don't care how the master does it. The FTDI cable is alleged to "just work". Nonetheless, I will be providing for separate send and receive buses (or call them master/slave buses). Only one slave will be addressed at a time, so no collisions there, and the master can't collide with itself.
>>>> I wouldn't be surprised if the implementation was different for different manufacturers.
>>>
>>> I've seen a bit of variation, including 8 subbit clocks per baud clock, wider sampling ranges, re-sync of the clock on edges, etc. And of course you don't always get the details of the timings in datasheets (and who bothers measuring them?) But the key principles are the same.
>>>
>>>>>> None of this matters to me really. I'm going to use more wires, and do the multi-drop from the PC to the slaves on one pair and use RS-422 to multi-point from the slaves to the PC. Since the slaves are controlled by the master, they will never collide. The master can't collide with itself, so I can ignore any issues with this. I will use the bias resistors to assure a valid idle state. I may need to select different devices than the ones I use in the product. I think there are differences in the input load and I want to be sure I can chain up to 32 units.
>>>>>
>>>>> OK. I have no idea what such a hybrid bus should technically be called, but I think it should work absolutely fine for the purpose and seems like a solid solution. I would not foresee any issues with 32 nodes on such a bus, especially if it is relatively short and you have terminators at each end.
>>>>
>>>> In my experience, termination resistors at each end of the line could introduce other troubles if they aren't strictly required (because of signal integrity on long lines at high baud rates).
>>>
>>> RS-485 requires them - you want to hold the bus at a stable idle state when nothing is driving it.
>>
>> But this is the goal of *bias* resistors, not termination resistors.
>
> Yes - but see below. Bias resistors are part of the termination - it just means that you have terminating resistors to 5V and 0V as well as across the balanced pair.
>>> You also want to have a bit of load so that you have some current on the bus, and thereby greater noise immunity.
>>
>> Of course, but termination resistors are usually small (around 100 ohms) because they should match the impedance of the cable. If you want only to introduce "some current" on the bus, you could use resistors in the order of 1k, but this isn't strictly a *termination* resistor.
>
> If you have a cable that is long enough (or speeds fast enough) that it needs to be treated as a transmission line with controlled impedance, then you do need impedance matched terminators to avoid reflections causing trouble. Usually you don't.
>
> A "terminating resistor" is just a "resistor at the terminator" - it does not imply impedance matching, or any other specific purpose. You pick a value (and network) appropriate for the task in hand - maybe you want impedance matching, maybe you'd rather have larger values to reduce power consumption.
>
>>>> The receiver input impedance of all the nodes on the bus are in parallel with the two terminators. If you have many nodes, the equivalent impedance on the bus is much smaller, and the divider formed with the bias resistors could reduce the differential voltage between A and B at idle to less than 200mV.
>>>>
>>>> If you don't use true fail-safe transceivers, a false start bit could be seen by these kinds of receivers.
>>>
>>> Receiver load is very small on modern RS-485 drivers.
>>
>> ST3485 says the input load of the receiver is around 24k. When you connect 32 slaves, the equivalent resistance would be 750 ohms, which should be enough to have "some current" on the bus. If you add *termination* resistors in the order of 100R on both sides, you could drastically reduce the differential voltage between A and B at idle state.
> If you are pushing the limits of a bus, in terms of load, distance, speed, cable characteristics, etc., then you need to do such calculations carefully and be precise in your specification of components, cables, topology, connectors, etc. For many buses in practice, they will work fine using whatever resistor you pull out of your box of random parts. For a testbench, you are going to go for something between these extremes.
How long is a piece of string? By keeping the interconnecting cables short, 4" or so, and a 5 foot cable from the PC, I don't expect problems with reflections. But it is prudent to allow for them anyway.

The FTDI RS-422 cable seems to have a terminator on the receiver, but not the driver, and no provision to add a terminator to the driver. Oddly enough, the RS-485 cable has a terminator that can be connected by the user, but that would run through the cable separately from the transceiver signals, so essentially stubbed!

I guess at 1 Mbps, 5 feet is less than the rise time, so not an issue. Since the interconnections between cards will be about five feet as well, it's unlikely to be an issue. The entire network will look like a lumped load, with the propagation time on the order of the rise/fall time. Even adding in a second chassis makes the round trip twice the typical rise/fall time, and is unlikely to create any issues. They sell cables that have 5 m of cable, with a round trip of 30 ns or so. I think that would still not be significant in this application. The driver rise/fall times are 15 ns typ, 25 ns max.

-- 
Rick C.
+-+ Get 1,000 miles of free Supercharging
+-+ Tesla referral code - https://ts.la/richard11209
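[Editor's note: the lumped-load reasoning above can be checked roughly. In the sketch below, the 0.66 velocity factor is an assumed typical value for twisted pair, and the 15-25 ns range is the driver rise/fall figure quoted in the post.]

```python
# Rough check of the lumped-load argument: a bus can usually be treated
# as a lumped load when the round-trip propagation delay is small
# compared with the driver rise time.  The 0.66 velocity factor is an
# assumed typical twisted-pair value.

C = 3.0e8  # speed of light in vacuum, m/s

def round_trip_ns(length_m, velocity_factor=0.66):
    """Round-trip propagation delay along the cable, in nanoseconds."""
    return 2 * length_m / (velocity_factor * C) * 1e9

for length_m in (1.5, 5.0):   # ~5 ft interconnect, a 5 m cable
    print(f"{length_m} m: ~{round_trip_ns(length_m):.0f} ns round trip "
          "vs 15-25 ns driver rise/fall")
```

A 5-foot run gives a round trip comparable to the driver rise time, while a 5 m cable is a few times longer - marginal, but still short compared to a 1 µs bit at 1 Mbps.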
On Friday, November 4, 2022 at 6:13:37 AM UTC-4, David Brown wrote:
> On 04/11/2022 05:10, Rick C wrote:
>
>> Yeah, I'm fine with a cable I can make to any length I want in 5 minutes, with most of that spent finding where I put the parts and tool. Oh, and costs less than $1.
>
> A cable you can make in 5 minutes doesn't cost $1, unless you earn less than a hamburger flipper and the parts are free. The cost of a poor connection when making the cable could be huge in downtime of the testbench. It should not be hard to get a bag of pre-made short Ethernet cables for a couple of dollars per cable - it's probably cheaper to buy an effectively unlimited supply than to buy a good quality crimping tool.
You are not only right, but absolutely correct. Cablestogo has 6 inch cables for $2.99 each. I'd like to keep them a bit shorter, but that's probably not an issue. Under quantity, they even list "unlimited supply".

-- 
Rick C.
++- Get 1,000 miles of free Supercharging
++- Tesla referral code - https://ts.la/richard11209
On 04/11/2022 15:37, pozz wrote:
> On 04/11/2022 10:49, David Brown wrote:
>> On 04/11/2022 08:45, pozz wrote:
>>> On 03/11/2022 16:26, David Brown wrote:
>>>> On 03/11/2022 14:00, pozz wrote:
>>>>> On 03/11/2022 12:42, David Brown wrote:
>>>>>> On 03/11/2022 00:27, Rick C wrote:
>>>>>>> On Wednesday, November 2, 2022 at 4:49:16 PM UTC-4, David Brown wrote:
>>>>>>>> On 02/11/2022 20:20, Rick C wrote:
>>>>>>>>> On Wednesday, November 2, 2022 at 5:28:21 AM UTC-4, David Brown wrote:
>>>>>>>>>> On 02/11/2022 06:28, Rick C wrote:
>>>>>
>>>>>> You are correct that reception is in the middle of the stop bit (typically sub-slot 9 of 16). The first transmitter will be disabled at the end of the stop bit, and the next transmitter must not enable its driver until after that point - it must wait at least half a bit time after reception before starting transmission. (It can wait longer without trouble, which is why faster baud rates are less likely to involve any complications here.)
>>>>>
>>>>> Do you mean that RX interrupt triggers in the middle of the stop bit and not at the end? Interesting, but are you sure this is the case for every UART implemented in MCUs?
>>>>
>>>> Of course I'm not sure - there are a /lot/ of MCU manufacturers!
>>>>
>>>> UART receivers usually work in the same way, however. They have a sample clock running at 16 times the baud clock. The start bit is edge triggered to give the start of the character frame. Then each bit is sampled in the middle of its time slot - usually at subbit slots 7, 8, and 9 with majority voting. So the stop bit is recognized by subbit slot 9 of the tenth bit (assuming 8-bit, no parity) - the voltage on the line after that is irrelevant. (Even when you have two stop bits, receivers never check the second stop bit - it affects transmit timing only.) What purpose would there be in waiting another 7 subbits before triggering the interrupt, DMA, or whatever?
>>>
>>> There's no real purpose, but it's important to know exactly when the RX interrupt is fired from the UART.
>>
>> I think it is extremely rare that this is important. I can't think of a single occasion when I have thought it remotely relevant where in the stop bit the interrupt comes.
>>
>>> Usually the next transmitter starts transmitting after receiving the last byte of the previous transmitter (for example, the slave starts replying to the master after receiving the complete message from it).
>>
>> No. Usually the next transmitter starts after receiving the last byte, and /then a pause/. There will always be some handling time in software, and may also include an explicit pause. Almost always you will want to do at least a minimum of checking of the incoming data before deciding on the next telegram to be sent out. But if you have very fast handling in relation to the baud rate, you will want an explicit pause too - protocols regularly specify a minimum pause (such as 3.5 character times for Modbus RTU), and you definitely want it to be at least one full character time to ensure no listener gets hopelessly out of sync.
>
> In theory, if all the nodes on the bus were able to change direction in hardware (exactly at the end of the stop bit), you will not be forced to introduce any delay in the transmission.
Communication is about /reliably/ transferring data between devices. Asynchronous serial communication is about doing that despite slight differences in clock rates, differences in synchronisation, differences in startup times, etc. If you don't have idle pauses, you have almost zero chance of staying in sync across the nodes - and no chance at all of recovery when that happens.

/Every/ successful serial protocol has pauses between frames - pauses long enough that the idle time could not possibly be part of a normal full-speed frame. That does not just apply to UART protocols, or even just to asynchronous protocols. The pause does not have to be as long as 3.5 characters, but you need a pause - just as you need other error recovery handling.
> Many times I'm the author of a custom protocol between some nodes on a shared bus, so I'm not forced to follow any specifications. When I didn't introduce any delay in the transmission, I sometimes faced this issue. In my experience, the bus is heterogeneous enough to have a fast slave replying to a slow master.
>
>>> Now I think of the issue related to a transmitter that delays a little to turn around the direction of its transceiver, from TX to RX. Every transmitter on the bus should take into account this delay and avoid starting transmission too soon.
>>
>> They should, yes. The turnaround delay should be negligible in this day and age - if not, your software design is screwed or you have picked the wrong hardware. (Of course, you don't always get the choice of hardware you want, and programmers are often left to find ways around hardware design flaws.)
>
> Negligible doesn't mean anything.
Negligible means of no significance in comparison to the delays you have anyway - either intentional delays in order to separate telegrams and have reliable communication, or unavoidable delays due to software processing.
> If there's a poor 8-bit PIC (the previous transmitter) clocked at 8 MHz that changes direction in the TXC interrupt while other interrupts are active, and there's a Cortex-M4 clocked at 200 MHz (the next transmitter), you will encounter this issue.
No, you won't - not unless you are doing something silly in your timing, such as failing to use appropriate pauses or thinking that 10 µs turnarounds are a good idea at 9600 baud. And I did specify picking sensible hardware - 8-bit PICs were a terrible choice 20 years ago for anything involving high speed, and they have not improved. (Again - sometimes you don't have control of the hardware, and sometimes there can be other overriding reasons for picking something. But if your hardware is limited, you have to take that into account.)
> This is more evident if, as you are saying, the Cortex-M4 is able to > start processing the message from the PIC at the midpoint of last stop > bit, while the PIC disables its driver at the *end* of the stop bit plus > an additional delay caused by interrupts handling. > > In this cases the half bit time is not negligible and must be added to > the transmission delay. >
Sorry, but I cannot see any situation where that would happen in a well-designed communication system. Oh, and it is actually essential that the receiver considers the character finished half-way through the stop bit, and not at the end. UART communication is intended to work despite small differences in the baud rate - up to nearly 5% total error. By the time the receiver is half way through the received stop bit, and has identified it as valid, the sender could already have finished the stop bit, as its clock may be almost 5% faster (50% of a bit time accumulated over the full 10 bits). The receiver has to be in the "watch for falling edge of start bit" state at this point, ready for the transmitter to start its next frame.
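A quick back-of-the-envelope sketch of the error budget described here (the 8N1 frame layout and mid-bit sample point are my illustrative assumptions, not figures from the thread):

```python
# Why a UART receiver must treat the frame as done mid-stop-bit:
# the final sample lands 9.5 bit times after the start edge, and the
# accumulated clock drift over those 9.5 bits must stay under half
# a bit time for that sample to still fall inside the stop bit.
BITS_PER_FRAME = 10      # start + 8 data + stop (8N1)
SAMPLE_POINT = 0.5       # receiver samples mid-bit

last_sample = (BITS_PER_FRAME - 1) + SAMPLE_POINT   # 9.5 bit times
max_total_error = SAMPLE_POINT / last_sample

print(f"max tolerable combined clock mismatch: {max_total_error:.2%}")
```

This works out to roughly 5% combined, matching the "nearly 5% total error" figure; in practice each end of the link gets about half of that budget.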
> > >>> So I usually implement a short delay before starting a new message >>> transmission. If the maximum expected delay of moving the direction >>> from TX to RX is 10us, I could think to use a 10us delay, but this is >>> wrong in your assumption. >>> >> >> Implementing an explicit delay (or being confident that your telegram >> handling code takes long enough) is a good idea. >> >>> If the RX interrupt is at the middle of the stop bit, I should delay >>> the new transmission of 10us + half of bit time. With 9600 this is >>> 52us that is much higher than 10us. >> >> I made no such assumptions about timings.  The figures I gave were for >> using a USB 2 based interface on a PC, where the USB polling timer is >> at 8 kHz, or 125 µs.  That is half a bit time for 4 Kbaud.  (I had >> doubled the frequency instead of halving it and said the baud had to >> be above 16 kBaud - that shows it's good to do your own calculations >> and not trust others blindly!).  At 1 MBaud (the suggested rate), the >> absolute fastest the PC could turn around the bus would be 12 >> character times - half a stop bit is irrelevant. >> >> If you have a 9600 baud RS-485 receiver and you have a delay of 10 µs >> between reception of the last bit and the start of transmission of the >> next message, your code is wrong - by nearly two orders of magnitude. >> It is that simple. > > Not always. If you have only MCUs that are able to control direction in > hardware, you don't need any delay before transmission. > > >> If we take Modbus RTU as an example, you should be waiting 3.5 * 10 / >> 9600 seconds at a minimum - 3.65 /milli/seconds.  If you are concerned >> about exactly where the receive interrupt comes in the last stop bit, >> add another half bit time and you get 3.7 ms.  The half bit time is >> negligible. > > Oh yes, if you have already implemented a pause of 3.5 char times, it is > ok. >
Yes, exactly.
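The arithmetic in the exchange above is easy to check (a sketch; the 10 bits per character assumes 8N1 framing):

```python
# Modbus RTU minimum inter-frame silence: 3.5 character times.
def t35_ms(baud, bits_per_char=10):
    """3.5 character times in milliseconds (8N1 => 10 bits/char)."""
    return 3.5 * bits_per_char / baud * 1000

for baud in (9600, 19200, 1_000_000):
    print(f"{baud:>9} baud: t3.5 = {t35_ms(baud):.2f} ms")
```

At 9600 baud this gives 3.65 ms, the figure quoted above, while the extra half bit of receive timing adds only about 0.05 ms - which is why it is negligible. (The Modbus spec additionally caps the gap at a fixed 1.75 ms for rates above 19200 baud.)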
> >>> I know the next transmitter should make some processing of the >>> previous received message, prepare and buffer the new message to >>> transmit, so the delay is somewhat automatic, but in many cases I >>> have small 8-bits PICs and full-futured Linux box on the same bus and >>> the Linux could be very fast to start the new transmission. >> >> So put in a delay.  An /appropriate/ delay. >> >>> >>>>> I wouldn't be surprised if the implementation was different for >>>>> different manufacturers. >>>>> >>>> >>>> I've seen a bit of variation, including 8 subbit clocks per baud >>>> clock, wider sampling ranges, re-sync of the clock on edges, etc. >>>> And of course you don't always get the details of the timings in >>>> datasheets (and who bothers measuring them?)  But the key principles >>>> are the same. >>>> >>>>> >>>>>>> None of this matters to me really.  I'm going to use more wires, >>>>>>> and do the multi-drop from the PC to the slaves on one pair and >>>>>>> use RS-422 to multi-point from the slaves to the PC.  Since the >>>>>>> slaves are controlled by the master, they will never collide. >>>>>>> The master can't collide with itself, so I can ignore any issues >>>>>>> with this.  I will use the bias resistors to assure a valid idle >>>>>>> state.  I may need to select different devices than the ones I >>>>>>> use in the product.  I think there are differences in the input >>>>>>> load and I want to be sure I can chain up to 32 units. >>>>>>> >>>>>> >>>>>> OK.  I have no idea what such a hybrid bus should technically be >>>>>> called, but I think it should work absolutely fine for the purpose >>>>>> and seems like a solid solution.  I would not foresee any issues >>>>>> with 32 nodes on such a bus, especially if it is relatively short >>>>>> and you have terminators at each end. 
>>>>> >>>>> In my experience, termination resistors at each end of the line >>>>> could introduce other troubles if they aren't strictly required >>>>> (because of signal integrity on long lines at high baud rates). >>>>> >>>> >>>> RS-485 requires them - you want to hold the bus at a stable idle >>>> state when nothing is driving it. >>> >>> But this is the goal of *bias* resistors, not termination resistors. >>> >> >> Yes - but see below.  Bias resistors are part of the termination - it >> just means that you have terminating resistors to 5V and 0V as well as >> across the balanced pair. >> >>> >>>> You also want to have a bit of load so that you have some current on >>>> the bus, and thereby greater noise immunity. >>> >>> Of course, but termination resistors are usually small (around 100 >>> ohms) because they should match the impedance of the cable. If you >>> want only to introduce "some current" on the bus, you could use >>> resistors in the order of 1k, but this isn't strictly a *termination* >>> resistor. >>> >> >> If you have a cable that is long enough (or speeds fast enough) that >> it needs to be treated as a transmission line with controlled >> impedance, then you do need impedance matched terminators to avoid >> reflections causing trouble.  Usually you don't. >> >> A "terminating resistor" is just a "resistor at the terminator" - it >> does not imply impedance matching, or any other specific purpose.  You >> pick a value (and network) appropriate for the task in hand - maybe >> you impedance matching, maybe you'd rather have larger values to >> reduce power consumption. >> >>> >>>>> The receiver input impedance of all the nodes on the bus are in >>>>> parallel with the two terminators. If you have many nodes, the >>>>> equivalent impedance on the bus is much small and the partition >>>>> with bias resistors could reduce the differential voltage between A >>>>> and B at idle to less than 200mV. 
>>>>> >>>>> If you don't use true fail-safe transceivers, a fault start bit >>>>> could be seen by these kind of receivers. >>>>> >>>> >>>> Receiver load is very small on modern RS-485 drivers. >>> >>> ST3485 says the input load of the receiver around 24k. When you >>> connect 32 slaves, the equivalent resistor would be 750 ohms, that >>> should be enough to have "some current" on the bus. If you add >>> *termination* resistors in the order of 100R on both sides, you could >>> reduce drastically the differential voltage between A and B at idle >>> state. >>> >> >> If you are pushing the limits of a bus, in terms of load, distance, >> speed, cable characteristics, etc., then you need to do such >> calculations carefully and be precise in your specification of >> components, cables, topology, connectors, etc.  For many buses in >> practice, they will work fine using whatever resistor you pull out >> your box of random parts.  For a testbench, you are going to go for >> something between these extremes. > > Ok, I thought you were suggesting to add impedance matching (slow) > resistors as terminators in any case.
On Friday, November 4, 2022 at 12:36:51 PM UTC-4, David Brown wrote:
> On 04/11/2022 15:37, pozz wrote: > > Il 04/11/2022 10:49, David Brown ha scritto: > >> On 04/11/2022 08:45, pozz wrote: > >>> Il 03/11/2022 16:26, David Brown ha scritto: > >>>> On 03/11/2022 14:00, pozz wrote: > >>>>> Il 03/11/2022 12:42, David Brown ha scritto: > >>>>>> On 03/11/2022 00:27, Rick C wrote: > >>>>>>> On Wednesday, November 2, 2022 at 4:49:16 PM UTC-4, David Brown > >>>>>>> wrote: > >>>>>>>> On 02/11/2022 20:20, Rick C wrote: > >>>>>>>>> On Wednesday, November 2, 2022 at 5:28:21 AM UTC-4, David Brown > >>>>>>>>> wrote: > >>>>>>>>>> On 02/11/2022 06:28, Rick C wrote: > >>>>> > >>>>> > >>>>>> You are correct that reception is in the middle of the stop bit > >>>>>> (typically sub-slot 9 of 16). The first transmitter will be > >>>>>> disabled at the end of the stop bit, and the next transmitter must > >>>>>> not enable its driver until after that point - it must wait at > >>>>>> least half a bit time after reception before starting > >>>>>> transmission. (It can wait longer without trouble, which is why > >>>>>> faster baud rates are less likely to involve any complications here.) > >>>>> > >>>>> Do you mean that RX interrupt triggers in the middle of the stop > >>>>> bit and not at the end? Interesting, but are you sure this is the > >>>>> case for every UART implemented in MCUs? > >>>> > >>>> Of course I'm not sure - there are a /lot/ of MCU manufacturers! > >>>> > >>>> UART receivers usually work in the same way, however. They have a > >>>> sample clock running at 16 times the baud clock. The start bit is > >>>> edge triggered to give the start of the character frame. Then each > >>>> bit is sampled in the middle of its time slot - usually at subbit > >>>> slots 7, 8, and 9 with majority voting. So the stop bit is > >>>> recognized by subbit slot 9 of the tenth bit (assuming 8-bit, no > >>>> parity) - the voltage on the line after that is irrelevant. 
(Even > >>>> when you have two stop bits, receivers never check the second stop > >>>> bit - it affects transmit timing only.) What purpose would there be > >>>> in waiting another 7 subbits before triggering the interrupt, DMA, > >>>> or whatever? > >>> > >>> There's no real purpose, but it's important to know exactly when the > >>> RX interrupt is fired from the UART. > >>> > >> > >> I think it is extremely rare that this is important. I can't think of > >> a single occasion when I have thought it remotely relevant where in > >> the stop bit the interrupt comes. > >> > >>> Usually the next transmitter starts transmitting after receiving the > >>> last byte of the previous transmitter (for example, the slave starts > >>> replying to the master after receiving the complete message from it). > >>> > >> > >> No. Usually the next transmitter starts after receiving the last > >> byte, and /then a pause/. There will always be some handling time in > >> software, and may also include an explicit pause. Almost always you > >> will want to do at least a minimum of checking of the incoming data > >> before deciding on the next telegram to be sent out. But if you have > >> very fast handling in relation to the baud rate, you will want an > >> explicit pause too - protocols regularly specify a minimum pause (such > >> as 3.5 character times for Modbus RTU), and you definitely want it to > >> be at least one full character time to ensure no listener gets > >> hopelessly out of sync. > > > > In theory, if all the nodes on the bus were able to change direction in > > hardware (exactly at the end of the stop bit), you will not be forced to > > introduce any delay in the transmission. > Communication is about /reliably/ transferring data between devices. > Asynchronous serial communication is about doing that despite slight > differences in clock rates, differences in synchronisation, differences > in startup times, etc. 
If you don't have idle pauses, you have almost > zero chance of staying in sync across the nodes - and no chance at all > of recovery when that happens. /Every/ successful serial protocol has > pauses between frames - long enough pauses that the idle time could not > possibly be part of a normal full speed frame. That does not just apply > to UART protocols, or even just to asynchronous protocols. The pause > does not have to be as long as 3.5 characters, but you need a pause - > just as you need other error recovery handling.
The "idle" pauses you talk about are accommodated by the start and stop bits in the async protocol. Every character is sent with a start bit which starts the timing. The stop bit is the "fluff" time for the next character to align to the next start bit. There is no need for the bus to be idle in the sense of no data being sent. If an RS-485 or RS-422 bus is biased for undriven times, there is no need for the driver to be on through the full stop bit. Once the stop bit has driven high, the driver can be disabled, such as in the middle of the bit. Then there is a half bit time of margin for timing skew, which amounts to 5%, between any two devices on the bus.
> > Many times I'm the author of a custom protocol because some nodes on a > > shared bus, so I'm not forced to follow any specifications. When I > > didn't introduce any delay in the transmission, I sometimes faced this > > issue. In my experience, the bus is heterogeneous enough to have a fast > > replying slave to a slow master. > > > > > >>> Now I think of the issue related to a transmitter that delays a > >>> little to turn around the direction of its transceiver, from TX to > >>> RX. Every transmitter on the bus should take into account this delay > >>> and avoid starting transmission too soon. > >> > >> They should, yes. The turnaround delay should be negligible in this > >> day and age - if not, your software design is screwed or you have > >> picked the wrong hardware. (Of course, you don't always get the > >> choice of hardware you want, and programmers are often left to find > >> ways around hardware design flaws.) > > > > Negligible doesn't mean anything. > Negligible means of no significance in comparison to the delays you have > anyway - either intentional delays in order to separate telegrams and > have a reliable communication, or unavoidable delays due to software > processing.
The software on the PC is not managing the bus drivers. So software delays are not relevant to bus control timing.
> > If thre's a poor 8 bit PIC (previous > > transmitter) clocked at 8MHz that changes direction in TXC interrupt > > while other interrupts are active, and there's a Cortex-M4 clocked at > > 200MHz (next transmitter), you will encounter this issue. > > > No, you won't - not unless you are doing something silly in your timing > such as failing to use appropriate pauses or thinking that 10 µs > turnarounds are a good idea at 9600 baud. And I did specify picking > sensible hardware - 8-bit PICs were are terrible choice 20 years ago for > anything involving high speed, and they have not improved. (Again - > sometimes you don't have control of the hardware, and sometimes there > can be other overriding reasons for picking something. But if your > hardware is limited, you have to take that into account.) > > This is more evident if, as you are saying, the Cortex-M4 is able to > > start processing the message from the PIC at the midpoint of last stop > > bit, while the PIC disables its driver at the *end* of the stop bit plus > > an additional delay caused by interrupts handling. > > > > In this cases the half bit time is not negligible and must be added to > > the transmission delay. > > > Sorry, but I cannot see any situation where that would happen in a > well-designed communication system. > > Oh, and it is actually essential that the receiver considers the > character finished half-way through the stop bit, and not at the end. > UART communication is intended to work despite small differences in the > baud rate - up to nearly 5% total error. By the time the receiver is > half way through the received stop bit, and has identified it is valid, > the sender could be finished the stop bit as its clock is almost 5% > faster (50% bit time over the full 10 bits). The receiver has to be in > the "watch for falling edge of start bit" state at this point, ready for > the transmitter to start its next frame.
Yes, why would it not be? This is why there's no need for additional delays or "gaps" in the protocol for an async interface.

-- Rick C. +++ Get 1,000 miles of free Supercharging +++ Tesla referral code - https://ts.la/richard11209
Rick C <gnuarm.deletethisbit@gmail.com> wrote:
> > How long is a piece of string? By keeping the interconnecting cables short, 4" or so, and a 5 foot cable from the PC, I don't expect problems with reflections. But it is prudent to allow for them anyway. The FTDI RS-422 cable seems to have a terminator on the receiver, but not the driver and no provision to add a terminator to the driver.
It is pointless to add a terminator at the driver; there will be a mismatch anyway, and the resistor would just waste transmit power. A mismatch at the driver does not cause trouble as long as the ends are properly terminated. And when the driver is at the near end and there are no other drivers, it is enough to put termination only at the far end. So the FTDI cable seems to be doing exactly what is needed.
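The reasoning can be illustrated with the standard reflection-coefficient formula, gamma = (ZL - Z0) / (ZL + Z0); the 120 ohm figure is a typical RS-485 cable impedance used here only as an example:

```python
# Reflection coefficient at a discontinuity on a line of impedance z0.
def gamma(z_load, z0=120.0):
    return (z_load - z0) / (z_load + z0)

# Matched far-end terminator: the incident wave is fully absorbed.
print(f"120 ohm load on 120 ohm line: gamma = {gamma(120.0):+.2f}")

# An unterminated (high-impedance) end reflects nearly all incident
# energy - but with the far end matched, nothing ever returns to the
# driver end to be re-reflected, which is why the driver-side
# mismatch is harmless for a single near-end driver.
print(f"open end: gamma = {gamma(1e9):+.2f}")
```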
> > Oddly enough, the RS-485 cable has a terminator that can be connected by the user, but that would be running through the cable separately from the transceiver signals, so essentially stubbed! I guess at 1 Mbps, 5 feet is less than the rise time, so not an issue. Since the interconnections between cards will be about five feet as well, it's unlikely to be an issue. The entire network will look like a lumped load, with the propagation time on the order of the rise/fall time. Even adding in a second chassis, makes the round trip twice the typical rise/fall time and unlikely to create any issues. > > They sell cables that have 5 m of cable, with a round trip of 30 ns or so.
Closer to 50 ns due to lower speed in cable.
> I think that would still not be significant in this application. The driver rise/fall times are 15 ns typ, 25 ns max.
Termination is also there to kill _multiple_ reflections. In a low-loss line you can have a bunch of reflections creating jitter. When jitter is more than 10% of the bit time, serial communication tends to have a significant number of errors. At 9600 or at 100000 bits/s with a short line, the bit time is long enough that jitter due to reflections on an unterminated line does not matter. Also, multidrop RS-485 is far from low loss - each extra drop weakens the signal, so reflections die faster than on a quality point-to-point line.

-- Waldek Hebisch
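To put numbers on the 30 ns vs 50 ns exchange above (the 0.66 velocity factor is a typical twisted-pair value I've assumed, not a figure from the thread):

```python
C = 3.0e8     # m/s, speed of light in vacuum
# Velocity factor: signals travel slower in cable than in free space.

def round_trip_ns(length_m, vf):
    """Round-trip propagation delay of a cable, in nanoseconds."""
    return 2 * length_m / (C * vf) * 1e9

print(f"5 m, vf=1.00: {round_trip_ns(5, 1.00):.0f} ns round trip")
print(f"5 m, vf=0.66: {round_trip_ns(5, 0.66):.0f} ns round trip")
```

With the free-space figure you get about 33 ns; at a realistic velocity factor it comes out just over 50 ns, matching the correction above.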
On 11/2/22 05:28, Rick C wrote:
> I have a test fixture that uses RS-232 to communicate with a PC. It actually uses the voltage levels of RS-232, even though this is from a USB cable on the PC, so it's only RS-232 for maybe four inches. lol > > I'm redesigning the test fixtures to hold more units and fully automate a few features that presently requires an operator. There will now be 8 UUTs on each test fixture and I expect to have 10 to 20 test fixtures in a card rack. That's 80 to 160 UUTs total. There will be an FPGA controlling each pair of UUTs, so 80 FPGAs in total that the PC needs to talk to. > > Rather than working on a way to mux 80 RS-232 interfaces, I'm thinking it would be better to either daisy chain, or connect in parallel all these devices. The protocol is master-slave where the master sends a command and the slaves are idle until they reply. The four FPGAs on a test fixture board could be connected in parallel easily enough. But I don't think I want to run TTL level signals between so many boards. > > I could do an RS-422 interface with a master to slave pair and a slave to master pair. The slaves do not speak until spoken to, so there will be no collisions. > > RS-485 would allow all this to be over a single pair of wires. But the one big issue I see people complain about is getting PC software to not clobber the slaves, or I should say, to get the master to wait long enough that it's not clobbering it's own start bit by overwriting the stop bit of the slave. I suppose someone, somewhere has dealt with this on the PC and has a solution that doesn't impact bus speed. I run the single test fixture version of this at about 100 kbps. I'm going to want as much speed as I can get for 80 FPGAs controlling 160 UUTs. Maybe I should give that some analysis, because this might not be true. > > The tests are of two types, most of them are setting up a state and reading a signal. This can go pretty fast and doesn't take too many commands. 
Then there are the audio tests where the FPGA sends digital data to the UUT, which does it's thing and returns digital data which is crunched by the FPGA. This takes some small number of seconds and presently the protocol is to poll the status until it is done. That's a lot of messages, but it's not necessarily a slow point. The same test can be started on every UUT in parallel, so the waiting is in parallel. So maybe the serial port won't need to be any faster. > > Still, I want to use RS-422 or RS-485 to deal with ground noise since this will be spread over multiple boards that don't have terribly solid grounds, just the power cable really. > > I'm thinking out loud here as much as anything. I intended to simply ask if anyone had experience with RS-485 that would be helpful. Running two wires rather than eight would be a help. I'll probably use a 10 pin connector just to be on the safe side, allowing the transceivers to be used either way. >
I worked on a highway traffic sign project some years back that used multidrop RS-423. The sign was driven from a roadside controller, with a supervisory controller between that and the LED column controllers. The supervisory controller was always master, with the column controllers as slaves. The master always initiated comms, with a column controller talking only when addressed. A simple software state machine and line turnaround let the selected column talk. Used differential line transceivers at the tx and rx ends, which could be tristated at the output. Interesting project, and with a 15 yr design life, probably hundreds still working now. RS-423 multidrop works well, though I don't remember what the max supported speeds are. Much cheaper than a network, but you can use standard Cat5 etc. network cables and pcb sockets to tie it all together...

Chris
On Friday, November 4, 2022 at 6:53:34 PM UTC-4, anti...@math.uni.wroc.pl wrote:
> Rick C <gnuarm.del...@gmail.com> wrote: > > > > How long is a piece of string? By keeping the interconnecting cables short, 4" or so, and a 5 foot cable from the PC, I don't expect problems with reflections. But it is prudent to allow for them anyway. The FTDI RS-422 cable seems to have a terminator on the receiver, but not the driver and no provision to add a terminator to the driver. > It is pointless to add terminator to driver, there will be mismatch > anyway and resistor would just waste transmit power. Mismatch > at driver does not case trouble as long as ends are properly > terminated. And when driver is at the near end and there are no > other drivers, then it is enough to put termination only at the > far end. So FTDI cable seem to be doing exactly what is needed.
Yes, that's true for a single driver and multiple receivers. The point is that with multiple drivers, a terminator is needed at both ends of the cable. You have two ends to terminate, because drivers can be in the middle. You could not use FTDI RS-422 cables in the arrangement I am implementing. Every receiver would add a 120 ohm load to the line. Good thing I only need one!
> > Oddly enough, the RS-485 cable has a terminator that can be connected by the user, but that would be running through the cable separately from the transceiver signals, so essentially stubbed! I guess at 1 Mbps, 5 feet is less than the rise time, so not an issue. Since the interconnections between cards will be about five feet as well, it's unlikely to be an issue. The entire network will look like a lumped load, with the propagation time on the order of the rise/fall time. Even adding in a second chassis, makes the round trip twice the typical rise/fall time and unlikely to create any issues. > > > > They sell cables that have 5 m of cable, with a round trip of 30 ns or so. > Closer to 50 ns due to lower speed in cable. > > I think that would still not be significant in this application. The driver rise/fall times are 15 ns typ, 25 ns max. > Termination is also to kill _multiple_ reflections. In low loss line > you can have bunch of reflection creating jitter. When jitter is > more than 10% of bit time serial communication tends to have significant > number of errors. At 9600 or at 100000 bits/s with short line bit > time is long enough that jitter due to reflections in untermined > line does not matter. Also multidrop RS-485 is far from low loss, > each extra drop weakens signal, so reflections die faster than > in quality point-to-point line.
How do RS-485 drops "weaken" the signal? The load of an RS-485 device is very slight. The same result will happen with multiple receivers on RS-422. I expect to be running at least 1 Mbps, possibly as high as 3 Mbps.

One thing I'm a bit confused about is the wiring of the EIA/TIA 568B or 568A cables. Both standards are used, but as far as I can tell, the only difference is the colors! The green and orange twisted pairs are reversed on both ends, making the cables electrically identical, other than the colors used for a given pair. The only difference is, the different pairs have different twist pitch, to help reduce crosstalk. But the numbers are not specified in the spec, so I don't see how this could matter. Why would the color be an issue, to the point of creating two different specs??? Obviously I'm missing something. I will need to check a cable before I design the boards, lol.

-- Rick C. ---- Get 1,000 miles of free Supercharging ---- Tesla referral code - https://ts.la/richard11209
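A rough "lumped load" check for the lengths and edge rates discussed above (both the rule of thumb and the 0.66 velocity factor are my assumptions, not from the thread):

```python
# Common rule of thumb: treat a line as a transmission line once its
# round-trip delay exceeds the driver's rise time.
C, VF = 3.0e8, 0.66   # m/s in vacuum; assumed twisted-pair velocity factor

def critical_length_m(rise_time_ns, vf=VF):
    # Length at which the round-trip delay equals the rise time.
    v = C * vf                        # propagation speed, m/s
    return rise_time_ns * 1e-9 * v / 2

print(f"15 ns rise: critical length ~ {critical_length_m(15):.1f} m")
print(f"25 ns rise: critical length ~ {critical_length_m(25):.1f} m")
```

By this rule the 5-foot (~1.5 m) runs sit right at the boundary for the 15 ns typical edges - consistent with the earlier observation that the propagation time is on the order of the rise/fall time, so termination is cheap insurance rather than strictly required.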
On 11/3/22 4:32 PM, Rick C wrote:
> On Thursday, November 3, 2022 at 3:37:43 PM UTC-4, Dave Nadler wrote: >> On 11/2/2022 1:28 AM, Rick C wrote: >>> I have a test fixture that uses RS-232 to communicate with a PC. It actually uses the voltage levels of RS-232, even though this is from a USB cable on the PC, so it's only RS-232 for maybe four inches. lol >>> >>> I'm redesigning the test fixtures to hold more units and fully automate a few features that presently requires an operator. There will now be 8 UUTs on each test fixture and I expect to have 10 to 20 test fixtures in a card rack. That's 80 to 160 UUTs total. There will be an FPGA controlling each pair of UUTs, so 80 FPGAs in total that the PC needs to talk to. >>> >>> Rather than working on a way to mux 80 RS-232 interfaces, I'm thinking it would be better to either daisy chain, or connect in parallel all these devices. The protocol is master-slave where the master sends a command and the slaves are idle until they reply. The four FPGAs on a test fixture board could be connected in parallel easily enough. But I don't think I want to run TTL level signals between so many boards. >>> >>> I could do an RS-422 interface with a master to slave pair and a slave to master pair. The slaves do not speak until spoken to, so there will be no collisions. >>> >>> RS-485 would allow all this to be over a single pair of wires. But the one big issue I see people complain about is getting PC software to not clobber the slaves, or I should say, to get the master to wait long enough that it's not clobbering it's own start bit by overwriting the stop bit of the slave. I suppose someone, somewhere has dealt with this on the PC and has a solution that doesn't impact bus speed. I run the single test fixture version of this at about 100 kbps. I'm going to want as much speed as I can get for 80 FPGAs controlling 160 UUTs. Maybe I should give that some analysis, because this might not be true. 
>>> >>> The tests are of two types, most of them are setting up a state and reading a signal. This can go pretty fast and doesn't take too many commands. Then there are the audio tests where the FPGA sends digital data to the UUT, which does it's thing and returns digital data which is crunched by the FPGA. This takes some small number of seconds and presently the protocol is to poll the status until it is done. That's a lot of messages, but it's not necessarily a slow point. The same test can be started on every UUT in parallel, so the waiting is in parallel. So maybe the serial port won't need to be any faster. >>> >>> Still, I want to use RS-422 or RS-485 to deal with ground noise since this will be spread over multiple boards that don't have terribly solid grounds, just the power cable really. >>> >>> I'm thinking out loud here as much as anything. I intended to simply ask if anyone had experience with RS-485 that would be helpful. Running two wires rather than eight would be a help. I'll probably use a 10 pin connector just to be on the safe side, allowing the transceivers to be used either way. >>> >> Hi Rick - I have an RS-485 system on my desk using an implementation of >> the old Intel BitBus. Works fine for a handful of nodes, limited >> distance, and very simple cabling - but only 62.5kbaud. Good solid >> technology for 1994 when I designed it... >> >> Why would you use RS-485 instead of CAN? A million chips out there >> support CAN with no fuss, works at decent speeds over twisted pair, not >> hard to use. >> >> BTW, another option for interfacing to RS-485 from USB is XR21B1411 >> which is what I happen to have on my desk. > > I'm using RS-422 because I don't need to learn how to use a "chip". It's the same serial protocol I'm using now, but instead of RS-232 voltage levels, it's RS-422 differential. The "change" is really the fact that it's not just one slave. So the bus will be split into a master send bus and a slave reply bus. 
The master doesn't need to manage the tri-state output because it's the only talker. The slaves only talk when spoken to and the UART is in an FPGA, (no CPU), so it can manage the tri-state control to the driver chip very easily. > > CAN bus might be the greatest thing since sliced bread, but I am going to be slammed with work and I don't want to do anything I don't absolutely have to. > > A lot of people don't understand that this is nearly the same as what I'm using now and will only require a very minor modification to the message protocol, to allow the slaves to be selected/addressed. It would be hard to make it any simpler and this would all still have to be done even if adding the CAN bus. The slaves still need to be selected/addressed. > > Thanks for the suggestions. The part I'm worried about now are the more mechanical bits. I am thinking of using the Eurocard size so I can use the rack hardware, but I know very little about the bits and bobs. There will be no backplane, just card guides and the front panels on the cards to hold them in place. I might put the cabling on the front panel to give it easy access, but then it needs machining of the front panel. I could simplify that by cutting out one large hole to expose all the LEDs and connectors. I want to make the design work as simple as possible and mechanical drawings are not my forte. >
RS-485 will require you to make a firm decision on protocol timing. Either you require that ALL units can get off the line fast after a message, so you don't need to add much wait time, or you allow any unit to be slow to get off, so everyone has to wait a while before talking. Perhaps if you have a single master that is fast, the replying machines can be slow, as long as the master knows that.

Multi-drop RS-422, with one pair going out from the master controller to everyone, and a shared pair to answer on, largely gets around this problem, as the replying units just need to be fast enough getting off the line so they are off before the controller sends enough of a message that someone else might decide to start to reply. This sounds like what you are talking about, and does work.

You can even do "Multi-Master" with this topology, if you give the masters two drive chips, one to drive the master bus when they are the master, and one to drive the response bus when they are selected as a slave, plus some protocol to pass mastering around and some recovery method to handle the case where the master role gets lost.

One other thing to remember is that 422/485 really is designed to be a single linear bus, without significant branches, with end-of-bus termination. You can "cheat" on this if your speed is on the slow side.
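That timing decision can be budgeted explicitly. A minimal sketch with made-up placeholder numbers (the 10 µs turnaround figure is illustrative, not a spec for any real device):

```python
# Minimum silence the next talker must observe on a shared pair:
# the slowest device's driver-disable (turnaround) time, plus half a
# bit time to cover the receiver declaring the frame done mid-stop-bit.
def min_wait_us(baud, slowest_turnaround_us):
    half_bit_us = 0.5e6 / baud
    return slowest_turnaround_us + half_bit_us

for baud in (9600, 1_000_000):
    print(f"{baud:>9} baud: wait >= {min_wait_us(baud, 10.0):.1f} us")
```

At high baud rates the turnaround term dominates the budget; at 9600 baud the half-bit term (about 52 µs) does - the same asymmetry discussed earlier in the thread.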