
Shared Communications Bus - RS-422 or RS-485

Started by Rick C November 2, 2022
On 05/11/2022 21:42, Rick C wrote:
> On Saturday, November 5, 2022 at 2:57:30 PM UTC-4, David Brown wrote:
>> On 05/11/2022 18:23, Rick C wrote:
>>> On Saturday, November 5, 2022 at 7:47:59 AM UTC-4, David Brown wrote:
>
>> The USB device is /not/ a processor - it is a converter between USB and
>> UART.  And it is the USB device that controls the transmit enable signal
>> to the RS-485/RS-422 driver.  There is no software on any processor
>> handling the transmit enable signal - the driver is enabled precisely
>> when the USB to UART device is sending data on the UART.
>
> Actually, the FTDI device is a processor.  I expect it actually has no UART, rather the entire thing is done in software.  I recall there being code to download for various purposes, such as JTAG, but I forget the details.  I'm pretty sure the TxEn is controlled by FTDI software.
No, I think you are mixing things up.  FTDI make a fair number of devices, including some that /are/ processors or contain processors.  (That would be their display controller devices, their USB host controllers, amongst others.)

The code for using chips like the FT232H as a JTAG interface runs on the host PC, not the FTDI chip - it is a DLL or .so file (or OpenOCD, or other software).  The chip has /hardware/ support for a few different serial interfaces - SPI, I²C, JTAG and UART.
>> As I mentioned earlier, this thread is getting seriously mixed-up.  The
>> transmit enable discussion started with /RS-485/ - long before you
>> decided to use a hybrid bus and a RS-422 cable.  You were concerned
>> about how the PC controlled the transmitter enable for the RS-485
>> driver, and I have been trying to explain how this works when you use a
>> decent UART device.  You only confuse yourself when you jump to
>> discussing RS-422 here, in this bit of the conversation.
>
> Ok, I'll stop talking about what I am doing.
We don't need to stop talking about it - we (everyone) just need to be a bit clearer about the context.  It's been fun to talk about, and it's great that you have a solution you are happy with, but it's a shame if topic mixup leads to frustration.
>> All communications have failures.  Accept that as a principle, and
>> understand how to deal with it.  It's not hard to do - it is certainly
>> much easier than trying to imagine and eliminate any possible cause of
>> trouble.
>
> That's not a premise I have to deal with.  I will also die.  I'm not factoring that into the project either.
>
> I don't need to eliminate "any possible cause of trouble".  I only have to reach an effective level of reliability.  As I've said, error handling protocols are complex and subject to failure.  It's much more likely I will have more trouble with the error handling protocol than I will with bit errors on the bus.  So I choose the most reliable solution, no error handling.  So without an error handling protocol in the software, I don't need to do anything further to deal with errors.
I agree that error handling procedures can be difficult - and very often, they are poorly tested and have their own bugs (hardware or software).  Over-engineering can reduce overall reliability, rather than increase it.  (A few years back, we had a project that had to be updated to SIL safety certification requirements.  Most of the changes reduced the overall safety and reliability in order to fulfil the documentation and certification requirements.)

For serial protocols, ensuring a brief pause between telegrams is extremely simple and makes recovery possible after many kinds of errors.  That's why it is found in virtually every serial protocol in wide use.  And like it or not, you have it already in your hybrid bus solution.
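As a concrete illustration of the inter-telegram pause idea, here is a minimal sketch of a receiver that uses a quiet gap to delimit frames, in the style of Modbus RTU's 3.5-character silence.  The uart_read_byte_timeout() helper and the timing constant are assumptions for illustration only, not anything specified in this thread.

/* Minimal sketch: frame delimiting by inter-character idle time
 * (Modbus-RTU style).  uart_read_byte_timeout() and IDLE_GAP_MS are
 * illustrative assumptions, not from this thread. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define IDLE_GAP_MS  4   /* roughly 3.5 character times at 9600 baud */

/* Hypothetical HAL call: returns true and stores a byte if one arrives
 * within timeout_ms, false if the line stayed idle that long. */
bool uart_read_byte_timeout(uint8_t *byte, uint32_t timeout_ms);

/* Collect bytes until the bus goes quiet.  The pause marks the frame end,
 * so a receiver that joined mid-frame simply discards one partial frame
 * and is back in sync at the next gap. */
size_t read_frame(uint8_t *buf, size_t max_len)
{
    size_t len = 0;
    uint8_t b;

    while (!uart_read_byte_timeout(&b, 1000))
        ;                                   /* wait for the first byte */
    buf[len++] = b;

    while (len < max_len && uart_read_byte_timeout(&b, IDLE_GAP_MS))
        buf[len++] = b;                     /* idle gap ends the frame */

    return len;
}

The point is how little machinery is involved: the gap costs almost nothing on the wire, and it gives every receiver a guaranteed point at which to realign.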
On Sunday, November 6, 2022 at 5:55:22 AM UTC-5, David Brown wrote:
> On 05/11/2022 21:42, Rick C wrote:
[snip]
> No, I think you are mixing things up.  FTDI make a fair number of
> devices, including some that /are/ processors or contain processors.
> (That would be their display controller devices, their USB host
> controllers, amongst others.)
>
> The code for using chips like the FT232H as a JTAG interface runs on the
> host PC, not the FTDI chip - it is a DLL or .so file (or OpenOCD, or other
> software).  The chip has /hardware/ support for a few different serial
> interfaces - SPI, I²C, JTAG and UART.
They need code for the PC to run, but there is no reason to think they don't use a processor in the USB dongle.
[snip]
>
> For serial protocols, ensuring a brief pause between telegrams is
> extremely simple and makes recovery possible after many kinds of errors.
> That's why it is found in virtually every serial protocol in wide use.
> And like it or not, you have it already in your hybrid bus solution.
There's no point to inter-message delays.  If there is an error that causes a loss of framing, the devices will see that and ignore the message.  As I've said, the real issue is that the message will not be responded to, and the software will fail.  At that point the user will exit the software on the PC and start over.  That gives a nice long delay for resyncing.

--
Rick C.
-+-+ Get 1,000 miles of free Supercharging
-+-+ Tesla referral code - https://ts.la/richard11209
On 06/11/2022 14:56, Rick C wrote:
> On Sunday, November 6, 2022 at 5:55:22 AM UTC-5, David Brown wrote:
[snip]
>
> They need code for the PC to run, but there is no reason to think they don't use a processor in the USB dongle.
There is no reason to think that they /do/ have a processor there. I should imagine you would have no problem making the programmable logic needed for controlling a UART/SPI/I²C/JTAG/GPIO port, and USB slave devices are rarely made in software (even on the XMOS they prefer hardware blocks for USB). Why would anyone use a /processor/ for some simple digital hardware? I am not privy to the details of the FTDI design beyond their published documents, but it seems pretty clear to me that there is no processor in sight.
[snip]
>
> There's no point to inter-message delays.  If there is an error that causes a loss of framing, the devices will see that and ignore the message.  As I've said, the real issue is that the message will not be responded to, and the software will fail.  At that point the user will exit the software on the PC and start over.  That gives a nice long delay for resyncing.
That is one way to handle possible errors.
On 11/6/22 8:56 AM, Rick C wrote:
> There's no point to inter-message delays. If there is an error that causes a loss of framing, the devices will see that and ignore the message. As I've said, the real issue is that the message will not be responded to, and the software will fail. At that point the user will exit the software on the PC and start over. That gives a nice long delay for resyncing.
If the only way to handle a missed message is to abort the whole software system, that seems to be a pretty bad system.

Note, if the master sends out a message and waits for a response, with a retry if the message is not replied to, that naturally puts a pause in the communication bus for inter-message synchronization.  Based on your description, I can't imagine the master starting a message for another slave until after the first one answers, or you will interfere with the arbitration control of the reply bus.

In a dedicated link, after the link is established, it might be possible that one side just starts streaming data continuously to the other side, but most protocols will have some sort of at least occasional handshaking back, so a loss of sync can stop the flow to re-establish the synchronization.  And such handshaking is needed if you have to handle noise in packets.
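A rough sketch of the request/retry pattern described above: all of the function names and timing constants here are hypothetical placeholders, not an implementation from this thread.

/* Sketch: master-side command/response with timeout and retry.
 * send_command(), wait_reply() and the constants are hypothetical. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define REPLY_TIMEOUT_MS  50
#define MAX_RETRIES       3

bool send_command(const uint8_t *cmd, size_t len);                  /* hypothetical */
bool wait_reply(uint8_t *reply, size_t max_len, uint32_t timeout);  /* hypothetical */

/* While the master waits for a reply (or for the timeout), it starts no new
 * command, so the reply bus stays free - this is the natural pause mentioned
 * above.  Returns true if any attempt got a reply. */
bool transact(const uint8_t *cmd, size_t cmd_len,
              uint8_t *reply, size_t reply_max)
{
    for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
        if (!send_command(cmd, cmd_len))
            continue;                       /* transmit failed; try again */
        if (wait_reply(reply, reply_max, REPLY_TIMEOUT_MS))
            return true;                    /* got an answer */
        /* No reply: the command or the reply was lost; retry. */
    }
    return false;                           /* report the slave as unresponsive */
}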
Richard Damon <Richard@Damon-Family.org> writes:
> And such handshaking is needed if you have to handle noise in
> packets.
Once you acknowledge that noise and errors are even possible, some kind of checksum or FEC seems appropriate in addition to a retry protocol.
On 11/6/22 6:37 PM, Paul Rubin wrote:
> Richard Damon <Richard@Damon-Family.org> writes:
>> And such handshaking is needed if you have to handle noise in
>> packets.
>
> Once you acknowledge that noise and errors are even possible, some kind
> of checksum or FEC seems appropriate in addition to a retry protocol.
Yes, the messages should have some form of checksum in them to identify bad packets. That should be part of the message definition.
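For illustration, a minimal sketch of such a per-message checksum.  The choice of CRC-8 with polynomial 0x07 and the "CRC is the last byte" frame layout are assumptions, not something defined anywhere in this thread.

/* Sketch: a CRC-8 appended to each message so receivers can drop corrupted
 * frames.  Polynomial 0x07 and the frame layout are illustrative choices. */
#include <stddef.h>
#include <stdint.h>

static uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0x00;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x80u) ? (uint8_t)((crc << 1) ^ 0x07u)
                                : (uint8_t)(crc << 1);
    }
    return crc;
}

/* Sender: append the CRC as the final byte; returns the new frame length. */
size_t frame_append_crc(uint8_t *frame, size_t payload_len)
{
    frame[payload_len] = crc8(frame, payload_len);
    return payload_len + 1;
}

/* Receiver: recompute the CRC and reject the frame on mismatch. */
int frame_check_crc(const uint8_t *frame, size_t total_len)
{
    if (total_len < 2)
        return 0;
    return crc8(frame, total_len - 1) == frame[total_len - 1];
}

On a short point-to-point RS-422 run this check may well never fire, but it costs one byte per message and turns a corrupted frame into a detected error rather than a silently wrong register value.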
On 2022-11-05 Rick C wrote in comp.arch.embedded:
...
> One thing I'm a bit confused about, is the wiring of the EIA/TIA 568B or 568A cables.  Both standards are used, but as far as I can tell, the only difference is the colors!  The green and orange twisted pairs are reversed on both ends, making the cables electrically identical, other than the colors used for a given pair.  The only difference is, the different pairs have different twist pitch, to help reduce crosstalk.  But the numbers are not specified in the spec, so I don't see how this could matter.
>
> Why would the color be an issue, to the point of creating two different specs???
>
> Obviously I'm missing something.  I will need to check a cable before I design the boards, lol.
Yes, the only difference is the colors.  There is some historical background; see also https://en.wikipedia.org/wiki/ANSI/TIA-568.

In the early days there sometimes was a need for crossover cables: 568A on one end, 568B on the other end.  IIRC, you needed one to connect 2 PCs together directly, without a hub.  Hubs also had a special uplink port.  These days all Ethernet PHYs are auto-detect and there is no need for special ports or cables anymore.

So pick a standard you like or just use what is available.  Most cables I have in my drawer here seem to be 568B.  Just standard cables, did not pay attention to the A/B when I bought them. ;-)

--
Stef

The light at the end of the tunnel is the headlight of an approaching train.
On Sunday, November 6, 2022 at 3:54:04 PM UTC-5, David Brown wrote:
> On 06/11/2022 14:56, Rick C wrote:
[snip]
>
> There is no reason to think that they /do/ have a processor there.  I
> should imagine you would have no problem making the programmable logic
> needed for controlling a UART/SPI/I²C/JTAG/GPIO port, and USB slave
> devices are rarely made in software (even on the XMOS they prefer
> hardware blocks for USB).  Why would anyone use a /processor/ for some
> simple digital hardware?  I am not privy to the details of the FTDI
> design beyond their published documents, but it seems pretty clear to me
> that there is no processor in sight.
I don't agree.  These interfaces are not so simple when you consider the level of flexibility needed to implement many different interfaces in one part.  XMOS is nothing like this.  A small processor running at high speed could easily implement any of these interfaces, and it can occupy a very small amount of chip area.  Typical MCUs are dominated by their memory blocks; with a small memory, an MCU could easily be smaller than dedicated logic.  Even many of the I/O blocks, like UARTs, can be larger than an 8-bit CPU.  A CPU takes advantage of the massive multiplexer in the memory, which is implemented in ways that use very little area.  FPGAs use multiplexers in tiny LUTs, while an MCU uses the multiplexer in a single, much larger LUT: the program store.

--
Rick C.
-++- Get 1,000 miles of free Supercharging
-++- Tesla referral code - https://ts.la/richard11209
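As an illustration of the claim that a small, fast processor can implement such an interface in software, here is a minimal bit-banged UART transmit sketch.  gpio_write() and delay_one_bit() are hypothetical hardware hooks, and this says nothing about how FTDI parts are actually built - which is exactly the point under dispute.

/* Sketch: software ("bit-banged") UART transmit, 8N1, LSB first.
 * gpio_write() and delay_one_bit() are hypothetical hooks; this only shows
 * how little logic a software UART needs, not how any FTDI part works. */
#include <stdint.h>

void gpio_write(int level);   /* drive the TX pin: 1 = high, 0 = low */
void delay_one_bit(void);     /* busy-wait one bit period (8.68 us at 115200 baud) */

void soft_uart_send_byte(uint8_t byte)
{
    gpio_write(0);                        /* start bit */
    delay_one_bit();

    for (int i = 0; i < 8; i++) {         /* eight data bits, LSB first */
        gpio_write((byte >> i) & 1);
        delay_one_bit();
    }

    gpio_write(1);                        /* stop bit; line idles high */
    delay_one_bit();
}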
On 2022-11-05 Rick C wrote in comp.arch.embedded:
> On Saturday, November 5, 2022 at 6:58:24 AM UTC-4, David Brown wrote:
>
>> In UART communication, this is handled at the protocol level rather than
>> the hardware (though some UART hardware may have "idle detect" signals
>> when more than 11 bits of high level are seen in a row).  Some
>> UART-based protocols also use a "break" signal between frames - that is
>> a string of at least 11 bits of low level.
>>
>> If you do not have such pauses, and a receiver is out of step,
>
> You have failed to explain how a receiver would get "out of step".  The receiver syncs to every character transmitted.  If all characters are received, what else do you need?  How does it get "out of step"?
I have seen this happen in long messages (a few kB) with no pauses between characters and transmitter and receiver set to 8,N,1.  It seemed that the receiver needed the complete stop bit and only then looked for the low of the next start bit - detecting the edge when it was ready to see it, not when it actually happened.  When the receiver is slightly slower than the transmitter, this causes the detection of the start bit (and therefore the whole character) to shift a tiny bit.  This added up over the character stream until it eventually failed.

Lowering the baud rate did not solve the issue, but inserting pauses after a number of chars did.  What also solved it was setting the transmitter to 2 stop bits and the receiver to 1 stop bit.  This was a one-way stream, and this may not be possible on a bidirectional stream.

I would expect a sensible UART implementation to allow for a slightly shorter stop bit to compensate for issues like this.  But apparently this UART did not do so in the 1-stop-bit setting.  I have not tested whether setting both ends to 2 stop bits also solved the problem.

--
Stef

Westheimer's Discovery:
        A couple of months in the laboratory can frequently save a
        couple of hours in the library.
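A small worked example of the drift described above: the 0.1 % clock mismatch is an assumed figure, purely to show how quickly the error accumulates when the receiver cannot resynchronize on each start edge.

/* Worked example of the cumulative drift: with back-to-back characters the
 * receiver only looks for the next start edge after finishing its own,
 * slightly long, stop bit, so the per-character error adds up instead of
 * being reset at every start bit.  The 0.1 % error is an assumed figure. */
#include <stdio.h>

int main(void)
{
    const double bits_per_char = 10.0;   /* start + 8 data + 1 stop (8,N,1) */
    const double clock_error   = 0.001;  /* receiver 0.1 % slower than transmitter */

    for (int chars = 1; chars <= 5000; chars++) {
        double offset_bits = chars * bits_per_char * clock_error;
        if (offset_bits >= 0.5) {        /* sampling point reaches a bit boundary */
            printf("Framing fails after about %d back-to-back characters\n", chars);
            break;
        }
    }
    /* A transmitter sending 2 stop bits adds a whole idle bit per character,
     * far more than the per-character drift, so the receiver catches the
     * real start edge again - matching the observation above. */
    return 0;
}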
On Sunday, November 6, 2022 at 6:34:59 PM UTC-5, Richard Damon wrote:
> On 11/6/22 8:56 AM, Rick C wrote:
>> There's no point to inter-message delays.  If there is an error that causes a loss of framing, the devices will see that and ignore the message.  As I've said, the real issue is that the message will not be responded to, and the software will fail.  At that point the user will exit the software on the PC and start over.  That gives a nice long delay for resyncing.
>
> If the only way to handle a missed message is to abort the whole
> software system, that seems to be a pretty bad system.
You would certainly think that if your error rate was more than once in a hundred years.  I expect to be long dead before an RS-422 bus only 10 feet long burps a bit error.
> Note, if the master sends out a message and waits for a response, with
> a retry if the message is not replied to, that naturally puts a pause in
> the communication bus for inter-message synchronization.
The pause is already there by virtue of the protocol. Commands and replies are on different busses.
> Based on your description, I can't imagine the master starting a message
> for another slave until after the first one answers, or you will
> interfere with the arbitration control of the reply bus.
Exactly! Now you are starting to catch on.
> In a dedicated link, after the link is established, it might be possible
> that one side just starts streaming data continuously to the other side,
Except that there is no data to stream.  Maybe you haven't been around for the full conversation.  The protocol is command/reply, for reading and writing registers and selecting which unit's registers are being accessed.  The "stream" is an 8-bit value.
> but most protocols will have some sort of at least occasional
> handshaking back, so a loss of sync can stop the flow to re-establish
> the synchronization.  And such handshaking is needed if you have to
> handle noise in packets.
???  Every command has a reply.  How is that not a handshake???

--
Rick C.
-+++ Get 1,000 miles of free Supercharging
-+++ Tesla referral code - https://ts.la/richard11209