
Two-wire RS485 and the driver enable problem

Started by pozzugno October 13, 2014
pozzugno wrote:
> I have a multi-drop two-wire RS485 bus. One node is the master and all
> the others are slaves. The master is the only node that is authorized
> to initiate a transmission, addressing one slave. The addressed slave
> usually answers the master.
>
> The bus is half-duplex, so every node disables its driver. Only THE
> node that transmits data on the bus enables the driver, and it disables
> it as soon as it can, just after the last byte. An interrupt (transmit
> complete) usually triggers when the last byte has been completely
> shifted out, so the driver can be disabled immediately.
>
> Of course, other interrupts can be triggered too. What happens when
> interrupt X (whatever it is) triggers just before the "transmit
> complete" interrupt? ISR X is called, postponing the execution of the
> "transmit complete" ISR, and the RS485 driver is disabled with a
> certain amount of delay. In the worst case, the driver could be
> disabled with a delay equal to the sum of the durations of all the ISRs
> that could trigger.
> [In this scenario, I'm thinking of ISRs that can't be interrupted by a
> higher-priority interrupt.]
>
> If a node on the bus is very fast and starts transmitting (the master)
> or answering (one slave) immediately after receiving the last byte,
> while the previously transmitting node is still executing other ISRs,
> the final result is a corrupted transmission.
>
> What is the solution? I think the only one is to define, at design
> time, a minimum interval between the reception of the last byte from
> one node and the transmission of the first byte of the reply. This
> interval could be in the range of 100 microseconds and should be
> calibrated against the sum of the durations of *all* ISRs on *all*
> nodes on the bus. It isn't a simple calculation.
>
> Moreover, implementing a short "software" delay in the range of a few
> microseconds isn't a simple task. An empty loop on a decreasing
> volatile variable is one solution, but the resulting delay isn't easy
> to calculate at design time, and it depends on the compiler, compiler
> settings, clock frequency and so on. Use a hardware timer just for
> this pause?
>
> How do you solve this problem?
MODBUS RTU specifies a "settle time" of 3.5 character times after the
last byte is sent. That's on the order of 0.9 msec at 38,400 baud. But
that's a *minimum*.
> [I know there are some microcontrollers that automatically (at the
> hardware level) toggle an output pin when the last byte has been
> completely shifted out, but I'm not using one of them and they aren't
> that common.]
You have to know the worst-case timing. If you can't control the slave
nodes, and the ISR timing is sufficiently nondeterministic, you'll have
retransmissions due to collisions.

Since the bus is half-duplex, I'm wondering why you get any interrupt
other than TX-complete in that state.

You simply have to trade speed for determinism in this case. And there
will be collisions. If nothing else, add counters for bad-CRC events and
no-response events, and tune a delay.

485 isn't a good protocol these days. The kids get Ethernet on
RasPi-class machines for class projects; it's not unreasonable to use a
real comms stack.

--
Les Cargill
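For reference, a minimal C sketch of the 3.5-character settle time
mentioned above, assuming 11 bits per character (start + 8 data + parity
+ stop); the fixed 1750 us value above 19200 baud follows the MODBUS
serial-line spec's recommendation, and the function name is only
illustrative:

#include <stdint.h>

/* Inter-frame delay (t3.5) in microseconds for MODBUS RTU.
 * Assumes 11 bits per character (start + 8 data + parity + stop).
 * Above 19200 baud the spec recommends a fixed 1750 us.
 */
static uint32_t modbus_t35_us(uint32_t baud)
{
    if (baud > 19200UL)
        return 1750UL;
    /* 3.5 characters * 11 bits / baud, expressed in microseconds */
    return (uint32_t)((35UL * 11UL * 1000000UL) / (10UL * baud));
}

At 38,400 baud this works out to roughly a millisecond, in line with the
figure above.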
On Mon, 13 Oct 2014 13:58:42 +0200, pozzugno <pozzugno@gmail.com>
wrote:

>I have a multi-drop two-wire RS485 bus. One node is the master and all
>the others are slaves.
>
>(snip)
>
>[I know there are some microcontrollers that automatically (at the
>hardware level) toggle an output pin when the last byte has been
>completely shifted out, but I'm not using one of them and they aren't
>that common.]
You must be quite desperate if you intend to use 1x550-style chips on
RS-485 :-). That chip family is useless for any high-speed half-duplex
communication.

You can get an interrupt when you load the last character into the Tx
shift register, but you can't get an interrupt when the last bit of the
last character has actually been shifted out of the Tx shift register.

In the real world, some high-priority code will have to poll for the
moment when the transmitter has actually put the last stop bit of your
last byte onto the line.
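A sketch of the kind of polling described above, before dropping the
driver; UART_STATUS, TX_SHIFT_EMPTY and rs485_driver_enable() are
placeholders for whatever your UART and board actually provide:

#include <stdbool.h>
#include <stdint.h>

/* Placeholder register/bit names -- substitute your UART's actual
 * "transmit shift register empty" flag.  On many parts the "data
 * register empty" flag is NOT sufficient: it fires while the last
 * byte is still being shifted out.
 */
extern volatile uint8_t UART_STATUS;
#define TX_SHIFT_EMPTY  (1u << 6)

void rs485_driver_enable(bool on);   /* drives the DE/RE pin */

/* Call after queueing the last byte of the frame. */
void rs485_end_of_frame(void)
{
    /* Busy-wait until the last stop bit has actually left the shifter. */
    while (!(UART_STATUS & TX_SHIFT_EMPTY))
        ;
    rs485_driver_enable(false);
}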
On 2014-10-13, upsidedown@downunder.com <upsidedown@downunder.com> wrote:

> You must be quite desperate if you intend to use 1x550-style chips on
> RS-485 :-). That chip family is useless for any high-speed half-duplex
> communication.
>
> You can get an interrupt when you load the last character into the Tx
> shift register, but you can't get an interrupt when the last bit of the
> last character has actually been shifted out of the Tx shift register.
>
> In the real world, some high-priority code will have to poll for the
> moment when the transmitter has actually put the last stop bit of your
> last byte onto the line.
And (in my experience) figuring out when that stop bit has been sent can
be problematic. Not all '550 "compatible" UARTs wait until the end of
the stop bit to set the "transmit shift register empty" status bit. Some
I've used set it as soon as the last data bit has been sent, and if you
turn off the driver at that point, you can lose the stop bit and create
a framing error.

--
Grant Edwards               grant.b.edwards at gmail.com
Yow! My polyvinyl cowboy wallet was made in Hong Kong by Montgomery Clift!
pozzugno <pozzugno@gmail.com> wrote:
> I have a multi-drop two-wire RS485 bus. One node is the master and all
> the others are slaves. The master is the only node that is authorized
> to initiate a transmission, addressing one slave. The addressed slave
> usually answers the master.
>
> The bus is half-duplex, so every node disables its driver. Only THE
> node that transmits data on the bus enables the driver, and it disables
> it as soon as it can, just after the last byte. An interrupt (transmit
> complete) usually triggers when the last byte has been completely
> shifted out, so the driver can be disabled immediately.
Ethernet has a minimum time between packets (the interframe gap) that is
independent of most other timing parameters. The idea is that it gives
the receiver enough time to get the data out of its buffer and get ready
to receive again.

-- glen
Grant Edwards <invalid@invalid.invalid> wrote:

(snip, someone wrote)
>> In the real world, some high-priority code will have to poll for the
>> moment when the transmitter has actually put the last stop bit of your
>> last byte onto the line.

> And (in my experience) figuring out when that stop bit has been sent
> can be problematic. Not all '550 "compatible" UARTs wait until the end
> of the stop bit to set the "transmit shift register empty" status bit.
> Some I've used set it as soon as the last data bit has been sent, and
> if you turn off the driver at that point, you can lose the stop bit and
> create a framing error.
For the usual asynchronous serial systems, the stop bit is the same
level as the inactive state of the line. As long as you don't start the
next character too soon, you are safe.

If you are using it on a line with multiple drivers, it seems to me that
you are using it for something it wasn't designed to do.

-- glen
On 2014-10-13, glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote:
> Grant Edwards <invalid@invalid.invalid> wrote:
>
> (snip)
>
> For the usual asynchronous serial systems, the stop bit is the same
> level as the inactive state of the line.
That's true, but I don't know why it's relevant. We're talking about
knowing when to turn off RTS at the end of the last byte in the message.
If you turn it off immediately after the last data bit, and the level of
the last data bit is opposite to the required stop-bit (idle) state,
then you end up with problems unless the line is biased strongly enough
to return it to its idle state in less than about 1/8 of a bit time. In
my experience, a lot of installations end up with no bias resistors at
all...
> As long as you don't start the next character too soon, you are safe.
There is no next character. We're talking about the last byte in a message.
> If you are using it on a line with multiple drivers, it seems to me
> that you are using it for something it wasn't designed to do.
That's exactly what RS485 is designed to do, but for it to work
reliably, you have to leave RTS on during a good portion of the stop bit
so that the driver can actively force the line back to the idle state.

--
Grant Edwards               grant.b.edwards at gmail.com
Yow! ... the MYSTERIANS are in here with my CORDUROY SOAP DISH!!
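If the UART's flag does turn out to fire before the stop bit is
finished, one way to act on the point above is to hold the driver for
roughly one more bit time before releasing it. This is only a sketch;
delay_us() and rs485_driver_enable() are assumed platform helpers:

#include <stdint.h>

#define BAUD          38400UL
/* About one bit time, rounded up (26 us at 38,400 baud). */
#define BIT_TIME_US   ((1000000UL + BAUD - 1UL) / BAUD)

void delay_us(uint32_t us);          /* platform-provided busy delay */
void rs485_driver_enable(int on);    /* drives the DE/RE pin */

void rs485_release_bus(void)
{
    /* ...called after the UART reports the shift register empty... */
    delay_us(BIT_TIME_US);           /* let the stop bit finish on the wire */
    rs485_driver_enable(0);
}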
On Mon, 13 Oct 2014 14:38:41 +0200, pozzugno wrote:

> On 13/10/2014 14:29, Wouter van Ooijen wrote:
>> The minimum delay that you calculated is just that: a minimum. Apart
>> from performance, there is no problem in waiting somewhat longer. IMO
>> you must come up with two figures: the minimum response time
>> (determined by the maximum driver turn-off delay) and the maximum
>> response time (determined by the time you can afford to lose with the
>> bus idling between request and response).
>
> OK, so you're confirming the solution is a delay.
>
>> As for the delay: my experience is that I need delays all over the
>> place. One approach is to have a free-running 64-bit counter (which
>> will roll over long after you are dead) and wait for it to exceed
>> start+delay.
>
> I usually use this approach for longer delays (milliseconds or
> seconds), so I can increment the counter in an ISR that triggers every
> millisecond. I don't like to fire a trigger every 100 us.
>
> 64 bits seems too wide to me. Every time I need to read it and compare
> it with the time, I have to disable interrupts.
So? You don't have to disable interrupts for very long. Yes, it adds to
the interrupt latency, but not by much if you're careful.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
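A sketch of the short critical section being discussed, for a 64-bit
tick counter incremented in a periodic timer ISR; irq_save() and
irq_restore() stand in for whatever interrupt-masking primitives your
platform actually has:

#include <stdint.h>

/* Incremented from a periodic timer ISR. */
static volatile uint64_t g_ticks;

/* Platform-specific: mask interrupts, returning the previous state. */
unsigned irq_save(void);
void irq_restore(unsigned state);

static uint64_t ticks_now(void)
{
    unsigned s = irq_save();      /* a few cycles of added latency */
    uint64_t t = g_ticks;         /* the copy is not atomic on small MCUs */
    irq_restore(s);
    return t;
}

/* Busy-wait until 'delay' ticks have elapsed since 'start', in the
 * spirit of "wait for the counter to exceed start+delay" above. */
static void wait_until(uint64_t start, uint64_t delay)
{
    while ((ticks_now() - start) < delay)
        ;
}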
On Mon, 13 Oct 2014 13:58:42 +0200, pozzugno wrote:

> I have a multi-drop two-wire RS485 bus. One node is the master and all
> the others are slaves.
>
> (snip)
>
> Of course, other interrupts can be triggered too. What happens when
> interrupt X (whatever it is) triggers just before the "transmit
> complete" interrupt? ISR X is called, postponing the execution of the
> "transmit complete" ISR, and the RS485 driver is disabled with a
> certain amount of delay. In the worst case, the driver could be
> disabled with a delay equal to the sum of the durations of all the ISRs
> that could trigger.
> [In this scenario, I'm thinking of ISRs that can't be interrupted by a
> higher-priority interrupt.]
Unless your processor is very primitive, you should be able to make the serial interrupt the highest priority. Or take David Brown's suggestion and disable all but the serial interrupt when you start to transmit the last byte.
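A sketch of that second approach (masking everything except the UART
interrupt around the last byte); every name here is a placeholder, since
the masking mechanism is entirely part-specific:

/* Placeholder masking primitives -- on most MCUs this is an
 * interrupt-enable register write or an NVIC priority change.
 */
void mask_all_irqs_except_uart(void);
void restore_irq_mask(void);
void uart_write(unsigned char b);        /* load the UART data register */
void rs485_driver_enable(int on);        /* drive the DE/RE pin */

void uart_send_last_byte(unsigned char b)
{
    mask_all_irqs_except_uart();         /* keep TX-complete latency tiny */
    uart_write(b);
}

/* TX-complete interrupt handler: now runs with minimal latency. */
void uart_tx_complete_isr(void)
{
    rs485_driver_enable(0);              /* drop the driver right away */
    restore_irq_mask();
}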
> If a node on the bus is very fast and starts transmitting (the master)
> or answering (one slave) immediately after receiving the last byte,
> while the previously transmitting node is still executing other ISRs,
> the final result is a corrupted transmission.
>
> What is the solution? I think the only one is to define, at design
> time, a minimum interval between the reception of the last byte from
> one node and the transmission of the first byte of the reply. This
> interval could be in the range of 100 microseconds and should be
> calibrated against the sum of the durations of *all* ISRs on *all*
> nodes on the bus. It isn't a simple calculation.
I believe you're going about the last half of this backwards. Do not calculate the worst-case interrupt latency -- specify it, and make it a requirement on the slave boards. This should be easy enough to do if you are in charge of all the software, and still quite doable if you're only in charge of the communications software (assuming a functional group).
> Moreover, implementing a short "software" delay in the range of a few
> microseconds isn't a simple task. An empty loop on a decreasing
> volatile variable is one solution, but the resulting delay isn't easy
> to calculate at design time, and it depends on the compiler, compiler
> settings, clock frequency and so on. Use a hardware timer just for
> this pause?
>
> How do you solve this problem?
In a UART without a FIFO, an easy way to do this would be to send one or
more bytes with the RS-485 driver still disabled, then enable the driver
at the appropriate time. Basically, use the UART as your timed event
generator.
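A sketch of this UART-as-timer idea: clock a dummy byte through the UART
with the RS-485 driver still disabled, and only start the real response
when that byte's TX-complete interrupt fires. All names here are
placeholders; a longer gap just means more dummy bytes:

#include <stdbool.h>

void uart_write(unsigned char b);       /* placeholder: load TX register */
void rs485_driver_enable(bool on);      /* placeholder: DE/RE pin */
void start_real_response(void);         /* placeholder: sends the reply */

static volatile bool waiting_for_gap;

/* Called when the last byte of the master's request has been received. */
void on_request_received(void)
{
    waiting_for_gap = true;
    rs485_driver_enable(false);   /* make sure we are not driving the bus */
    uart_write(0xFF);             /* dummy byte: times one character on the
                                     UART, but never reaches the bus */
}

/* TX-complete ISR. */
void uart_tx_complete_isr(void)
{
    if (waiting_for_gap) {
        waiting_for_gap = false;  /* one character time has elapsed */
        rs485_driver_enable(true);
        start_real_response();
    }
}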
>
> [I know there are some microcontrollers that automatically (at the
> hardware level) toggle an output pin when the last byte has been
> completely shifted out, but I'm not using one of them and they aren't
> that common.]
In my experience, unless you're really using a high baud rate and a slow
processor, or your ISRs are just plain incorrectly written, your
interrupt latency will be far lower than a bit interval.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
On Monday, October 13, 2014 at 11:08:45 PM UTC+2, Grant Edwards wrote:
> On 2014-10-13, glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote:
>
> (snip)
>
> If you turn it off immediately after the last data bit, and the level
> of the last data bit is opposite to the required stop-bit (idle) state,
> then you end up with problems unless the line is biased strongly enough
> to return it to its idle state in less than about 1/8 of a bit time. In
> my experience, a lot of installations end up with no bias resistors at
> all...
>
> (snip)
>
> That's exactly what RS485 is designed to do, but for it to work
> reliably, you have to leave RTS on during a good portion of the stop
> bit so that the driver can actively force the line back to the idle
> state.
I've never had much luck running without bias (or failsafe, as they call
it): too many false start bits, unless the wires are very long.

With the ~600R or so recommended pull-up/pull-down, I'd think it would
be fast enough even if you turn off the transmitter.

-Lasse
On Monday, October 13, 2014 at 7:35:45 PM UTC+2, Les Cargill wrote:
> pozzugno wrote:
>
> (snip)
>
> 485 isn't a good protocol these days. The kids get Ethernet on
> RasPi-class machines for class projects; it's not unreasonable to use a
> real comms stack.
But Ethernet needs hubs or switches, and that gets messy if you have a
lot of nodes.

-Lasse
