EmbeddedRelated.com
Forums

Half-duplex RS485 bus and how to manage direction signal

Started by pozz August 9, 2016
Each node on the half-duplex RS485 bus should start transmitting only if
all the other nodes are receiving (their transmitters are all disabled).

Usually the bus is master-slave, with the master sending a frame request
to a single slave and the slave sending back a frame answer.
In this scenario, the slave must be sure to start transmission of the
answer only after the master has disabled its transmitter.
At the same time, the master must be sure the slave has disabled its
transmitter before sending a new frame request.

One simple solution to this problem is to use a microcontroller that
automatically drives the TE (Transmitter Enable) signal. For example, the
USART peripheral of the SAMC21 from Atmel (Cortex-M0+) has this nice feature.
Of course, *all* the nodes on the bus should use such a microcontroller.
This solution has no software overhead for TE signal management.

Another solution is to introduce a short delay before answering (on the
slaves) and a short delay before sending the next frame request (on the
master).
This delay is typically short, so it is often implemented with a
blocking loop (how bad!). If you are lucky, you have a free hardware
timer that can be used for this purpose.
Of course, there is some overhead in the firmware to manage the
direction signal.
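To get a feel for the numbers, the guard delay is usually expressed in character times. A minimal host-testable sketch (plain C; the helper name is mine, not from any vendor API) that converts character times to microseconds for the usual 8N1 framing:

```c
#include <stdint.h>

/* One 8N1 character on the wire is 10 bits: start + 8 data + stop. */
#define BITS_PER_CHAR 10u

/* Guard delay, in microseconds, for 'nchars' character times at 'baud'.
 * Rounded up so the delay is never shorter than intended. */
static uint32_t guard_delay_us(uint32_t baud, uint32_t nchars)
{
    uint64_t bits = (uint64_t)nchars * BITS_PER_CHAR;
    return (uint32_t)((bits * 1000000u + baud - 1) / baud);
}
```

For example, a two-character guard at 9600 bps is roughly 2 ms, while at 57600 bps it shrinks to about 350 us; this is the delay you would load into a hardware timer instead of burning it in a blocking loop.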

Another solution is to use the USART peripheral itself to implement the
short delay. Before transmitting a frame, some dummy bytes can be
transmitted first. During the transmission of the dummy bytes, the TE
signal keeps the transmitter disabled. Only when the last dummy byte has
completely shifted out does the TE signal enable the transmitter.
This solution has some overhead too: a counter of the dummy bytes
already transmitted must be implemented.
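The dummy-byte counter can be sketched as a small state machine (plain C, host-testable; the structure and names are mine, not from any real HAL). Each call to `tx_step()` models one "transmit-complete" interrupt:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* While dummy bytes remain, TE stays low (driver disabled) so the
 * dummies never reach the bus; once the last dummy has shifted out,
 * TE goes high and the real frame bytes follow. Returns the byte
 * handed to the UART, or -1 when the frame is done. */
struct tx_state {
    const uint8_t *frame;
    size_t len, pos;
    unsigned dummies_left;   /* the counter mentioned in the text */
    bool te;                 /* Transmitter Enable line */
};

static int tx_step(struct tx_state *s)
{
    if (s->dummies_left > 0) {
        s->dummies_left--;
        if (s->dummies_left == 0)
            s->te = true;    /* last dummy shifted out: enable driver */
        return 0x00;         /* dummy byte, never seen on the bus */
    }
    if (s->pos < s->len)
        return s->frame[s->pos++];
    return -1;               /* frame complete */
}
```

The point of modeling it this way is that the delay costs no timer and no busy loop: the USART's own shift register paces the dummies.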

There is another solution that is more elegant and doesn't introduce any
overhead to manage the TE signal. I heard about this trick somewhere
(maybe here) in the past, but I have never tried to implement it in my
projects.
The trick can be applied if the frames on the bus can start with a
variable number of SOF (Start of Frame) characters and the SOF character
can be set to 0xFF.
In this case the transmitter is able to transmit a new frame immediately
after receiving the previous one, adding some SOF characters at the
start of the frame.

0xFF appears on the line as a single Start Bit (a short negative pulse
in the TTL domain).
Now, if the receiver disables the transmitter between any two Start Bits
(or before the Start Bit of the first SOF char), it will see some (or
no) SOF chars at the start of the frame and will discard them silently.
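The receiver side of the trick is just stripping a leading run of 0xFF. A minimal sketch (plain C, host-testable; the function name is mine):

```c
#include <stddef.h>
#include <stdint.h>

/* Strip any run of leading 0xFF SOF characters and return the offset
 * of the first real frame byte. Works with zero SOF chars too, which
 * implies the protocol-design constraint the trick needs: a real
 * frame must never begin with 0xFF. */
static size_t skip_sof(const uint8_t *buf, size_t len)
{
    size_t i = 0;
    while (i < len && buf[i] == 0xFF)
        i++;
    return i;
}
```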

However I can't explain what happens if the receiver disables its
transmitter *in the middle* of the Start Bit of a SOF char.
In this case, I think the receiver can be confused and can see some
frame errors... and not only for the first byte, but for all the bytes
in the frame if they are transmitted as a burst (with a single Stop Bit
between them).
On Wed, 10 Aug 2016 01:03:24 +0200, pozz <pozzugno@gmail.com> wrote:

>[...]
>One simple solution to this problem is to use a microcontroller that
>automatically moves TE (Transmitter Enable) signal. For example, the
>USART peripheral of SAMC21 from Atmel (Cortex-M0+) has this nice feature.
>Of course, *all* the nodes on the bus should use such a microcontroller.
>This solution has no software overhead for TE signal management.
Or at least a proper UART that generates an interrupt when the last stop
bit has actually been _shifted_out_. The garden variety 16550 family is
practically useless, since it only generates an interrupt when the last
byte is moved from the FIFO _to_ the shift register. It then takes a full
character time before the actual last stop bit has been transmitted out.
You would have to use a busy loop to check the status bit telling when
the last stop bit has actually been sent; only then can you disable the
TE.

I have seen a lot of cases where the last byte is received as 0x8?,
0xC?, 0xE?, when the TE goes inactive in the middle of the last
character and the fail-safe termination forces the last bit(s) to idle
("1"). The LSB is sent first, so the MSB bit(s) are affected.

One way around this could be keeping the receiver active all the time,
so you can listen for your own echo. As soon as you get the Rx interrupt
from the last character transmitted, you disable TE.
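The echo-listening scheme just described can be modeled as a tiny host-testable state machine (plain C; names and structure are mine, not from any real driver):

```c
#include <stdbool.h>
#include <stddef.h>

/* The receiver stays enabled during transmission, so every byte we
 * drive onto the bus comes back as an RX interrupt (our own echo).
 * TE is dropped only when the echo of the *last* byte has been
 * received, i.e. when its final stop bit is already on the wire. */
struct echo_state {
    size_t sent;      /* bytes handed to the UART for this frame */
    size_t echoed;    /* echoes received back so far */
    bool te;          /* Transmitter Enable line */
};

static void on_rx_echo(struct echo_state *s)
{
    s->echoed++;
    if (s->echoed == s->sent)
        s->te = false;   /* last byte fully on the bus: release driver */
}
```

A side benefit of listening to your own echo is cheap collision detection: if the byte read back differs from the byte sent, another driver was active at the same time.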
>[...]
>Another solution is to introduce a short delay before answering (on the
>slaves) and a short delay before sending the next frame request (on the
>master).
The internal processing delay on the slave is usually enough to solve
this issue. On a master, you may have to enforce it.
>[...]
>Another solution is to use the USART peripheral itself to implement the
>short delay. Before transmitting a frame, some dummy bytes can be
>transmitted first. [...] Only when the last dummy byte has
>shifted out completely, the TE signal can enable the transmitter.
>This solution has some overhead too. A counter of dummy bytes already
>transmitted should be implemented.
If you get an interrupt when that last bit of the dummy byte has actually been shifted out, just wait for the interrupt from that dummy byte, turn on TE and send your actual data frame.
Il 10/08/2016 08:04, upsidedown@downunder.com ha scritto:
> [...]
>> This solution has no software overhead for TE signal management.
>
> Or at least a proper UART that generates an interrupt, when the last
> stop bit has actually been _shifted_out_.
Oh yes, but the intent of my original post was to discuss methods to
avoid transmitting before the previous transmitter has switched back to
receiver mode.
> The garden variety 16550
> family is practically useless, since it only generates an interrupt,
> when the last byte is moved from the FIFO _to_ the shift register.
> [...]

Some years ago I worked on a microcontroller from Fujitsu with similar
behaviour. It was a mess.
>> Another solution is to introduce a short delay before answering (on the
>> slaves) and a short delay before sending the next frame request (on the
>> master).
>
> The internal processing delay on the slave is usually enough to solve
> this issue. On a master, you may have to enforce this.
Why do you think master and slaves are different on this?
>> [...]
>
> If you get an interrupt when that last bit of the dummy byte has
> actually been shifted out, just wait for the interrupt from that dummy
> byte, turn on TE and send your actual data frame.
Yes, it is exactly what I tried to explain. The drawback of this method
is some sort of overhead.

My main goal was to discuss the last method, which doesn't introduce any
overhead at all. You can enable the transmitter as soon as you are ready
to send something. The trick is to start the frame with some 0xFF
characters.
On Wed, 10 Aug 2016 09:04:50 +0200, pozz <pozzugno@gmail.com> wrote:

>[...]
>Oh yes, but my original post intent was to discuss the methods to avoid
>transmitting before the previous transmitter has switched back to
>receiver mode.
The real problem with 2-wire RS-485 is the TE timing accuracy. Turn off
the transmitter too early and the last byte is corrupted; turn it off
too late and there is a collision when another station starts to
transmit.

Instead of fixing the original problem, you try a workaround by
introducing an extra delay before any frame transmission. This may be
OK, but it should be remembered that any extra latencies will drop the
effective throughput, especially when short message frames are used.
Doubling the nominal line speed doesn't double the throughput. How much
is acceptable varies case by case.
>[...]
>Why do you think master and slaves are different on this?
The slave usually doesn't know what the master is going to ask it, so
it may take a while to prepare the answer. A 300 us response delay at
115k2 is 3.5 character times, so you could get the Modbus interframe gap
for "free" :-)

On the master side, you might have a prepared scan list just waiting for
a go-ahead, so the next request would go out immediately after getting
the response from the previous slave. This may require some additional
artificial delay.
>[...]
>My main goal was to discuss the last method that doesn't introduce an
>overhead at all. You can enable the transmitter as soon as you are
>ready to send something. The trick is to start the frame with some 0xFF.
This would be acceptable only for a protocol that starts with a known
byte, like 0x68 as in some IEC protocols, so that any bytes (such as
0xFF) preceding it are ignored by the receivers as line noise. Such a
method is useless e.g. for Modbus RTU, in which a frame can start with
any byte, including 0xFF.

My favorite for implementing half duplex, and especially Modbus RTU, is
the QUICC I/O coprocessor as found in the MC68360 and some PPCs, in
which you can program the pre- and postambles and idle timeouts on a
byte-by-byte basis and let the coprocessor do the whole frame sequence.
Il 10/08/2016 11:19, upsidedown@downunder.com ha scritto:
> [...]
> Instead of fixing the original problem, you try a workaround by
> introducing an extra delay before any frame transmission.
Deasserting the TE signal in an interrupt (if your hardware provides a
suitable interrupt when the byte has really shifted out on the bus)
*isn't* a fix of the original problem. First of all, there is always a
*latency* between the event (last bit shifted out) and the action
(deassertion of the TE signal).

Ok, this latency is usually very small, but in some cases it *could* be
longer. For example, other interrupts (timers, other UARTs,...) may
occur at the same time. On some hardware you can configure the priority
of the transmission-complete interrupt, and one ISR can be interrupted
by another interrupt with a higher priority. However, I think the more
frequent situation is that your interrupt request is delayed by another
ISR that is already running. Your TE deassertion could be delayed
whenever other interrupts occur at the same time; the worst case is the
sum of the worst-case durations of all the other ISRs.

This is why I introduce a short delay before transmitting a new frame on
the bus: just to be sure the other node has effectively disabled its
transmitter even in the worst-case situation.
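That worst-case reasoning can be turned into a number. A sketch (plain C, host-testable; all figures and names below are illustrative, not measured on any real system):

```c
#include <stddef.h>
#include <stdint.h>

/* The peer's TE deassertion can be delayed by every other ISR firing
 * back to back, so a safe pre-transmit guard is the sum of all
 * worst-case ISR durations plus a safety margin. */
static uint32_t worst_case_guard_us(const uint32_t *isr_worst_us,
                                    size_t n, uint32_t margin_us)
{
    uint32_t sum = margin_us;
    for (size_t i = 0; i < n; i++)
        sum += isr_worst_us[i];
    return sum;
}
```

With hypothetical worst-case ISR times of 50, 120 and 30 us and a 100 us margin, the guard works out to 300 us, i.e. a few character times at 115k2.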
> [...]
> This would be acceptable only for protocol that starts with a known
> byte, like 0x68 as in some IEC protocols, so that any bytes (such as
> 0xFF) preceding it is ignored by the receivers as line noise.
Yes, I think it is a very good method to solve this problem if you are
able to design the protocol.

Anyway, I'm not sure this works well in every situation. What happens if
the TE signal is deasserted *during* the start bit of a 0xFF? The
receivers would see a shorter start bit that could confuse the next
bytes... or not?
> Such method is useless e.g. for Modbus RTU, in which the frame can
> start with any byte, including 0xFF.
Of course.
> My favorite for implementing half duplex and especially Modbus RTU is
> the QUICC I/O-coprocessor as found in MC68360 and some PPCs, in which
> you can program the pre- and postambles and idle timeouts on a byte by
> byte basis and let the coprocessor do the whole frame sequence.
If all the nodes on the bus are SAMC21s (or any micro that manages the
TE signal automatically), you have solved the problem of half-duplex
communication (Modbus RTU could be somewhat more complex).
pozz <pozzugno@gmail.com> wrote:
> Each node on the half-duplex RS485 bus should start transmitting only if
> all the other nodes are receiving (their transmitters are all disabled).
> [...]
<snip>
> There is another solution that is more elegant and doesn't introduce any
> overhead to manage TE signal. I heard about this trick somewhere (maybe
> here) in the past, but I never tried to implement on my projects.
> The trick can be applied if the frames on the bus can start with a
> variable number of SOF (Start of Frame) characters and the SOF character
> can be set to 0xFF.
So there is overhead: you need to send, receive and drop SOF
characters...

With fixed length messages one can use DMA in the receiver. However,
extra SOF-s mean that you get variable length messages. You can add
padding at the end so that the payload is always within a fixed receive
window, but that is extra overhead.
> [...]
> However I can't explain what happens if the receiver disables its
> transmitter *in the middle* of a Start Bit of a SOF char.
> In this case, I think the receiver can be confused and can see some
> frame errors...
The receiver is supposed to synchronize to the edge of the start bit
(the transition from idle to the start bit). If the receiver disables
its transmitter *in the middle* of a start bit, then the edge as seen by
the receiver will be later than the real transition. Normally the
receiver should correctly synchronize to the second edge.

However, if the receiver starts looking for the edge of a start bit too
late, it will miss the second byte and only recognize the start bit of
the third byte. So you should lose at most two bytes.

-- Waldek Hebisch
On 8/10/2016 6:38 AM, pozz wrote:
> [...]
>> Anyway I'm not sure this works well in every situation. What happens if
>> TE signal is deasserted *during* the start bit of 0xFF? The receivers
>> would see a shorter start bit that can confuse next bytes... or not?
The way a UART should work (and all the UARTs I've seen do work this
way) is to sample the line for the leading edge of the start bit at a
rate either 8 or 16 times faster than the bit rate. This provides the
timing of the start bit accurately enough to find the middle of the
start bit. If the line is not still low when the UART looks for the
middle of the start bit, it is rejected as noise.

So your shortened start bits fall into two categories. If the shortened
start bit is shorter than half a bit time, it will be ignored. If it is
still longer than half a bit time, it will be picked up as a valid start
bit. There is the trouble: there will be some variation in the timing of
the receiver and transmitter, so the "middle of the bit" sampling may
fall into the next bit, skipping bits or seeing some twice. It can give
you a framing error if the start bit of the next char is sampled as the
stop bit. So the receiver can see a garbage character and miss the
second character. A third 0xFF should be received correctly.

-- Rick C
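The mid-bit qualification described above can be reduced to a toy model (plain C, host-testable; the tick granularity and names are illustrative only):

```c
#include <stdbool.h>

/* Toy model of a 16x-oversampling receiver: on a falling edge it
 * waits 8 sub-bit ticks (half a nominal bit) and re-samples the line.
 * If the line has already returned high, the pulse is rejected as
 * noise; otherwise it is accepted as a start bit. 'pulse_ticks' is
 * the low-pulse width in sub-bit ticks (16 ticks = one bit time). */
static bool start_bit_accepted(unsigned pulse_ticks)
{
    const unsigned mid_sample = 8;   /* middle of a nominal start bit */
    return pulse_ticks > mid_sample; /* line still low when re-sampled */
}
```

This captures the two categories in the post: a truncated start bit shorter than half a bit time vanishes harmlessly, while one longer than half a bit time is accepted with skewed timing, which is exactly the case that can corrupt the following bytes.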
Il 11/08/2016 02:20, rickman ha scritto:
> [...]
> So the receiver can see a garbage
> character and miss the second character. A third xFF should be received
> correctly.
The third byte could be another 0xFF, but it could also be the first
valid byte of the frame. So at least two 0xFF should be pushed in front
of a frame.

At 9600 bps that is a long delay (about 2 ms); if throughput isn't a
problem, it could be ok. At 57600 bps it should be fine (about 350 us).
Il 11/08/2016 01:05, antispam@math.uni.wroc.pl ha scritto:
> [...]
>> There is another solution that is more elegant and doesn't introduce any
>> overhead to manage TE signal. [...]
>
> So there is overhead: you need to send, receive and drop SOF
> characters...
Yes, the only solution without any overhead is to use hardware that
manages the TE signal automatically on every node of the bus. Here I'm
trying to find a slightly worse solution.
> With fixed length messages one can use DMA
> in the receiver. However, extra SOF-s mean that you get
> variable length messages. You can add padding at the
> end so that payload is always within fixed receive window,
> but that is extra overhead.
IMHO a protocol with fixed-length messages isn't common and isn't a good
fit in many cases. In my experience, the frames/messages can be of
(very) different lengths.
> [...]
> However, if receiver starts looking for
> edge of start bit too late it will miss the second byte
> and only recognize start bit of third byte. So, you should
> lose at most two bytes.
So you need to introduce at least two 0xFF to avoid losing useful data
bytes.