
Shared Communications Bus - RS-422 or RS-485

Started by Rick C November 2, 2022
On 9/11/22 11:46, Rick C wrote:
> On Tuesday, November 8, 2022 at 7:45:14 PM UTC-4, Clifford Heath wrote:
>> On 9/11/22 00:50, Rick C wrote:
>>> On Tuesday, November 8, 2022 at 7:54:59 AM UTC-4, Richard Damon wrote:
>>>> IF you don't start the looking for the start bit until the time has
>>>> passed for the END of the stop bit, and the receiver is 0.1% slow, then
>>>> every bit you lose 0.1% of a bit, or 1% per character, so after 50
>>>> consecutive characters you are 1/2 a bit late, and getting errors.
>>>
>>> There you go! You have just proven that no one would design a UART to work this way and for it to be used in the market place. There would be too many applications where the data burst would cause it to not work. Programming around such a design flaw would be such a PITA and expose the flaw, that the part would become a pariah.
>>
>> Yeah, but you can still insist that the stop bit fills 99%, or 90% of
>> the required time, and not get that pathology.
>
> I'm not clear on what you are saying. The larger the clock difference, the earlier the receiver has to look for the start bit. It will work just fine with the start bit check being delayed until the end of the stop bit, as long as the timing clocks aren't offset in one direction. Looking for the start bit in the middle of the stop bit gives a total of 5% tolerance, pretty much taking mistiming out of the list of problems for async data transmission. Drop that to 0.05% (your 99% example) and you are in the realm of crystal timing error on the two systems, ±250 ppm.
Go back to the first words I quoted from Richard:

"IF you don't start the looking for the start bit until the time has passed for the END of the stop bit, and the receiver is 0.1% slow, then every bit you lose 0.1% of a bit"

But if you wait until 95% of the stop bit time, and allow a new start bit to come early by 5%, then it doesn't matter if "the receiver is 0.1% slow" and you don't lose sync; the 5% early doesn't mount up over "50 consecutive characters".

Same if you wait 99% and the new start bit is only 1% early.

So your "There you go! You have just proven..." was a bogus situation proposed by Richard, that's trivially avoided, and basically all actual UARTs do that.
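To make the arithmetic concrete, here is a minimal C sketch (illustrative only; the numbers are the ones from the discussion, not from any particular UART) contrasting a receiver that only re-arms after the full stop bit with one that re-synchronizes on every start-bit edge:

    /* Drift model: receiver clock 0.1% slow, 10 bits per character
     * (start + 8 data + stop).  Without re-sync on each start edge,
     * the error accumulates until sampling fails at half a bit. */
    #include <stdio.h>

    int main(void)
    {
        const double clock_error   = 0.001; /* receiver 0.1% slow */
        const int    bits_per_char = 10;    /* start + 8 data + stop */

        double drift = 0.0;                 /* accumulated lateness, in bits */
        for (int ch = 1; ch <= 60; ch++) {
            drift += clock_error * bits_per_char;
            if (drift >= 0.5) {
                printf("no resync: framing lost at character %d\n", ch);
                break;
            }
        }

        /* Re-syncing on each start edge bounds the error to one
         * character's worth, so it never accumulates: */
        printf("with resync: worst error %.2f bits per character\n",
               clock_error * bits_per_char);
        return 0;
    }

Run as written it reports framing lost at character 50 in the no-resync case, matching the half-a-bit figure above, and a bounded 0.01-bit error per character once the receiver re-synchronizes on every start edge.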
On Wednesday, November 9, 2022 at 7:32:25 AM UTC-4, Clifford Heath wrote:
> On 9/11/22 11:46, Rick C wrote:
>> [...] Looking for the start bit in the middle of the stop bit gives a total of 5% tolerance, pretty much taking mistiming out of the list of problems for async data transmission. Drop that to 0.05% (your 99% example) and you are in the realm of crystal timing error on the two systems, ±250 ppm.
>
> Go back to the first words I quoted from Richard:
> "IF you don't start the looking for the start bit until the time has
> passed for the END of the stop bit, and the receiver is 0.1% slow, then
> every bit you lose 0.1% of a bit"
>
> But if you wait until 95% of the stop bit time, and allow a new start
> bit to come early by 5%, then it doesn't matter if "the receiver is 0.1%
> slow" and you don't lose sync; the 5% early doesn't mount up over "50
> consecutive characters".
>
> Same if you wait 99% and the new start bit is only 1% early.
>
> So your "There you go! You have just proven..." was a bogus situation
> proposed by Richard, that's trivially avoided, and basically all actual
> UARTs do that.
If you cherry-pick your numbers, you can make anything work. Looking for a start bit at the middle of the stop bit gives you the ±5% tolerance of timing. If you delay the point where you start looking for a start bit, you reduce this tolerance. So, in that case, if you are happy to require a ±0.1% tolerance clock under all conditions, then sure, you can look for the start bit later.

In the real world, there are users who expect a UART to work the way it is supposed to work, and who use less accurate timing references than a crystal. This UART won't work for them, and that would become known to users in general. While a claim has been made that such UARTs exist, no one has provided information about one.

I would also point out that the above timing analysis is not actually worst case, since it does not take into account the 1/16th or 1/8th bit jitter from the first character's start bit detection. So the requirements on the timing reference are even tighter when using the sloppy timing for start bit checking.
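Putting that derating into numbers: for 8N1 framing the stop-bit sample sits 9.5 bit times after the start edge, and a 16x-oversampling receiver can detect that edge up to 1/16 of a bit late. A small sketch of the resulting clock-tolerance budget (illustrative assumptions, not from any datasheet):

    /* Clock-tolerance budget for an 8N1 frame sampled mid-bit.
     * The last (stop-bit) sample must land within +/-0.5 bit of its
     * centre; start-edge quantization eats part of that margin. */
    #include <stdio.h>

    int main(void)
    {
        const double bit_span    = 9.5;        /* start edge to mid stop bit */
        const double edge_jitter = 1.0 / 16.0; /* 16x oversampling quantization */

        double ideal   = 0.5 / bit_span;                  /* no jitter */
        double derated = (0.5 - edge_jitter) / bit_span;  /* with jitter */

        printf("ideal combined tolerance: +/- %.2f%%\n", 100.0 * ideal);
        printf("with 1/16-bit jitter:     +/- %.2f%%\n", 100.0 * derated);
        return 0;
    }

This prints roughly ±5.26% and ±4.61%, showing that the start-edge quantization alone tightens the budget even before any delayed start-bit hunting is considered.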
On 11/8/22 8:50 AM, Rick C wrote:
> On Tuesday, November 8, 2022 at 7:54:59 AM UTC-4, Richard Damon wrote:
>> On 11/7/22 8:15 PM, Rick C wrote:
>>> On Monday, November 7, 2022 at 8:02:19 PM UTC-4, Richard Damon wrote:
>>>> YOU may consider it a design flaw, but I have seen too many serial ports
>>>> having this flaw in them to just totally ignore it.
>>>
>>> That is exceedingly hard to imagine, since it would take extra logic to implement. The logic of a UART is to first detect the start bit, which lands the state machine in the middle of said start bit, which then times to the middle of all subsequent bits (ignoring timing accuracy). So it thinks it is in the middle of the stop bit when the bit timing is complete. It would need more hardware to time to the end of the stop bit. This might be present for other purposes, but it should not be used to control looking for the start bit. This is by definition of the async protocol: use the stop bit time to resync to the next start bit. Any device that does not start looking for a new start bit at the point it thinks is the middle of the stop bit is defective by definition, and will never work properly with timing mismatches of one polarity, the receiver's clock being slower than the transmitter clock.
>>
>> Depends on how you design it. IF you start a counter at the leading edge
>> of the start bit and then detect the counter at its middle value, then
>> the stop bit ends when the counter finally expires at the END of the
>> stop bit.
>
> There is still some extra logic to distinguish the condition. There is a bit timing counter, and a counter to track which bit you are in. Everything happening in the operation of the UART is happening at the middle of a bit. Then you need extra logic to distinguish the end of a bit.
Nope, the simplest logic is to have your 8x sub-bit counter start at 0 and count up from the leading edge; on the count values of 3, 4, and 5 you sample the bit for noise detection, and the counter rolls over from 7 to 0 to step into the next bit. The counter stops when it rolls from 7 to 0 at the end of the stop bit; that is, it counts all the way through the stop bit.
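A sketch of that sampling scheme, in C for concreteness (the state names and 8N1 framing are my assumptions). It keeps the 0..7 counter with a majority vote at counts 3, 4, and 5, but re-arms the start-bit hunt as soon as the stop bit's mid-bit vote is in, rather than waiting for the final roll-over — the variant argued for elsewhere in this thread:

    #include <stdint.h>
    #include <stdbool.h>

    enum rx_state { IDLE, START, DATA, STOP };

    struct uart_rx {
        enum rx_state state;
        uint8_t phase;   /* 8x sub-bit counter, 0..7 */
        uint8_t votes;   /* high samples seen at phases 3, 4, 5 */
        uint8_t bitno;   /* data bit index */
        uint8_t shift;   /* assembled character */
    };

    /* Call once per 8x oversample tick; returns true when *out is valid. */
    bool rx_tick(struct uart_rx *rx, bool rxd, uint8_t *out)
    {
        if (rx->state == IDLE) {
            if (!rxd) {                  /* falling edge: possible start bit */
                rx->state = START;
                rx->phase = 0;
                rx->votes = 0;
            }
            return false;
        }

        if (rx->phase >= 3 && rx->phase <= 5 && rxd)
            rx->votes++;                 /* mid-bit majority-vote window */
        rx->phase++;

        /* Stop bit: once the mid-bit vote is in, go straight back to
         * hunting for the next start edge instead of waiting for the
         * stop bit to end -- the behaviour under discussion above. */
        if (rx->state == STOP && rx->phase == 6) {
            bool ok = rx->votes >= 2;    /* stop bit must read high */
            rx->state = IDLE;
            if (ok) {
                *out = rx->shift;
                return true;
            }
            return false;                /* framing error */
        }

        if (rx->phase == 8) {            /* rolled over 7 -> 0: bit done */
            bool bit = rx->votes >= 2;
            rx->phase = 0;
            rx->votes = 0;
            if (rx->state == START) {
                rx->state = bit ? IDLE : DATA;  /* high again: line glitch */
                rx->bitno = 0;
                rx->shift = 0;
            } else if (rx->state == DATA) {
                rx->shift |= (uint8_t)bit << rx->bitno;
                if (++rx->bitno == 8)
                    rx->state = STOP;
            }
        }
        return false;
    }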
>>> I guess I'm not certain that would cause an error, actually. It would initiate the start bit detection logic, and as long as it does not require seeing the idle condition before detecting the start bit condition, it would still work. Again, this is expected by the definition of asynchronous format. This would result in a grosser offset in timing the middle of the bits, so the allowable timing error is less. But it will still work otherwise. 5% is a very large combined error. Most devices are timed by crystals with maybe ±200 ppm error.
>>
>> IF you don't start the looking for the start bit until the time has
>> passed for the END of the stop bit, and the receiver is 0.1% slow, then
>> every bit you lose 0.1% of a bit, or 1% per character, so after 50
>> consecutive characters you are 1/2 a bit late, and getting errors.
>
> There you go! You have just proven that no one would design a UART to work this way and for it to be used in the market place. There would be too many applications where the data burst would cause it to not work. Programming around such a design flaw would be such a PITA and expose the flaw, that the part would become a pariah.
Except that we have bought many USB serial ports with just this flaw in them. So I guess that "nobody" actually exists. They seem to be based on an FTDI chip, but maybe just a "look alike" where they did the bare minimum design work.

The key point is that very few applications actually have very long uninterrupted sequences of characters, and typical PCs will tend to naturally add small gaps just because the OS isn't that great. It doesn't require much to fix the issue.
> I recall the Intel USART was such a part for other technical flaws. So they finally came out with a new version that fixed the problems.
On Thursday, November 10, 2022 at 12:46:03 AM UTC-4, Richard Damon wrote:
> On 11/8/22 8:50 AM, Rick C wrote:
>> [...] There is still some extra logic to distinguish the condition. There is a bit timing counter, and a counter to track which bit you are in. Everything happening in the operation of the UART is happening at the middle of a bit. Then you need extra logic to distinguish the end of a bit.
>
> Nope, the simplest logic is to have your 8x sub-bit counter start at 0
> and count up from the leading edge; on the count values of 3, 4, and 5
> you sample the bit for noise detection, and the counter rolls over from
> 7 to 0 to step into the next bit. The counter stops when it rolls from
> 7 to 0 at the end of the stop bit; that is, it counts all the way
> through the stop bit.
You've conveniently left out a significant amount of logic. Detecting specific states of the sub-bit counter uses more logic than the other functions. Most UARTs use 16 sub-samples and so have a 4-bit counter. Counters have a carry chain built in, so the carry out is a free zero-count detector, and counters are most efficient to implement as down counters with various preloads.

The counter is loaded with the half-bit count while waiting for the leading edge of the start bit. The same zero detection (carry out) that triggers the next load is also the bit-center mark. All loads during an active character load a full bit count (different in the msb only), and every zero detect marks a bit center. Getting to the end of the final stop bit would require loading the counter with another half-bit count: extra logic. More than anything, why would anyone want to add that extra half-bit count when it's not part of any requirement?
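A minimal C model of that down-counter arrangement (the names and the 16x divide are assumptions for illustration): the zero detect that reloads the counter doubles as the bit-centre strobe, and the half-bit and full-bit preloads differ only in the msb:

    #include <stdint.h>
    #include <stdbool.h>

    #define HALF_BIT 7u   /* 0b0111: start edge to first bit centre at 16x */
    #define FULL_BIT 15u  /* 0b1111: centre to centre; differs in msb only */

    struct baud_ctr {
        uint8_t count;    /* 4-bit down counter */
        bool    hunting;  /* waiting for the start-bit edge */
    };

    /* One 16x oversample tick; returns true on each bit-centre strobe. */
    bool baud_tick(struct baud_ctr *c, bool rxd)
    {
        if (c->hunting) {
            c->count = HALF_BIT;     /* held preloaded while idle */
            if (!rxd)
                c->hunting = false;  /* start edge: begin counting down */
            return false;
        }
        if (c->count == 0) {         /* "carry out" doubles as centre mark */
            c->count = FULL_BIT;     /* reload for the next full bit */
            return true;             /* sample the line here */
        }
        c->count--;
        return false;
    }

The surrounding framer would consume one strobe per bit (start, eight data, stop) and set hunting again right after the stop bit's centre strobe; nothing in this scheme ever times to the end of the stop bit.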
> [...]
> > There you go! You have just proven that no one would design a UART to work this way and for it to be used in the market place. There would be too many applications where the data burst would cause it to not work. Programming around such a design flaw would be such a PITA and expose the flaw, that the part would become a pariah.
>
> Except that we have bought many USB serial ports with just this flaw in
> them.
Oh, you mean the Chinese UARTs that most people won't touch because they are full of flaws! Got it. I was talking about real UARTs that people use in real designs.

I used to buy CH340-based USB cables for work, but we eventually figured out that they were unreliable, and now I only use FTDI cables. The CH340 cables seemed to work, but would quit after an hour or two or three.
> So I guess that "nobody" actually exists.
>
> They seem to be based on an FTDI chip, but maybe just a "look alike"
> where they did the bare minimum design work.
There are lots of clones. If you have an FTDI chip with this stop bit problem, I'd love to see it. I think FTDI would love to see it too.
> The key point is that very few applications actually have very long
> uninterrupted sequences of characters, and typical PCs will tend to
> naturally add small gaps just because the OS isn't that great. It
> doesn't require much to fix the issue.
The key point is that a company like FTDI is not going to sell such crap. "Fixing" such issues is only possible if you have control over the system, and not everyone is designing a system from scratch.

My brother's company makes a device that interfaces to a measurement device outputting data periodically. For who knows what reason, that company changed the product so it stopped outputting the headers. So a small box was made to add the headers every few lines. The UARTs in it just have to work correctly, since there's no option to modify any other piece of equipment. If they don't work correctly, they get pulled, other equipment is used, and the original maker gets a black eye. Enough black eyes and people don't buy that equipment anymore.

--
Rick C.
--+++ Get 1,000 miles of free Supercharging
--+++ Tesla referral code - https://ts.la/richard11209