On 11/7/22 8:15 PM, Rick C wrote:
> On Monday, November 7, 2022 at 8:02:19 PM UTC-4, Richard Damon wrote:
>> On 11/7/22 11:05 AM, Rick C wrote:
>>> On Monday, November 7, 2022 at 6:00:09 AM UTC-4, Stef wrote:
>>>> On 2022-11-05 Rick C wrote in comp.arch.embedded:
>>>>> On Saturday, November 5, 2022 at 6:58:24 AM UTC-4, David Brown wrote:
>>>>>
>>>>>> In UART communication, this is handled at the protocol level rather than
>>>>>> the hardware (though some UART hardware may have "idle detect" signals
>>>>>> when more than 11 bits of high level are seen in a row). Some
>>>>>> UART-based protocols also use a "break" signal between frames - that is
>>>>>> a string of at least 11 bits of low level.
>>>>>>
>>>>>> If you do not have such pauses, and a receiver is out of step,
>>>>>
>>>>> You have failed to explain how a receiver would get "out of step". The receiver syncs to every character transmitted. If all characters are received, what else do you need? How does it get "out of step"?
>>>> I have seen this happen in long messages (few kB) with no pauses between
>>>> characters and transmitter and receiver set to 8,N,1. It seemed that the
>>>> receiver needed the complete stop bit and then immediately saw the low
>>>> of the next start bit. Detecting the edge when it was ready to see it,
>>>> not when it actually happened. When the receiver is slightly slower than
>>>> the transmitter, this caused the detection of the start bit (and
>>>> therefore the whole character) to shift a tiny bit. This added up over
>>>> the character stream until it eventually failed.
>>>>
>>>> Lowering the baud rate did not solve the issue, but inserting pauses
>>>> after a number of chars did. What also solved it was setting the
>>>> transmitter to 2 stop bits and the receiver to one stop bit. This was a
>>>> one way stream and this may not be possible on a bi-directional stream.
>>>>
>>>> I would expect a sensible UART implementation to allow for a slightly
>>>> shorter stop bit to compensate for issues like this. But apparently this
>>>> UART did not do so in the 1 stop bit setting. I have not tested if
>>>> setting both ends to 2 stop bits also solved the problem.
>>>
>>> If a UART receiver can not properly receive a message like this, it is defective. The point of the start and stop bits is to provide the synchronization. The receiver simply needs to detect the stop bit state (by sampling where the receiver thinks is the middle of the bit) and then immediately start looking for the leading edge of the next start bit. The receiver will then be synchronized to the new character's bit timing, so it will never slip. That gives up to ±5% combined timing error tolerance.
>>>
>>> If the receiver waits until a later time, such as the expected end of the received stop bit, to start looking for a start bit leading edge, it will not be able to tolerate a timing error where the transmitter is faster than the receiver, making the timing tolerance unipolar, i.e. 5% rather than ±5%.
>>>
>>> That's a receiver design flaw, or the transmitter is sending short stop bits, which you can easily see on the scope with a delayed trigger control.
>>>
>>> You should be able to diagnose which end has the problem by connecting a different type of receiver to the stream. If a different receiver UART is able to receive the messages without fault, the problem is obviously the failing receiver.
>>>
>> YOU may consider it a design flaw, but I have seen too many serial ports
>> having this flaw in them to just totally ignore it.
>
> That is exceedingly hard to imagine, since it would take extra logic to implement. The logic of a UART is to first detect the start bit, which lands the state machine in the middle of said start bit; it then times to the middle of all subsequent bits (ignoring timing accuracy). So it thinks it is in the middle of the stop bit when the bit timing is complete. It would need more hardware to time to the end of the stop bit. That hardware might be present for other purposes, but it should not be used to control looking for the start bit. It is part of the definition of the async protocol to use the stop bit time to resync to the next start bit. Any device that does not start looking for a new start bit at the point it thinks is the middle of the stop bit is defective by definition, and will never work properly with timing mismatches of one polarity: the receiver's clock being slower than the transmitter clock.
That depends on how you design it. IF you start a counter at the leading
edge of the start bit and sample when the counter reaches its middle
value, then the stop bit ends when the counter finally expires, at the
END of the stop bit.
>
> I guess I'm not certain that would cause an error, actually. It would initiate the start bit detection logic, and as long as it does not require seeing the idle condition before detecting the start bit condition, it would still work. Again, this is expected by the definition of the asynchronous format. It would result in a larger offset in timing the middle of the bits, so the allowable timing error is less. But it will still work otherwise. 5% is a very large combined error. Most devices are timed by crystals with maybe ±200 ppm error.
IF you don't start looking for the start bit until the time has passed
for the END of the stop bit, and the receiver is 0.1% slow, then every
bit you lose 0.1% of a bit, or 1% per 10-bit character, so after 50
consecutive characters you are 1/2 a bit late and getting errors.
>
>
>> Yes, the "robust" design will allow for a short stop bit, but you can't
>> count on all serial adaptors allowing for it.
>
> There are always garbage designs. I'm surprised I never ran into one. I guess, being crystal controlled, there was never enough error to add up to a bit.
As I pointed out, 0.1% means 50 characters, and 0.001% means 5000
characters; with a long enough string of characters you eventually hit
the problem.
If you only use short messages, you never see the problem.
>
>
>> Part of the problem is that (at least as far as I know) the Asynchronous
>> Serial Format isn't actually a "Published Standard", but just a
>> de-facto protocol that is simple enough that it mostly just works, but
>> still hides a few gotchas for corner cases.
>
> True, but anyone designing chips should understand what they are designing. If they don't, you get garbage. I learned that lesson in a class in school where I screwed up a detail on a program I wrote because I didn't understand the spec. I've tried to ask questions ever since, and even if they seem like stupid questions, I don't read the specs wrong.
>
The problem is that if you describe the sampling as "Middle of bit",
then going to the end of the stop bit makes sense.
If you are adding functionality like RS-485 driver control that needs
to know when that end of bit is, it is easy to forget that the receiver
has different needs than the transmitter.