
UART behavior for CTS/RTS - Kinetis bugs?

Started by Dave Nadler October 27, 2016
On 10/28/2016 10:42 AM, Dave Nadler wrote:
> On Thursday, October 27, 2016 at 4:42:07 PM UTC-4, Don Y wrote:
>> ...A TWX would handle this differently than a modern PC, etc....
>
> Of course, as a 'modern' PC hasn't had a serial port in a decade ;-)
>
> Seriously Don, you haven't answered the question:
> Is there any specification on how this should be handled?
> Or even any consensus?
No, there is no "specification". The reason I threw out the TWX/PC comparison is to highlight how different devices can handle the same circumstances. And, "consensus" depends on your specific application.

[Originally, RTS/CTS were used to handshake the flow of data from DTE to DCE: RTS to indicate a *desire* to send data to the DCE; CTS to indicate a *willingness* to ACCEPT data from the DTE. I.e., assert RTS, wait for CTS, send data as long as CTS is valid. There was not a similar handshake from DCE to DTE! Now, RTS is effectively RTR: "Ready to Receive", not "Request to Send"! There's a big difference in intent.]

In your case, you have two specific devices talking to each other. You can possibly tune your implementation to that scenario.

In my projects, I typically only know ONE end of the link (the end that I'm coding) and have to *hope* the other party (which will be decided by the end user and may change, over time) will be reasonable in its implementation. So, my driver and handler tend to be very aggressive in ensuring that data is not lost (because it's often not possible to NAK a transmission; if I miss it, it's gone!)

And, as some devices may *not* be reasonable (e.g., legacy devices where the UART driver handles all the handshaking by toggling digital I/O ports), I have to safeguard the incoming data stream that *I* will be processing so I can see when it has not been cooperative (which is why I add a "FIFO overrun" flag to the per-character flags *in* the FIFO).

E.g., if I can receive the two different messages:

  Stop Now
  Do Not Stop

(because someone did a lousy job designing the protocol -- which is surprisingly common!), then I need to be able to note that

  Stop <OVERRUN>
  <OVERRUN> Stop

are both potentially ambiguous. Was the first message "Stop ... Resume"? Or, "Stop Now"? Was the second "Continue ... Stop"? Or, "Do Not Stop"?
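To make that concrete: a minimal sketch of such a FIFO, where the first character stored *after* a gap carries the overrun marker (all names here are illustrative, not from any particular driver):

/* Receive FIFO carrying per-character status flags. When the buffer
 * overflows, the NEXT character successfully stored is tagged
 * FLAG_FIFO_OVERRUN so the consumer can see the gap in the stream. */
#include <stdbool.h>
#include <stdint.h>

#define RX_FIFO_SIZE 64

#define FLAG_PARITY_ERR   0x01
#define FLAG_FRAMING_ERR  0x02
#define FLAG_FIFO_OVERRUN 0x04   /* character(s) lost BEFORE this one */

typedef struct {
    uint8_t data;
    uint8_t flags;
} rx_entry_t;

static rx_entry_t rx_fifo[RX_FIFO_SIZE];
static volatile unsigned rx_head, rx_tail;
static bool overrun_pending;     /* an entry had to be dropped */

/* Called from the UART Rx ISR for each received character. */
void rx_fifo_put(uint8_t data, uint8_t hw_flags)
{
    unsigned next = (rx_head + 1) % RX_FIFO_SIZE;

    if (next == rx_tail) {       /* FIFO full: this character is lost */
        overrun_pending = true;
        return;
    }
    rx_fifo[rx_head].data  = data;
    rx_fifo[rx_head].flags = hw_flags;
    if (overrun_pending) {       /* mark the gap in the stream */
        rx_fifo[rx_head].flags |= FLAG_FIFO_OVERRUN;
        overrun_pending = false;
    }
    rx_head = next;
}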
> The interface at issue here is between the Bluegiga/SiLabs Bluetooth
> module and the Kinetis microcontroller.
> The Bluegiga Bluetooth module is a black box; its behavior is fixed.
> I cannot change the receive buffering in this module.
Do you know if it is 'processor-based' or 'dedicated hardware'? (see below)
> Sounds like most of you think that:
>
> 1) the Kinetis part should be more careful in assuring a valid CTS
>
> 2) the Bluegiga module is faulty:
>    - for not processing at least one character after de-asserting CTS
That's what appears to be the case (see below)
> - for extremely short CTS de-assertion pulse, and
Why is it a *pulse*? Does the module effectively use CTS as an acknowledgement for each character received? I.e., as if the developer had written the Rx ISR to drop CTS, read the receive holding register and then reassert CTS when the character had been "processed"?

[I.e., you can see how this *might* make sense and isn't strictly problematic -- *if* it was coded to handle the character that is likely in transit on the heels of the previously received character]

Try watching the signals (data and flow control) with a logic analyzer and see if there is a fixed, almost immutable relationship between them. Almost as if "Received Data Available" was wired to "Clear to Send", in hardware. And, waited for the processing of the character to "reset" the CTS "flag".
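If that guess is right, the module's receive path might look something like the purely speculative sketch below (register, pin and function names are hypothetical -- this is NOT from any Bluegiga documentation):

#include <stdint.h>

#define PIN_CTS_OUT 5                    /* hypothetical GPIO number */

extern volatile uint8_t UART_RHR;        /* receive holding register */
extern void gpio_set(int pin);
extern void gpio_clear(int pin);
extern void process_char(uint8_t ch);    /* may take arbitrarily long */

/* Speculative: CTS used as a per-character acknowledgement. */
void uart_rx_isr(void)
{
    gpio_clear(PIN_CTS_OUT);             /* "not clear to send" */
    uint8_t ch = UART_RHR;               /* fetch the character */
    process_char(ch);
    gpio_set(PIN_CTS_OUT);               /* reassert only when done */

    /* Hazard: a character already in flight when CTS dropped lands
     * while we are still in process_char(); with only a holding
     * register (no FIFO), it is overwritten or lost. */
}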
> - for consequently dropping characters transmitted by the Kinetis
If there is *no* incoming FIFO in the device (i.e., just the receiver holding register), then the above scenario would make sense. Any delays in the device's *processing* of the previously received character would effectively cause the next character to be dropped (or, the previous overwritten).
On 10/28/2016 11:15 AM, Don Y wrote:
> On 10/28/2016 10:42 AM, Dave Nadler wrote:
>> On Thursday, October 27, 2016 at 4:42:07 PM UTC-4, Don Y wrote:
>>> ...A TWX would handle this differently than a modern PC, etc....
>>
>> Of course, as a 'modern' PC hasn't had a serial port in a decade ;-)
>>
>> Seriously Don, you haven't answered the question:
>> Is there any specification on how this should be handled?
>> Or even any consensus?
> In your case, you have two specific devices talking to each other.
> You can possibly tune your implementation to that scenario.
>
> In my projects, I typically only know ONE end of the link (the end
> that I'm coding) and have to *hope* the other party (which will
> be decided by the end user and may change, over time) will be
> reasonable in its implementation. So, my driver and handler
> tend to be very aggressive in ensuring that data is not lost
> (because it's often not possible to NAK a transmission; if I miss
> it, it's gone!)
Additionally, I tend to write comm drivers/handlers that are more generic. I.e., that support full-fledged ioctl(2)'s. So, I can choose to configure the device to support XON/XOFF, RTR/CTS, breaks, bit rates, parity, etc. from the application layer AT RUN-TIME, based on the needs of the user's peripheral equipment. In that way, I don't have to keep reinventing the same wheel.
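For instance, the shape such a run-time interface might take (a sketch only; the struct, names and values are illustrative, not a real driver's API):

#include <stdbool.h>
#include <stdint.h>

enum flow_ctrl { FLOW_NONE, FLOW_XON_XOFF, FLOW_RTR_CTS };

struct line_config {
    uint32_t       bit_rate;     /* e.g. 115200 */
    uint8_t        data_bits;    /* 5..8 */
    uint8_t        stop_bits;    /* 1 or 2 */
    char           parity;       /* 'N', 'E', 'O' */
    enum flow_ctrl flow;
    bool           send_break_on_error;
};

/* Driver entry point, in the spirit of ioctl(2); defined elsewhere. */
int uart_ioctl_set_config(int fd, const struct line_config *cfg);

/* Application layer picks the policy at RUN-TIME to suit whatever
 * peripheral the end user has attached: */
void configure_for_legacy_printer(int fd)
{
    struct line_config cfg = {
        .bit_rate  = 9600,
        .data_bits = 8,
        .stop_bits = 1,
        .parity    = 'N',
        .flow      = FLOW_XON_XOFF,  /* the peripheral dictates this */
        .send_break_on_error = false,
    };
    uart_ioctl_set_config(fd, &cfg);
}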
On 10/28/2016 1:17 AM, David Brown wrote:
>> I believe that the only reasonable thing to do would be to finish sending
>> the character, then wait until CTS is asserted before sending.
>
> Yes, anything else would disrupt the low-level behaviour of the UART,
> which is designed to transfer in units of one character. If the
> transmitter broke off in the middle of the character, a receiver would
> see the start of the character followed by high bits.
... unless the transmitter was restarted before the receiver finished counting out the remaining bit-times. In that case, the receiver can see all sorts of data -- including transitions between logic levels that are NOT SYNCHRONIZED with its sample clock (which is typically "set" at the start bit -- of the transmission that had been aborted -- and not re-set until the next "ready for start bit" portion of the algorithm)
> I also think it is odd to see something using hardware flow control in
> modern devices. Hardware flow control can be a real pain when you have
> buffers, FIFOs, DMA, etc. Flow control is usually handled at a higher
> protocol level now. Rather than using CTS/RTS, or XON/XOFF, you define
> it at a higher level such as replying to telegrams with NACK's. Or you
> just note that since your UART is running at perhaps 115kbps or less,
> and your microcontrollers are running at 50 MHz or more, you don't need
> any kind of flow control - each side is ready all the time.
Each side *can* be ready all the time -- if designed to be. OTOH, if other activities and buffer lengths prevent a consumer from processing all of the data made available to it before additional data are presented, then the actual clock rate of the processor is immaterial.

[E.g., I've seen SPARCstations (LX -- 50MHz) "lose time" because they missed timer interrupts -- and those are far less frequent than even a SLOW serial port! <http://www.pcvr.nl/tcpip/append_b.htm>]

If, for example, a consumer waits until an entire "message" is available before processing ANY of it, then there is a very real possibility that the next message can arrive before it's finished with the previous: where to *put* it?
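A sketch of that failure mode, assuming a simple ping-pong pair of message buffers (all names illustrative):

#include <stdbool.h>
#include <stdint.h>

#define MSG_MAX 128

struct msg_buf {
    uint8_t data[MSG_MAX];
    unsigned len;
    volatile bool full;     /* set by the Rx path, cleared by consumer */
};

static struct msg_buf bufs[2];  /* ping-pong pair */
static unsigned fill_idx;       /* buffer the Rx path is filling */

/* Called at end-of-message. Returns false when both buffers are
 * occupied: the moment flow control must ALREADY have taken effect,
 * or the next message is simply lost. */
bool msg_complete(void)
{
    bufs[fill_idx].full = true;
    unsigned next = fill_idx ^ 1;
    if (bufs[next].full)
        return false;           /* consumer too slow: nowhere to put it */
    fill_idx = next;
    return true;
}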
On 10/28/2016 7:23 AM, Grant Edwards wrote:
> On 2016-10-28, David Brown <david.brown@hesbynett.no> wrote:
>>
>>> I believe that the only reasonable thing to do would be to finish sending
>>> the character, then wait until CTS is asserted before sending.
>>
>> Yes, anything else would disrupt the low-level behaviour of the
>> UART, which is designed to transfer in units of one character. If
>> the transmitter broke off in the middle of the character, a receiver
>> would see the start of the character followed by high bits.
>
> Which is an entirely valid character assuming no parity. It has a 50%
> chance of being a valid character with parity enabled.
Note that there is also the possibility of the transmitter being resumed/restarted before the aborted character *time* has expired.
> Stopping the transmitter in mid-character is simply 100% broken.
+42

There's no way this can be useful unless you were operating in simplex or in an interlocked protocol (like the *original* use of RTS/CTS; not the current RTR/CTS interpretation). How else would the receiver know when it is "appropriate" to alter the state of this control signal?

[Yet another case where most documentation is incomplete. Where's the figure indicating how much (time) BEFORE the start bit the CTS signal is examined -- to go/no-go the transmission of THIS character?]
>> I also think it is odd to see something using hardware flow control in
>> modern devices. Hardware flow control can be a real pain when you have
>> buffers, FIFOs, DMA, etc.
>
> Why? If hardware flow control is implemented properly (which means
> it's in HW), then it's completely transparent regardless of buffers,
> FIFOs, DMA, etc. Trying to implement hardware flow control in
> software is usually futile.
It's no harder to implement flow control in software than it is to handle the receipt of characters! The EXACT SAME timeliness issues apply: if you can get around to *noticing* that you've received more characters than you can buffer BEFORE you actually lose characters, then you obviously can toggle a DI/O to inform the remote transmitter of this.

The issue is deciding how much "slack" you need in your buffer to accommodate variations in Rx ISR latency -- how LATE are you to notice the receipt of a character (and how deep is the hardware FIFO in the UART to tolerate that lateness without loss of data)? And, at the same time, how late do you expect the transmitting device to be in noticing the change in the state of the incoming pacing signal?

If you're feeding the transmitter from an ISR (not DMA), then you add a test to check the state of the line before stuffing the next character into the transmitter. The only real "complication" comes from having to now monitor the handshaking signal to determine when it is appropriate to RESUME the transmitter ISR (cuz the transmitter is now essentially "ready, but disabled" -- unable to generate a new "event" to signal the loading of the next character).
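For example, the receive side of such a software-paced scheme might look like the sketch below, where the watermarks encode exactly those latency allowances (rtr_assert()/rtr_deassert() are hypothetical helpers driving the outgoing handshake line; real code would also guard the shared flag against the ISR):

#include <stdbool.h>

#define RX_BUF_SIZE 256
#define HIGH_WATER  192   /* deassert RTR here: 64 bytes of slack  */
#define LOW_WATER    64   /* reassert RTR once drained below this  */

extern void rtr_assert(void);    /* drive the handshake line out */
extern void rtr_deassert(void);

static volatile unsigned rx_count;        /* bytes currently buffered */
static volatile bool rtr_is_asserted = true;

/* Called from the Rx ISR after buffering each character. */
void rx_flow_check(void)
{
    if (rtr_is_asserted && rx_count >= HIGH_WATER) {
        rtr_deassert();          /* remote may STILL send a few more! */
        rtr_is_asserted = false;
    }
}

/* Called from the consumer after draining characters. */
void consumer_flow_check(void)
{
    if (!rtr_is_asserted && rx_count <= LOW_WATER) {
        rtr_assert();
        rtr_is_asserted = true;
    }
}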
Dave - as someone mentioned here, there are all sorts of ways of
using RS-232 UARTs. I am currently debugging a file interchange routine
which is causing all sorts of trouble, but it is intended to follow the
form below, which is a somewhat common setup.
   Two machines are interconnected via a 9-pin cable with the receive and
transmit lines switched midway, a la a "null modem". The RTS and CTS lines
are switched in the same fashion.
   When either machine is willing to accept data, it raises RTS, which
causes CTS to be raised at the other machine. When the other machine wants
to send data, it checks to see that CTS is high and, if so, transmits as
long as CTS is high.
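A minimal sketch of the transmit side of that arrangement (cts_is_high(), uart_tx_ready() and uart_tx() are hypothetical helpers):

#include <stdbool.h>
#include <stdint.h>

extern bool cts_is_high(void);   /* state of the incoming CTS line */
extern bool uart_tx_ready(void); /* transmit holding register empty */
extern void uart_tx(uint8_t ch);

/* Returns the number of bytes actually sent; stops as soon as the
 * far end drops CTS (i.e. lowers its RTS through the null modem). */
unsigned send_while_clear(const uint8_t *p, unsigned n)
{
    unsigned sent = 0;

    while (sent < n && cts_is_high()) {
        while (!uart_tx_ready())
            ;                    /* wait out the previous character */
        uart_tx(p[sent++]);      /* note: CTS sampled per character */
    }
    return sent;
}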

Hul

Dave Nadler <drn@nadler.com> wrote:
> Hi all - Perhaps some of you have encountered this?
> The post linked below shows the Kinetis UART continues to transmit after
> the receiver has de-asserted CTS:
> https://community.nxp.com/thread/432154
> Freescale engineers claim their part is OK and it's the other guy's fault...
> Here's my question: As this is an asynchronous link, the receiver can
> de-assert CTS at any time. What is the transmitter supposed to do?
> For example, suppose CTS is de-asserted in the middle of transmitting
> a character: Must the transmitter immediately stop, and restart
> transmitting the character from the start bit when CTS is again
> asserted (which in turn introduces a race condition around the end of
> character transmission)? Or finish the current character transmission
> and then pause (another race condition for "current character")?
> Is there any specification on how this should be handled?
> Or even any consensus?
> Thanks in advance,
> Best Regards, Dave
> PS: I've always used a protocol layer that corrects dropped characters,
> so this never impacted an application, but some parts like the SiLabs
> Bluegiga doing SPP don't give that option (at the layer of controlling
> the Bluetooth device, or supporting a non-protocol client expecting a
> simple ASCII data stream). Even worse, these things drop status messages
> into the (application data) stream with no consistent header/delimiter -
> good luck catching all possible Bluetooth events.
On Fri, 28 Oct 2016 10:17:43 +0200, David Brown
<david.brown@hesbynett.no> wrote:

>> I believe that the only reasonable thing to do would be to finish sending
>> the character, then wait until CTS is asserted before sending.
>
> Yes, anything else would disrupt the low-level behaviour of the UART,
> which is designed to transfer in units of one character. If the
> transmitter broke off in the middle of the character, a receiver would
> see the start of the character followed by high bits.
>
> I also think it is odd to see something using hardware flow control in
> modern devices. Hardware flow control can be a real pain when you have
> buffers, FIFOs, DMA, etc. Flow control is usually handled at a higher
> protocol level now. Rather than using CTS/RTS, or XON/XOFF, you define
> it at a higher level such as replying to telegrams with NACK's. Or you
> just note that since your UART is running at perhaps 115kbps or less,
> and your microcontrollers are running at 50 MHz or more, you don't need
> any kind of flow control - each side is ready all the time.
The last time I used true RTS/CTS handshaking was with a serial matrix printer a few decades ago.

The CTS pin is sometimes used to control the Data Direction pin on an RS-485 two-wire half-duplex transceiver. Unfortunately, the garden-variety 16550-family UARTs turn off the pin when the last character is moved into the transmit shift register, _not_ when the last stop bit is actually transmitted from the UART.
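A common workaround is to poll the line status register until TEMT ("transmitter empty", bit 6 on the 16550 family, set only when the shift register has also drained) before turning the bus around. A sketch, with hypothetical I/O accessors:

#include <stdint.h>

#define LSR_TEMT 0x40            /* THR and shift register both empty */

extern uint8_t uart_read_lsr(void);      /* read Line Status Register */
extern void rs485_driver_disable(void);  /* release the direction pin */

void rs485_end_of_frame(void)
{
    while ((uart_read_lsr() & LSR_TEMT) == 0)
        ;                        /* busy-wait for the last stop bit */
    rs485_driver_disable();      /* now safe to turn the bus around */
}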
Don Y <blockedofcourse@foo.invalid> wrote:
>
> [Originally, RTS/CTS were used to handshake the flow of data from
> DTE to DCE: RTS to indicate a *desire* to send data to the DCE;
> CTS to indicate a *willingness* to ACCEPT data from the DTE. I.e.,
> assert RTS, wait for CTS, send data as long as CTS is valid. There
> was not a similar handshake from DCE to DTE!
AFAICS DTR and DTS were used for DCE to DTE handshake
> Now, RTS is effectively
> RTR: "Ready to Receive", not "Request to Send"! There's a big
> difference in intent.]
Well, this meaning of RTS corresponds to old DTR. I think that current practice is clearer than the old one: when I first saw the names of the handshake lines, it was natural for me that when "Request to Send" is asserted it means that the _other_ end should send data...

--
Waldek Hebisch
On 10/28/2016 5:17 PM, antispam@math.uni.wroc.pl wrote:
> Don Y <blockedofcourse@foo.invalid> wrote:
>>
>> [Originally, RTS/CTS were used to handshake the flow of data from
>> DTE to DCE: RTS to indicate a *desire* to send data to the DCE;
>> CTS to indicate a *willingness* to ACCEPT data from the DTE. I.e.,
>> assert RTS, wait for CTS, send data as long as CTS is valid. There
>> was not a similar handshake from DCE to DTE!
>
> AFAICS DTR and DTS were used for DCE to DTE handshake
Remember, the standard was originally intended to allow big blue and ma bell to talk to each other (e.g., Bell 103 modem). Trying to relate this to EIA232F changes lots of basic assumptions (e.g., lots of interface signals are no longer in play!)

[Speaking historically...]

The "data terminal" *was* a "terminal" -- an electroMECHANICAL device. DTR was asserted at the start of an exchange with the attached MODEM (DCE) device to indicate the DTE's (TTY) willingness to enter into a conversation. I.e., "ready to receive an incoming call, or initiate an outgoing call (via an ACU in the modem)".

For an incoming call, eventually, the MODEM informs the TTY of its detection of a ring signal (RI == Ring Indicator). Note that RI almost literally means "the bell in the phone is ringing NOW... and now it has stopped... and now it's ringing again, etc." I.e., the DTE can actually sense the ring *pattern*, not just the fact that there is a call coming in. It is up to the TTY to determine when and if the call will be answered (e.g., wait for the 6th ring).

The TTY asserts DTR to "wake up" the MODEM (in some MODEM chipsets, the DTR *input* is essentially a RESET signal; when not asserted, the MODEM chipset is HELD RESET!) Once the MODEM "powers up", it asserts DSR (Data SET Ready). I.e., DTR and DSR just say "Hello, I am alive and well" for each of their respective parties.

When the TTY wants to send data to the MODEM, it asserts RTS (REQUESTING the go-ahead to send data). When the MODEM is ready to handle that data, it replies by asserting CTS ("It's OK for you to send that data, now!")

At any time, the TTY can drop DTR and the MODEM will hang up! As the remote device can also hang up at any time, the MODEM signals that fact to the TTY by dropping DCD (Data Carrier Detected -- connection broken!). The process cannot restart until the DTR signal is dropped (to RESET the MODEM). And, the MODEM will acknowledge this by dropping DSR. The RTS/CTS signals are similarly interlocked.

You have to remember that the original MODEMs were little more than analog circuits. These interface signals directly controlled them and reported their state. All of the "smarts" controlling the sequencing was implemented in the TTY (DTE) device. The MODEM simply MOdulated and DEModulated the analog signals on the PSTN. Look at the physical size of a 103 modem and remember that it was built before SMT components. So, each resistor was larger than a SOIC8! In fact, the "IC" had only been invented a few years earlier!!

[Imagine how large the MODEM would have had to be if it had real *smarts* inside it!]

E.g., RTS effectively turned *on* the transmit carrier. The MODEM told the TTY that the Tx carrier had been turned on and was now ready to be MODULATED by returning CTS. Thereafter, the incoming data (TxD) would directly control the modulation of that carrier -- using the timing inherent in the TxD signal! (I.e., if you mucked with the individual bit times, the MODEM conveyed that to the remote DCE! Note that, nowadays, the presence of an MCU *inside* a MODEM allows the DCE-DTE interface to run at a different data rate than the DCE-DCE interface -- slower OR faster!)

Note that dropping RTS would result in the MODEM dropping CTS! This is consistent with the above explanation: the MODEM is telling the TTY that it is no longer monitoring the TxD signal to control the Tx carrier.

If you encounter such a device, nowadays, chances are your "driver" will choke because of these sorts of direct controls of the MODEM's internals.
I.e., imagine what would happen if every time you dropped RTS to signal you were "too busy for more data" the remote device responded by saying "well, *I* am too busy for you, too!" And, conversely, when you reasserted RTS, the remote device reasserted CTS!

Note that the Centronics printer interface's original implementation also had similar interlocking signals in the protocol -- that are now largely perverted owing to the increased capabilities of processors.
>> Now, RTS is effectively
>> RTR: "Ready to Receive", not "Request to Send"! There's a big
>> difference in intent.]
>
> Well, this meaning of RTS corresponds to old DTR. I think
No. "Old DTR" had exactly two state changes per phone call: turning on at the start and off at the end. By contrast, RTR turns on and off repeatedly during a "call".
> that current practice is clearer than the old one: when
> I first saw the names of the handshake lines, it was natural for
> me that when "Request to Send" is asserted it means that
> the _other_ end should send data...
Again, you have to evaluate the names in the context of a TTY talking to a MODEM. And, an *historical* modem, not the sorts of "modems with computers inside them" that are the norm, today. Nowadays, we think of everything as being a DTE (TTY/computer role). That symmetry didn't exist when the standard was originally created; each end of the RS232 link had very specific responsibilities and roles.
In article <4c2f9b2e-570c-475a-8838-ef024e90518b@googlegroups.com>, 
drn@nadler.com says...
> On Thursday, October 27, 2016 at 4:42:07 PM UTC-4, Don Y wrote:
>> ...A TWX would handle this differently than a modern PC, etc....
>
> Of course, as a 'modern' PC hasn't had a serial port in a decade ;-)
Side point: actually not true. For laptops/netbooks/phones/tablets that is true. Just this week I had to get two desktop/tower PCs (actually small form factor units). These came with connectors as standard for a serial port and a parallel one. Yes, these were NEW units, even with things like USB 3 and DDR4 RAM. See

http://www.misco.co.uk/product/2577743/LENOVO-M700-SFF-INTEL-H110-CORE-I3-6100-1X4GB-DDR4-2133MHZ-500GB-7200RPM-3-5inch-SATA-DVDplus-RW-DL-?selectedTabIndex=2&tabBarViewName=ProductTechnicalSpecifications&page=1&#tabs

Not very common, but NOT rare by any means. Often folks need these to talk to older equipment like building infrastructure (heating, phone systems) and they need timely handling of the extra modem signals. A lot of USB-to-serial adapters do not handle CTS/RTS and DTR/DSR properly or timely, if at all.

--
Paul Carpenter | paul@pcserviceselectronics.co.uk
<http://www.pcserviceselectronics.co.uk/> PC Services
<http://www.pcserviceselectronics.co.uk/pi/> Raspberry Pi Add-ons
<http://www.pcserviceselectronics.co.uk/fonts/> Timing Diagram Font
<http://www.badweb.org.uk/> For those web sites you hate
On Fri, 28 Oct 2016 19:28:19 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

> On 10/28/2016 5:17 PM, antispam@math.uni.wroc.pl wrote:
> [Speaking historically...]
>
> The "data terminal" *was* a "terminal" -- an electroMECHANICAL device.
Both the Teletype as well as the serial port in the big iron in the computer room are DTEs (Data Terminal Equipment), and the short cable is connected to a DCE (Data Communication Equipment), in practice a (radio)modem. The two DCEs communicated with each other over the phone line or radio link.

To connect two DTEs directly to each other, such as a terminal and a computer, you needed a null modem. With synchronous links, this was a real hardware device, generating the Rx/Tx clocks etc. for the two DTEs. In asynchronous systems, a few cable jumpers did the trick. These days, swapping pins 2 and 3 on either DB9 or DB25 suffices, since hardware handshake is rarely used.
> When the TTY wants to send data to the MODEM, it asserts RTS (REQUESTING
> the go-ahead to send data). When the MODEM is ready to handle that data,
> it replies by asserting CTS ("It's OK for you to send that data, now!")
With big iron, quite often a half-duplex connection was used (prior to echo canceling). After the DTE asserted the RTS line, the modem had to wait until the incoming traffic stopped, and was then able to activate the transmitter and assert the CTS line. If the DTE doesn't wait for CTS, the first bits would be lost. This is especially important with radio modems, where turning on a (high power) transmitter can take several hundred milliseconds, while relays connect power to the high power RF stages.

These days the RTS signal is sometimes used for data direction control on two-wire RS-485 half-duplex connections. Unfortunately most UARTs (especially the 1x550 series) are more or less useless for this due to braindead design, various FIFOs and software drivers, making the RTS control very inaccurate.
> You have to remember that the original MODEMs were little more than
> analog circuits. These interface signals directly controlled them and
> reported their state. All of the "smarts" controlling the sequencing
> was implemented in the TTY (DTE) device. The MODEM simply MOdulated
> and DEModulated the analog signals on the PSTN. Look at the physical
> size of a 103 modem and remember that it was built before SMT components.
> So, each resistor was larger than a SOIC8! In fact, the "IC" had
> only been invented a few years earlier!!
The largest modem connected to a real 110 Bd Teletype was a Nokia model, which was 19" wide and 3-4 U high. One explanation is that Nokia used the same enclosure also for synchronous modems for big iron, which of course are much more complex.