Reply by Paul October 30, 2016
In article <9q091c9ou458mfg6i85r8n2n4n2bo175ol@4ax.com>, 
upsidedown@downunder.com says...
> On Sat, 29 Oct 2016 09:03:47 +0100, Paul
> <paul@pcserviceselectronics.co.uk> wrote:
>
> >In article <4c2f9b2e-570c-475a-8838-ef024e90518b@googlegroups.com>,
> >drn@nadler.com says...
> >>
> >> On Thursday, October 27, 2016 at 4:42:07 PM UTC-4, Don Y wrote:
> >> > ...A TWX would handle this differently than a modern PC, etc....
> >>
> >> Of course, as a 'modern' PC hasn't had a serial port in a decade ;-)
> >
> >Side point actually not true.
> >
> >For Laptops/netbooks/Phones/tablets that is true.
>
> Five years ago, I was able to buy a laptop with a _real_ UART, not one
> with on board USB/Serial converter. But I guess this would be
> impossible these days.
Considering most mobile-type devices are trending toward being as thin as possible (meaning as breakable as possible), having a true serial port and a DB9 on these types of devices would these days require a cable adapter or docking station.

--
Paul Carpenter | paul@pcserviceselectronics.co.uk
<http://www.pcserviceselectronics.co.uk/> PC Services
<http://www.pcserviceselectronics.co.uk/pi/> Raspberry Pi Add-ons
<http://www.pcserviceselectronics.co.uk/fonts/> Timing Diagram Font
<http://www.badweb.org.uk/> For those web sites you hate
Reply by October 29, 2016
On Sat, 29 Oct 2016 09:03:47 +0100, Paul
<paul@pcserviceselectronics.co.uk> wrote:

>In article <4c2f9b2e-570c-475a-8838-ef024e90518b@googlegroups.com>,
>drn@nadler.com says...
>>
>> On Thursday, October 27, 2016 at 4:42:07 PM UTC-4, Don Y wrote:
>> > ...A TWX would handle this differently than a modern PC, etc....
>>
>> Of course, as a 'modern' PC hasn't had a serial port in a decade ;-)
>
>Side point actually not true.
>
>For Laptops/netbooks/Phones/tablets that is true.
Five years ago, I was able to buy a laptop with a _real_ UART, not one with on board USB/Serial converter. But I guess this would be impossible these days.
Reply by October 29, 2016
On Sat, 29 Oct 2016 01:51:19 +0300, upsidedown@downunder.com wrote:

>On Fri, 28 Oct 2016 10:17:43 +0200, David Brown
><david.brown@hesbynett.no> wrote:
>
>>> I believe that the only reasonable thing to do would be to finish sending
>>> the character, then wait until CTS is asserted before sending.
>>>
>>
>>Yes, anything else would disrupt the low-level behaviour of the UART,
>>which is designed to transfer in units of one character. If the
>>transmitter broke off in the middle of the character, a receiver would
>>see the start of the character followed by high bits.
>>
>>I also think it is odd to see something using hardware flow control in
>>modern devices. Hardware flow control can be a real pain when you have
>>buffers, FIFOs, DMA, etc. Flow control is usually handled at a higher
>>protocol level now. Rather than using CTS/RTS, or XON/XOFF, you define
>>it at a higher level such as replying to telegrams with NACK's. Or you
>>just note that since your UART is running at perhaps 115kbps or less,
>>and your microcontrollers are running at 50 MHz or more, you don't need
>>any kind of flow control - each side is ready all the time.
>>
>
>The last time I used true RTS/CTS handshaking was with a serial
>matrix printer a few decades ago.
>
>The CTS pin is sometimes used to control the Data Direction pin on an
>RS-485 two wire half duplex transceiver. Unfortunately, the garden
>variety 14550 family UARTs turn off the CTS pin, when the last
>character is moved into the transmit shift register, _not_ when the
>last stop bit is actually transmitted from the UART.
Correction: the RTS pin is more often used for data direction control.
Reply by October 29, 2016
On Fri, 28 Oct 2016 19:28:19 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

>On 10/28/2016 5:17 PM, antispam@math.uni.wroc.pl wrote:
>[Speaking historically...]
>
>The "data terminal" *was* a "terminal" -- an electroMECHANICAL device.
Both the Teletype as well as the serial port in the big iron in the computer room are DTEs (Data Terminal Equipment), and the short cable is connected to a DCE (Data Communication Equipment), in practice a (radio)modem. The two DCEs communicated with each other over the phone line or radio link.

To connect two DTEs directly to each other, such as a terminal and a computer, you needed a null modem. With synchronous links, this was a real hardware device, generating the Rx/Tx clocks etc. for the two DTEs. In asynchronous systems, a few cable jumpers did the trick.

These days swapping pins 2 and 3 on both DB9 and DB25 will do the trick, since hardware handshake is rarely used.
>When the TTY wants to send data to the MODEM, it asserts RTS (REQUESTING
>the go-ahead to send data). When the MODEM is ready to handle that data,
>it replies by asserting CTS ("It's OK for you to send that data, now!")
With big iron, quite often a half-duplex connection was used (prior to echo canceling). After the DTE asserted the RTS line, the modem had to wait for the incoming traffic to stop before it could activate the transmitter and then assert the CTS line. If the DTE doesn't wait for CTS, the first bits would be lost.

This is especially important with radio modems, where turning on a (high power) transmitter can take several hundred milliseconds while relays connect power to the high power RF stages.

These days the RTS signal is sometimes used for data direction control on two wire RS-485 half duplex connections. Unfortunately most UARTs (especially the 1x550 series) are more or less useless due to braindead design, various FIFOs and software drivers, making the RTS control very inaccurate.
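To make that half-duplex turnaround concrete, here is a toy Python model of the sequence described above: the DTE must not start shifting bits until CTS comes up, which only happens after the (possibly long) transmitter key-up. The class name, delay figure, and API are invented for illustration; no real modem exposes an interface like this.

```python
# Toy model of half-duplex RTS/CTS turnaround (illustrative only).
class HalfDuplexModem:
    """CTS comes up only after the key-up delay following RTS."""

    def __init__(self, keyup_delay_ms=200):
        self.keyup_delay_ms = keyup_delay_ms   # e.g. relays feeding the RF stages
        self.cts = False
        self.elapsed_ms = 0

    def assert_rts(self):
        # Model the wait: incoming traffic stops, transmitter keys up, then CTS.
        self.elapsed_ms += self.keyup_delay_ms
        self.cts = True

    def send(self, data):
        if not self.cts:
            raise RuntimeError("sent before CTS: leading bits would be lost")
        return data                             # delivered intact

modem = HalfDuplexModem()
try:
    modem.send("hello")        # DTE jumps the gun
except RuntimeError as err:
    print(err)                 # those bits never made it onto the air
modem.assert_rts()             # wait out the turnaround first...
assert modem.send("hello") == "hello"
```

The point of the model is only the ordering: data offered before CTS is lost, data offered after it is delivered.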
>You have to remember that the original MODEMs were little more than
>analog circuits. These interface signals directly controlled them and
>reported their state. All of the "smarts" controlling the sequencing
>was implemented in the TTY (DTE) device. The MODEM simply MOdulated
>and DEModulated the analog signals on the PSTN. Look at the physical
>size of a 103 modem and remember that it was built before SMT components.
>So, each resistor was larger than a SOIC8! In fact, the "IC" had
>only been invented a few years earlier!!
The largest modem connected to a real 110 Bd Teletype was a Nokia model, which was 19" wide and 3-4 U high. One explanation is that Nokia used the same enclosure also for synchronous modems for big iron, which of course are much more complex.
Reply by Paul October 29, 2016
In article <4c2f9b2e-570c-475a-8838-ef024e90518b@googlegroups.com>, 
drn@nadler.com says...
> On Thursday, October 27, 2016 at 4:42:07 PM UTC-4, Don Y wrote:
> > ...A TWX would handle this differently than a modern PC, etc....
>
> Of course, as a 'modern' PC hasn't had a serial port in a decade ;-)
Side point actually not true.

For Laptops/netbooks/Phones/tablets that is true.

Just this week I had to get two desktop/tower PCs (actually small form factor units). These came as standard with connectors for a serial port and a parallel one. Yes, these were NEW units, even with things like USB 3 and DDR4 RAM. See

http://www.misco.co.uk/product/2577743/LENOVO-M700-SFF-INTEL-H110-CORE-I3-6100-1X4GB-DDR4-2133MHZ-500GB-7200RPM-3-5inch-SATA-DVDplus-RW-DL-?selectedTabIndex=2&tabBarViewName=ProductTechnicalSpecifications&page=1&#tabs

Not very common, but NOT rare by any means. Often folks need these to talk to older equipment like building infrastructure (heating, phone system), and they need timely handling of the extra modem signals. A lot of USB-to-serial adapters do not handle CTS/RTS and DTR/DSR properly or in a timely manner, if at all.

--
Paul Carpenter | paul@pcserviceselectronics.co.uk
<http://www.pcserviceselectronics.co.uk/> PC Services
<http://www.pcserviceselectronics.co.uk/pi/> Raspberry Pi Add-ons
<http://www.pcserviceselectronics.co.uk/fonts/> Timing Diagram Font
<http://www.badweb.org.uk/> For those web sites you hate
Reply by Don Y October 28, 2016
On 10/28/2016 5:17 PM, antispam@math.uni.wroc.pl wrote:
> Don Y <blockedofcourse@foo.invalid> wrote:
>>
>> [Originally, RTS/CTS were used to handshake the flow of data from
>> DTE to DCE: RTS to indicate a *desire* to send data to the DCE;
>> CTS to indicate a *willingness* to ACCEPT data from the DTE. I.e.,
>> assert RTS, wait for CTS, send data as long as CTS is valid. There
>> was not a similar handshake from DCE to DTE!
>
> AFAICS DTR and DSR were used for DCE to DTE handshake
Remember, the standard was originally intended to allow Big Blue and Ma Bell to talk to each other (e.g., Bell 103 modem). Trying to relate this to EIA232F changes lots of basic assumptions (e.g., lots of interface signals are no longer in play!)

[Speaking historically...]

The "data terminal" *was* a "terminal" -- an electroMECHANICAL device. DTR was asserted at the start of an exchange with the attached MODEM (DCE) device to indicate the DTE's (TTY) willingness to enter into a conversation. I.e., "ready to receive an incoming call, or initiate an outgoing call (via an ACU in the modem)".

For an incoming call, eventually, the MODEM informs the TTY of its detection of a ring signal (RI == Ring Indicator). Note that RI almost literally means "the bell in the phone is ringing NOW... and now it has stopped... and now it's ringing again, etc." I.e., the DTE can actually sense the ring *pattern*, not just the fact that there is a call coming in. It is up to the TTY to determine when and if the call will be answered (e.g., wait for the 6th ring).

The TTY asserts DTR to "wake up" the MODEM (in some MODEM chipsets, the DTR *input* is essentially a RESET signal; when not asserted, the MODEM chipset is HELD RESET!) Once the MODEM "powers up", it asserts DSR (Data SET Ready). I.e., DTR and DSR just say "Hello, I am alive and well" for each of their respective parties.

When the TTY wants to send data to the MODEM, it asserts RTS (REQUESTING the go-ahead to send data). When the MODEM is ready to handle that data, it replies by asserting CTS ("It's OK for you to send that data, now!")

At any time, the TTY can drop DTR and the MODEM will hang up! As the remote device can also hang up at any time, the MODEM signals that fact to the TTY by dropping DCD (Data Carrier Detected -- connection broken!). The process can not restart until the DTR signal is dropped (to RESET the MODEM). And, the MODEM will acknowledge this by dropping DSR. The RTS/CTS signals are similarly interlocked.
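The interlocking just described can be sketched as a toy state machine. The signal names follow the post; the sequencing logic is deliberately simplified (no RI/answer path, no carrier timing), so treat the class as an illustration, not a V.24 implementation.

```python
# Toy state machine for the DTR/DSR/RTS/CTS/DCD interlocks (illustrative).
class Modem:
    def __init__(self):
        self.dsr = self.cts = self.dcd = False

    def set_dtr(self, level):
        if level:
            self.dsr = True            # modem "powers up" out of reset
        else:
            # Dropping DTR hangs up and resets everything.
            self.dsr = self.cts = self.dcd = False

    def set_rts(self, level):
        if not self.dsr:
            return                     # held reset: RTS is ignored
        self.cts = level               # CTS follows RTS once the modem is alive

    def remote_hangup(self):
        self.dcd = False               # carrier lost: connection broken

m = Modem()
m.set_dtr(True);  assert m.dsr        # DTR -> DSR: both sides alive
m.set_rts(True);  assert m.cts        # RTS -> CTS: OK to modulate TxD now
m.set_rts(False); assert not m.cts    # dropping RTS drops CTS too
m.set_dtr(False); assert not m.dsr    # dropping DTR hangs up / resets
```

The key property the model captures is that CTS is slaved to RTS, and everything is slaved to DTR.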
You have to remember that the original MODEMs were little more than analog circuits. These interface signals directly controlled them and reported their state. All of the "smarts" controlling the sequencing was implemented in the TTY (DTE) device. The MODEM simply MOdulated and DEModulated the analog signals on the PSTN.

Look at the physical size of a 103 modem and remember that it was built before SMT components. So, each resistor was larger than a SOIC8! In fact, the "IC" had only been invented a few years earlier!! [Imagine how large the MODEM would have had to be if it had real *smarts* inside it!]

E.g., RTS effectively turned *on* the transmit carrier. The MODEM told the TTY that the Tx carrier had been turned on and was now ready to be MODULATED by returning CTS. Thereafter, the incoming data (TxD) would directly control the modulation of that carrier -- using the timing inherent in the TxD signal! (i.e., if you mucked with the individual bit times, the MODEM conveyed that to the remote DCE! Note that, nowadays, the presence of an MCU *inside* a MODEM allows the DCE-DTE interface to run at a different data rate than the DCE-DCE interface -- slower OR faster!)

Note that dropping RTS would result in the MODEM dropping CTS! This is consistent with the above explanation: the MODEM is telling the TTY that it is no longer monitoring the TxD signal to control the Tx carrier.

If you encounter such a device, nowadays, chances are your "driver" will choke because of these sorts of direct controls of the MODEM's internals. I.e., imagine what would happen if every time you dropped RTS to signal you were "too busy for more data" the remote device responded by saying "well, *I* am too busy for you, too!" And, conversely, when you reasserted RTS, the remote device reasserted CTS!
Note that the Centronics printer interface's original implementation also had similar interlocking signals in the protocol -- that are now largely perverted owing to the increased capabilities of processors.
>> Now, RTS is effectively
>> RTR: "Ready to Receive", not "Request to Send"! There's a big
>> difference in intent.]
>
> Well, this meaning of RTS corresponds to old DTR. I think
No. "Old DTR" had exactly two state changes per phone call: turning on at the start and off at the end. By contrast, RTR turns on and off repeatedly during a "call".
> that current practice is clearer than the old one: when
> I first saw the names of the handshake lines it was natural for
> me that when "Request to Send" is asserted it means that
> the _other_ end should send data...
Again, you have to evaluate the names in the context of a TTY talking to a MODEM. And, an *historical* modem, not the sorts of "modems with computers inside them" that are the norm, today.

Nowadays, we think of everything as being a DTE (TTY/computer role). That symmetry didn't exist when the standard was originally created; each end of the RS232 link had very specific responsibilities and roles.
Reply by October 28, 2016
Don Y <blockedofcourse@foo.invalid> wrote:
>
> [Originally, RTS/CTS were used to handshake the flow of data from
> DTE to DCE: RTS to indicate a *desire* to send data to the DCE;
> CTS to indicate a *willingness* to ACCEPT data from the DTE. I.e.,
> assert RTS, wait for CTS, send data as long as CTS is valid. There
> was not a similar handshake from DCE to DTE!
AFAICS DTR and DSR were used for DCE to DTE handshake
> Now, RTS is effectively
> RTR: "Ready to Receive", not "Request to Send"! There's a big
> difference in intent.]
Well, this meaning of RTS corresponds to old DTR. I think that current practice is clearer than the old one: when I first saw the names of the handshake lines, it was natural for me that when "Request to Send" is asserted it means that the _other_ end should send data...

--
Waldek Hebisch
Reply by October 28, 2016
On Fri, 28 Oct 2016 10:17:43 +0200, David Brown
<david.brown@hesbynett.no> wrote:

>> I believe that the only reasonable thing to do would be to finish sending
>> the character, then wait until CTS is asserted before sending.
>>
>
>Yes, anything else would disrupt the low-level behaviour of the UART,
>which is designed to transfer in units of one character. If the
>transmitter broke off in the middle of the character, a receiver would
>see the start of the character followed by high bits.
>
>I also think it is odd to see something using hardware flow control in
>modern devices. Hardware flow control can be a real pain when you have
>buffers, FIFOs, DMA, etc. Flow control is usually handled at a higher
>protocol level now. Rather than using CTS/RTS, or XON/XOFF, you define
>it at a higher level such as replying to telegrams with NACK's. Or you
>just note that since your UART is running at perhaps 115kbps or less,
>and your microcontrollers are running at 50 MHz or more, you don't need
>any kind of flow control - each side is ready all the time.
>
The last time I used true RTS/CTS handshaking was with a serial matrix printer a few decades ago.

The CTS pin is sometimes used to control the Data Direction pin on an RS-485 two wire half duplex transceiver. Unfortunately, the garden variety 16550 family UARTs turn off the pin when the last character is moved into the transmit shift register, _not_ when the last stop bit is actually transmitted from the UART.
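It is easy to put a number on the timing error described above: if "done transmitting" is signalled when the holding register empties rather than when the last stop bit leaves the shift register, the RS-485 transceiver is turned around one full character time too early. A sketch, assuming standard asynchronous framing (the function and figures are illustrative):

```python
# Back-of-envelope: how early the direction line drops if "done" means
# "holding register empty" instead of "shift register empty".
def char_time_s(baud, data_bits=8, parity=False, stop_bits=1):
    """Wire time of one asynchronous character (start + data [+ parity] + stop)."""
    bits = 1 + data_bits + (1 if parity else 0) + stop_bits
    return bits / baud

# 9600 baud, 8N1: 10 bit times per character still owed on the wire.
early_ms = char_time_s(9600) * 1000
print(round(early_ms, 2), "ms")   # about 1.04 ms too early at 9600 baud
```

At higher baud rates the window shrinks, but it is always exactly one character time, which is why drivers poll for a "transmitter empty" status bit before releasing the bus.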
Reply by Hul Tytus October 28, 2016
Dave - as someone mentioned here, there are all sorts of ways of
using the RS-232 UARTs. I am currently debugging a file interchange routine
which is causing all sorts of trouble, but it is intended to follow the
following form, which is a somewhat common setup.
   Two machines are interconnected via a 9 pin cable with the receive and
transmit lines switched midway, a la a "null modem". The RTS and CTS lines
are switched in the same fashion.
   When either machine is willing to accept data, it raises RTS, which
causes CTS to be raised at the other machine. When the other machine wants
to send data, it checks to see that CTS is high and, if so, transmits as long
as CTS is high.
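A minimal model of the scheme just described: the receiver holds RTS high while it has buffer room, and the sender checks (what it sees as) CTS before each character. The class, buffer size, and helper are made up purely for illustration.

```python
# Toy model of RTR-style flow control over a null-modem link (illustrative).
class Receiver:
    def __init__(self, capacity):
        self.buf = []
        self.capacity = capacity

    @property
    def rts(self):                     # appears as CTS at the far end
        return len(self.buf) < self.capacity

    def accept(self, ch):
        self.buf.append(ch)

def send(data, rx):
    """Transmit characters only while the far end's RTS (our CTS) is high."""
    sent = 0
    for ch in data:
        if not rx.rts:                 # CTS low: pause, don't drop data
            break
        rx.accept(ch)
        sent += 1
    return sent

rx = Receiver(capacity=3)
n = send("hello", rx)
print(n, "".join(rx.buf))   # 3 hel -- sender paused after three characters
```

Once the receiver drains its buffer, RTS rises again and the sender may resume where it left off; nothing is lost, only delayed.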

Hul
  



Dave Nadler <drn@nadler.com> wrote:
> Hi all - Perhaps some of you have encountered this?
> The post linked below shows the Kinetis UART continues to transmit after
> the receiver has de-asserted CTS:
> https://community.nxp.com/thread/432154
> Freescale engineers claim their part is OK and it's the other guys' fault...
> Here's my question: As this is an asynchronous link, the receiver can de-assert CTS at any time. What is the transmitter supposed to do? For example, suppose CTS is de-asserted in the middle of transmitting a character: Must the transmitter immediately stop, and restart transmitting the character from the start bit when CTS is again asserted (which in turn introduces a race condition around the end of character transmission)? Or finish the current character transmission and then pause (another race condition for "current character")? Is there any specification on how this should be handled? Or even any consensus?
> Thanks in advance,
> Best Regards, Dave
> PS: I've always used a protocol layer that corrects dropped characters so never had this impact an application, but using some parts like SiLabs bluegiga for SPP don't give that option (at the layer of controlling the Bluetooth device, or supporting a non-protocol client expecting a simple ASCII data stream). Even worse, these things drop status messages into the (application data) stream with no consistent header/delimiter - good luck catching all possible Bluetooth events.
Reply by Don Y October 28, 2016
On 10/28/2016 7:23 AM, Grant Edwards wrote:
> On 2016-10-28, David Brown <david.brown@hesbynett.no> wrote:
>>
>>> I believe that the only reasonable thing to do would be to finish sending
>>> the character, then wait until CTS is asserted before sending.
>>
>> Yes, anything else would disrupt the low-level behaviour of the
>> UART, which is designed to transfer in units of one character. If
>> the transmitter broke off in the middle of the character, a receiver
>> would see the start of the character followed by high bits.
>
> Which is an entirely valid character assuming no parity. It has a 50%
> chance of being a valid character with parity enabled.
Note that there is also the possibility of the transmitter being resumed/restarted before the aborted character *time* has expired.
> Stopping the transmitter in mid-character is simply 100% broken.
+42

There's no way this can be useful unless you were operating in simplex or in an interlocked protocol (like the *original* use of RTS/CTS; not the current RTR/CTS interpretation). How else would the receiver know when it is "appropriate" to alter the state of this control signal?

[Yet another case where most documentation is incomplete. Where's the figure indicating how much (time) BEFORE the start bit the CTS signal is examined -- to go/no-go the transmission of THIS character?]
>> I also think it is odd to see something using hardware flow control in
>> modern devices. Hardware flow control can be a real pain when you have
>> buffers, FIFOs, DMA, etc.
>
> Why? If hardware flow control is implemented properly (which means
> it's in HW), then it's completely transparent regardless of buffers,
> FIFOs, DMA, etc. Trying to implement hardware flow control in
> software is usually futile.
It's no harder to implement flow control in software than it is to handle the receipt of characters! The EXACT SAME timeliness issues apply: if you can get around to *noticing* that you've received more characters than you can buffer BEFORE you actually lose characters, then you obviously can toggle a DI/O to inform the remote transmitter of this.

The issue is deciding how much "slack" you need in your buffer to accommodate variations in Rx ISR latency -- how LATE are you to notice the receipt of a character (and how deep is the hardware FIFO in the UART to tolerate that lateness without loss of data)? And, at the same time, how late do you expect the transmitting device to be in noticing the change in the state of the incoming pacing signal?

If you're feeding the transmitter from an ISR (not DMA), then you add a test to check the state of the line before stuffing the next character into the transmitter. The only real "complication" comes from having to now monitor the handshaking signal to determine when it is appropriate to RESUME the transmitter ISR ('cuz the transmitter is now essentially "ready, but disabled" -- unable to generate a new "event" to signal the loading of the next character).
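The "slack" sizing described above reduces to simple arithmetic: count how many characters can still land between deciding to drop the pacing line and the far end actually stopping. A sketch with illustrative numbers (10 bits per character assumes 8N1 framing):

```python
# Rough sizing of receive-buffer headroom for software flow control.
import math

def slack_chars(baud, latency_s, bits_per_char=10):
    """Characters that can arrive during the combined latency: our Rx ISR
    lateness plus the remote side's reaction time to the de-asserted line."""
    chars_per_s = baud / bits_per_char
    return math.ceil(chars_per_s * latency_s)

# 115200 baud, 8N1, 1 ms combined latency:
print(slack_chars(115200, 1e-3))   # 12 characters of headroom needed
```

So at 115200 baud a millisecond of combined lateness costs about a dozen characters of buffer; at 9600 baud the same lateness costs only one, which is why flow control mattered so much less on slow links.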