Reply by CBFalconer August 21, 2006
"pomerado@hotmail.com" wrote:
> Wolfgang Mahringer wrote:
... snip ...
>> Obviously, you should ensure that your last transmitted
>> bit is not more than a half bit time away.
>>
>> But, believe me, please do yourself a favour and use
>> the correct clock frequency.
>
> That is not my problem. I intend to use the PIC USART as expected.
> My questions were about its capabilities to connect effectively with
> a source I do not control. There appears to be very little technical
> data available on the PIC website about the guts of the USART.
Then you might indulge in some experiments. A worst case would be
some sort of pattern where the last transmitted bit is opposite to
the stop bit. You can connect the PIC USART to a test UART, and
supply that UART with a continuous stream of the worst case pattern.
UARTs generally have an independent clock input pin, so all you need
do is vary that frequency about the nominal until errors appear.
Measure that frequency. That will give you an accurate picture of
the tolerance available, but not the noise rejection.

In fact, you should be able to make the PIC itself generate the test
clock. That will keep it locked to the actual PIC frequency. This,
of course, will require that that clock is considerably lower than
the main PIC clock, which should be no problem.

--
Chuck F (cbfalconer@yahoo.com) (cbfalconer@maineline.net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>  USE maineline address!
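Before building the jig, the same experiment can be dry-run in
software. Below is a minimal C sketch (an idealized model, not the
hardware test: it assumes a receiver that resynchronizes on every
start edge and samples exactly mid-bit, and it ignores the 16x
quantization and majority voting) that sweeps the relative clock
error on a 10-bit frame until a sample lands outside its bit cell:

    /* Sweep transmit/receive clock mismatch on a start + 8 data +
       stop frame; report the first error ratio where the receiver's
       mid-bit sample falls outside the intended bit cell. */
    #include <stdio.h>

    static int frame_ok(double err)
    {
        for (int bit = 0; bit < 10; bit++) {
            /* Receiver's sample instant, measured in transmitter
               bit times; err is the relative clock mismatch. */
            double t = (bit + 0.5) * (1.0 + err);
            if (t < bit || t >= bit + 1)
                return 0;        /* sample missed its bit cell */
        }
        return 1;
    }

    int main(void)
    {
        for (double err = 0.0; err < 0.10; err += 0.001) {
            if (!frame_ok(err)) {
                printf("first failure at %.1f%% clock error\n",
                       err * 100.0);
                return 0;
            }
        }
        return 0;
    }

It reports failure near 5.3%, i.e. the 0.5/9.5 bound when only one
end errs; with both ends contributing, that halves, which is
consistent with the 3% figure quoted elsewhere in this thread.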
Reply by pome...@hotmail.com August 21, 2006
Wolfgang Mahringer wrote:
> Hi,
>
> pomerado@hotmail.com wrote:
>
> > "The data on the RXx pin (either RC7/RX1/DT1 or RG2/
> > RX2/DT2) is sampled three times by a majority detect
> > circuit to determine if a high or a low level is present at
> > the pin."
> >
> > "The data recovery block is actually a high-speed
> > shifter operating at 16 times the baud rate, whereas the
> > main receive serial shifter operates at the bit rate or at
> > FOSC. This mode would typically be used in RS-232
> > systems"
> >
> > That's about it.
>
> Fine. That's all you must know.
>
> Obviously, you should ensure that your last transmitted
> bit is not more than a half bit time away.
>
> But, believe me, please do yourself a favour and use
> the correct clock frequency.
That is not my problem. I intend to use the PIC USART as expected.
My questions were about its capabilities to connect effectively with
a source I do not control. There appears to be very little technical
data available on the PIC website about the guts of the USART.
Reply by mc August 19, 2006
"Wolfgang Mahringer" <yeti201@gmx.at> wrote in message 
news:37FFg.4$zc2.669783@news.salzburg-online.at...

> Obviously, you should ensure that your last transmitted
> bit is not more than a half bit time away.
That implies about a 5% tolerance, at most, right?
> But, believe me, please do yourself a favour and use
> the correct clock frequency.
I agree. You run the risk that the other guy is going to be off by the maximum amount in the opposite direction, and then communication won't work. Realistically, though, as I understand it, all this implies that 1% accuracy of baud rate is fine, and it's usually possible to do a lot better than that.
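For a concrete version of that budget (a back-of-the-envelope
sketch, assuming start + 8 data + 1 stop and a receiver that
resynchronizes at every start edge):

    last sample instant:           9.5 bit times after the start edge
    drift allowed at that sample:  +/- 0.5 bit time
    total frequency budget:        0.5 / 9.5  =  ~5.3%
    split between both ends:       ~2.6% each
    16x start-detect granularity:  costs up to 1/16 bit  (~0.7%)

That is where the usual rule of roughly 2% per end comes from, and
why 1% leaves a comfortable margin for noise and edge distortion.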
Reply by Wolfgang Mahringer August 19, 2006
Hi,

pomerado@hotmail.com wrote:

> "The data on the RXx pin (either RC7/RX1/DT1 or RG2/ > RX2/DT2) is sampled three times by a majority detect > circuit to determine if a high or a low level is present at > the pin." > > "The data recovery block is actually a high-speed > shifter operating at 16 times the baud rate, whereas the > main receive serial shifter operates at the bit rate or at > FOSC. This mode would typically be used in RS-232 > systems" > > That's about it.
Fine. That's all you must know.

Obviously, you should ensure that your last transmitted
bit is not more than a half bit time away.

But, believe me, please do yourself a favour and use
the correct clock frequency.

HTH
Wolfgang
--
From-address is Spam trap
Use: wolfgang (dot) mahringer (at) sbg (dot) at
Reply by pome...@hotmail.com August 19, 2006
Jack Klein wrote:
> On 18 Aug 2006 18:49:12 -0700, "pomerado@hotmail.com"
> <pomerado@hotmail.com> wrote in comp.arch.embedded:
>
> > CBFalconer wrote:
> > > As usual, that depends. The principal factor is the actual clock
> > > and its tolerance. If possible most designers aim for a 16x
> > > clock. This allows a start bit to be confirmed after 8 clocks, at
> > > roughly the middle of a bit period. (If not confirmed, the start
> > > bit is ignored as noise, answering Q1). From then on the UART
> > > samples the line every 16 clock times, resulting in a sample at the
> > > nominal middle of a bit time. If tolerances put that sample time
> > > outside the actual bit time, things will fail. Once the stop bit
> > > is received, again at nominal middle of a bit period, the complete
> > > byte (or whatever) is confirmed and shipped and the system rearms
> > > to hunt a new start bit.
> >
> > I understand the process; I am looking for details on the timing
> > (which sample clocks are critical, what error bits are set for which
> > condition, etc.)
> >
> > The little quote from the data sheet says majority of three samples,
> > but doesn't say which 3.
>
> Every 16x clock UART I've ever seen that included that spec on the
> data sheet sampled on the 7th, 8th, and 9th clock edge, the nominal
> bit center and one clock on either side. The majority logic provides
> some noise immunity against noise signals less than 1 to 1.5 of the
> 16x clock cycles.
>
> > > Note that any even clock divisor automatically puts a bias on the
> > > sampling time.
> > >
> > > Draw a few timing diagrams showing start detection delay and clock
> > > tolerance effects and all will rapidly be clear. IIRC the net
> > > effect is that a 3% 16x clock tolerance is likely to be fatal to
> > > the normal start + 8 bits + 1 stop bit configuration. Remember
> > > both ends can have tolerances, so effects are doubled.
> >
> > I'm looking at a spec that always sends a Break (more 0's than a legal
> > byte) followed by a 55 to "train" the clock frequency.
>
> That's a little unusual. Usually auto baud detection routines start
> with an idle line, and require that the first character sent have an
> odd numeric value (such as ASCII carriage return, 13 decimal, 0D hex).
>
> Firmware monitors the pin and counts the number of clock cycles of the
> one low (at the logic level on the pin) start bit between the high of
> the idle line and the high of the least significant data bit.
>
> But the principle's the same as the way you describe.
It's LIN, the Local Interconnect Network. The spec is available on
the consortium website: http://www.lin-subbus.org/
Reply by Jack Klein August 19, 2006
On 18 Aug 2006 18:49:12 -0700, "pomerado@hotmail.com"
<pomerado@hotmail.com> wrote in comp.arch.embedded:

> CBFalconer wrote:
> > As usual, that depends. The principal factor is the actual clock
> > and its tolerance. If possible most designers aim for a 16x
> > clock. This allows a start bit to be confirmed after 8 clocks, at
> > roughly the middle of a bit period. (If not confirmed, the start
> > bit is ignored as noise, answering Q1). From then on the UART
> > samples the line every 16 clock times, resulting in a sample at the
> > nominal middle of a bit time. If tolerances put that sample time
> > outside the actual bit time, things will fail. Once the stop bit
> > is received, again at nominal middle of a bit period, the complete
> > byte (or whatever) is confirmed and shipped and the system rearms
> > to hunt a new start bit.
>
> I understand the process; I am looking for details on the timing
> (which sample clocks are critical, what error bits are set for which
> condition, etc.)
>
> The little quote from the data sheet says majority of three samples,
> but doesn't say which 3.
Every 16x clock UART I've ever seen that included that spec on the data sheet sampled on the 7th, 8th, and 9th clock edge, the nominal bit center and one clock on either side. The majority logic provides some noise immunity against noise signals less than 1 to 1.5 of the 16x clock cycles.
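The vote itself is trivial logic. Here is a minimal C sketch of a
majority-of-three sampler built on the tick-7/8/9 placement Jack
describes (his observation about typical UARTs, not a
Microchip-published figure):

    #include <stdint.h>

    typedef struct {
        uint8_t tick;   /* 0..15 position within the current bit cell */
        uint8_t votes;  /* how many of the three samples read high */
    } rx_bit_t;         /* zero-initialize before the first bit */

    /* Call once per 16x clock with the current RX pin level.
       Returns -1 while the bit cell is in progress, else the
       decided bit value (0 or 1). */
    static int rx_bit_clock(rx_bit_t *s, int pin_level)
    {
        if (s->tick == 7 || s->tick == 8 || s->tick == 9)
            s->votes += (pin_level != 0);

        if (++s->tick == 16) {          /* bit cell complete */
            int bit = (s->votes >= 2);  /* majority of three */
            s->tick = 0;
            s->votes = 0;
            return bit;
        }
        return -1;
    }

A glitch shorter than one 16x period can corrupt at most one of the
three samples, which matches the noise immunity described above.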
> > Note that any even clock divisor automatically puts a bias on the
> > sampling time.
> >
> > Draw a few timing diagrams showing start detection delay and clock
> > tolerance effects and all will rapidly be clear. IIRC the net
> > effect is that a 3% 16x clock tolerance is likely to be fatal to
> > the normal start + 8 bits + 1 stop bit configuration. Remember
> > both ends can have tolerances, so effects are doubled.
>
> I'm looking at a spec that always sends a Break (more 0's than a legal
> byte) followed by a 55 to "train" the clock frequency.
That's a little unusual. Usually auto baud detection routines start
with an idle line, and require that the first character sent have an
odd numeric value (such as ASCII carriage return, 13 decimal, 0D hex).

Firmware monitors the pin and counts the number of clock cycles of the
one low (at the logic level on the pin) start bit between the high of
the idle line and the high of the least significant data bit.

But the principle's the same as the way you describe.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://c-faq.com/
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++
http://www.contrib.andrew.cmu.edu/~ajo/docs/FAQ-acllc.html
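A sketch of that start-bit timing routine in C, with pin and timer
access left as placeholder externs (hypothetical names, not a real
PIC API). It relies on the first character having an odd value,
e.g. CR (0x0D), so the rising edge of the LSB=1 data bit closes the
low pulse at exactly one bit time:

    #include <stdint.h>

    extern int      rx_pin(void);      /* placeholder: RX pin level */
    extern uint32_t timer_ticks(void); /* placeholder: free-running timer */

    uint32_t measure_bit_time(void)
    {
        while (!rx_pin()) ;            /* wait for the idle-high line */
        while (rx_pin())  ;            /* falling edge: start bit begins */
        uint32_t t0 = timer_ticks();
        while (!rx_pin()) ;            /* rising edge: LSB=1 data bit */
        return timer_ticks() - t0;     /* one bit time, in timer ticks */
    }

If the timer runs at the UART's reference clock, dividing the result
by 16 would give a 16x baud divisor directly.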
Reply by pome...@hotmail.com August 18, 2006
CBFalconer wrote:
> As usual, that depends. The principal factor is the actual clock
> and its tolerance. If possible most designers aim for a 16x
> clock. This allows a start bit to be confirmed after 8 clocks, at
> roughly the middle of a bit period. (If not confirmed, the start
> bit is ignored as noise, answering Q1). From then on the UART
> samples the line every 16 clock times, resulting in a sample at the
> nominal middle of a bit time. If tolerances put that sample time
> outside the actual bit time, things will fail. Once the stop bit
> is received, again at nominal middle of a bit period, the complete
> byte (or whatever) is confirmed and shipped and the system rearms
> to hunt a new start bit.
I understand the process; I am looking for details on the timing
(which sample clocks are critical, what error bits are set for which
condition, etc.)

The little quote from the data sheet says majority of three samples,
but doesn't say which 3.
> Note that any even clock divisor automatically puts a bias on the
> sampling time.
>
> Draw a few timing diagrams showing start detection delay and clock
> tolerance effects and all will rapidly be clear. IIRC the net
> effect is that a 3% 16x clock tolerance is likely to be fatal to
> the normal start + 8 bits + 1 stop bit configuration. Remember
> both ends can have tolerances, so effects are doubled.
I'm looking at a spec that always sends a Break (more 0's than a legal byte) followed by a 55 to "train" the clock frequency.
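That is the LIN sync pattern. On the wire, 0x55 is start(0), 1, 0,
1, 0, 1, 0, 1, 0, stop(1): five falling edges, each two bit times
apart, so the span from the first to the fifth is exactly eight bit
times. A sketch of the measurement using a placeholder edge-capture
routine (not a specific PIC peripheral):

    #include <stdint.h>

    /* Placeholder: blocks until the next falling edge on RX and
       returns the free-running timer count at that edge. */
    extern uint32_t capture_falling_edge(void);

    uint32_t lin_sync_bit_time(void)
    {
        uint32_t first = capture_falling_edge();  /* start-bit edge */
        uint32_t last = first;
        for (int i = 0; i < 4; i++)               /* four more edges */
            last = capture_falling_edge();
        return (last - first) / 8;                /* one bit time */
    }

Measuring falling edge to falling edge means threshold and slew
asymmetries largely cancel, which is presumably why the spec trains
on 0x55 rather than on a lone start bit.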
Reply by CBFalconer August 18, 2006
"pomerado@hotmail.com" wrote:
>
> I am looking at a design which uses a PIC18F8520 whose USART is
> connected to a PC serial port running at a fairly high data rate.
> I am also looking at the design of a serial port instantiated in
> an FPGA connected to the same data stream in an RS-485 network.
> Having to deal with the internal details of the FPGA "USART" makes
> me wonder about the internal details of the PIC USART. For
> instance:
>
> How long or short can or must the start bit be to be accepted?
>
> Where in a bit time and for how many samples does the USART
> compare to determine whether a bit is 1 or 0?
>
> How much tolerance does the PIC USART have to bit-timing
> discrepancies on the RX side?
>
> Can anyone provide any answers to these questions, or point to a
> good document or FAQ that deals with them?
As usual, that depends. The principal factor is the actual clock
and its tolerance. If possible most designers aim for a 16x
clock. This allows a start bit to be confirmed after 8 clocks, at
roughly the middle of a bit period. (If not confirmed, the start
bit is ignored as noise, answering Q1). From then on the UART
samples the line every 16 clock times, resulting in a sample at the
nominal middle of a bit time. If tolerances put that sample time
outside the actual bit time, things will fail. Once the stop bit
is received, again at nominal middle of a bit period, the complete
byte (or whatever) is confirmed and shipped and the system rearms
to hunt a new start bit.

Note that any even clock divisor automatically puts a bias on the
sampling time.

Draw a few timing diagrams showing start detection delay and clock
tolerance effects and all will rapidly be clear. IIRC the net
effect is that a 3% 16x clock tolerance is likely to be fatal to
the normal start + 8 bits + 1 stop bit configuration. Remember
both ends can have tolerances, so effects are doubled.

--
"The power of the Executive to cast a man into prison without
formulating any charge known to the law, and particularly to deny
him the judgement of his peers, is in the highest degree odious and
is the foundation of all totalitarian government whether Nazi or
Communist." -- W. Churchill, Nov 21, 1943
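That sequence maps directly onto a small state machine clocked at
16x the baud rate. A generic C sketch of the scheme described above
(a model of the usual design, not the PIC's actual logic; the
majority voting discussed elsewhere in the thread is omitted for
brevity):

    #include <stdint.h>

    enum { HUNT, START_CHECK, DATA, STOP };

    typedef struct {
        int     state;   /* one of the states above */
        int     count;   /* 16x clocks elapsed in this state */
        int     bits;    /* data bits received so far */
        uint8_t shift;   /* receive shift register, LSB first */
    } uart_rx_t;

    /* Feed one RX sample per 16x clock.  Returns the received byte
       (0..255), -1 if no byte is complete yet, -2 on framing error. */
    int uart_rx_clock(uart_rx_t *u, int rx)
    {
        switch (u->state) {
        case HUNT:                   /* idle: look for a falling edge */
            if (rx == 0) { u->state = START_CHECK; u->count = 0; }
            break;
        case START_CHECK:            /* confirm start at mid-bit */
            if (++u->count == 8) {
                if (rx == 0) {
                    u->state = DATA; u->count = 0;
                    u->bits = 0; u->shift = 0;
                } else {
                    u->state = HUNT; /* noise: ignore (Q1) */
                }
            }
            break;
        case DATA:                   /* sample every 16 clocks */
            if (++u->count == 16) {
                u->count = 0;
                u->shift = (uint8_t)((u->shift >> 1) | (rx ? 0x80 : 0));
                if (++u->bits == 8) u->state = STOP;
            }
            break;
        case STOP:                   /* stop bit must sample high */
            if (++u->count == 16) {
                u->state = HUNT;     /* rearm to hunt a new start bit */
                return rx ? u->shift : -2;
            }
            break;
        }
        return -1;
    }

Note the built-in bias: the start edge is seen up to one 16x period
late, and with an even divisor there is no exact-center sample, which
is the sampling-time bias mentioned above.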
Reply by pome...@hotmail.com August 18, 2006
Wolfgang Mahringer wrote:
> Hi,
>
> pomerado@hotmail.com wrote:
> > Where in a bit time and for how many samples does the USART compare to
> > determine whether a bit is 1 or 0?
> >
> > How much tolerance does the PIC USART have to bit-timing discrepancies
> > on the RX side?
> >
> > Can anyone provide any answers to these questions, or point to a good
> > document or FAQ that deals with them?
>
> Take a look into your micro's datasheet.
> It is all in there.
> www.microchip.com
I had the datasheet downloaded and open when I posted this.
http://ww1.microchip.com/downloads/en/DeviceDoc/39609b.pdf

I haven't found much useful there:

"The data on the RXx pin (either RC7/RX1/DT1 or RG2/
RX2/DT2) is sampled three times by a majority detect
circuit to determine if a high or a low level is present at
the pin."

"The data recovery block is actually a high-speed
shifter operating at 16 times the baud rate, whereas the
main receive serial shifter operates at the bit rate or at
FOSC. This mode would typically be used in RS-232
systems"

That's about it. And the information in the timing diagrams is
limited to setup and hold times on the I/O pins.
Reply by Wolfgang Mahringer August 18, 2006
Hi,

pomerado@hotmail.com wrote:
> Where in a bit time and for how many samples does the USART compare to
> determine whether a bit is 1 or 0?
>
> How much tolerance does the PIC USART have to bit-timing discrepancies
> on the RX side?
>
> Can anyone provide any answers to these questions, or point to a good
> document or FAQ that deals with them?
Take a look into your micro's datasheet.
It is all in there.
www.microchip.com

HTH
Wolfgang
--
From-address is Spam trap
Use: wolfgang (dot) mahringer (at) sbg (dot) at