Why should I (not) use an internal oscillator for 8-bit micros

Started by Schwob August 14, 2004
"Alan" <me@somewhere.com.au> wrote in message
news:e491i0had0r3vthg7vbnknmr588fojbnsd@4ax.com...
> On Mon, 16 Aug 2004 13:06:48 +0200, "David Brown"
> <david@no.westcontrol.spam.com> wrote:
>
> >> Async uses the "start" bit of each byte to tell the receiver to start
> >> timing to look at the bits of this one byte only.
> >>
> >> The term Asynchronous here means that the sender can send data at
> >> any time without having to worry about whether or not the receiver is
> >> in sync.
> >
> >Not quite - the term "asynchronous" here means "not synchronous" - i.e., the
> >opposite of your correct definition of "synchronous". The receiver is
> >*never* in sync with the sender in async communication, since it does not
> >have a clock signal on which to synchronize.
>
> Perhaps I could have explained it better. But the point is that the
> async receiver uses the leading edge of the start bit to trigger its
> own internal timing mechanism, which should produce sampling at the
> correct time for the incoming data. It is not, as you say, in sync
> with the incoming data, as it doesn't have a clock to sync to. However,
> the internal sampling clock needs to be less than 5% different from
> the clock that produced the data to reliably decode the data.
>
> This is always presuming that the receiving UART (or software) has
> been designed properly to sample in the middle of the data bit (in the
> case of a single-sample-per-bit UART) or at the correct times for a
> multi-sample-per-bit UART.
>
> In fact, multi-sample-per-bit UARTs "could" make the tolerance
> situation worse!
The only scheme I know of for multi-sample receivers is to take 3 samples in the middle of the bit (which is typically divided into 16 sub-bit time slots), and use majority voting to get the result. This shouldn't affect the tolerance directly, as far as I can see - the middle sample is going to fall exactly half-way within the nominal bit, since samples are taken *between* time slots. Having 16-times oversampling will add another +/-(1/16) bit time to the total error, which I suppose should also be taken into account. Certainly for 4-times oversampling receivers it would make a significant difference, reducing your total error margin to 25% of a bit time, thus requiring about a 2.5% match between the sender and the receiver.
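To put rough numbers on that, here is a minimal sketch (my own back-of-envelope illustration, not from any datasheet) of the tolerance budget: the start-edge quantization of 1/N bit comes straight off the half-bit margin, and whatever is left has to absorb all the mismatch accumulated over the roughly 9.5 bit times from the start edge to the middle of the stop bit of an 8N1 frame.

#include <stdio.h>

/*
 * Illustrative tolerance budget for an async receiver that
 * oversamples at N times the baud rate, assuming an 8N1 frame:
 * the last sample lands in the middle of the stop bit, 9.5 bit
 * times after the start edge. The start-edge quantization of
 * 1/N bit is subtracted from the half-bit margin, and what is
 * left must absorb the accumulated clock mismatch.
 */
static double total_mismatch_percent(int oversample)
{
    double margin_bits = 0.5 - 1.0 / oversample; /* half a bit minus edge quantization */
    double span_bits = 9.5;                      /* start edge to middle of stop bit */
    return 100.0 * margin_bits / span_bits;
}

int main(void)
{
    printf("16x oversampling: +/-%.2f%% total mismatch\n", total_mismatch_percent(16));
    printf(" 4x oversampling: +/-%.2f%% total mismatch\n", total_mismatch_percent(4));
    return 0;
}

Run as-is it prints roughly 4.6% for 16x and 2.6% for 4x, which lines up with the 5% and 2.5% figures above.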
> There is also a third type of synchronous data where the clock is not
> sent but the receiver and the transmitter have to have accurate clocks
> which are synchronised by a preamble only.
Do you mean when the receiver's clock speed is adjusted to match a preamble (typically a 010101 pattern)? As far as I know, that is used for LIN communication, which is basically standard UART except that a preamble is used to compensate for clocks with greater than 5% mismatch (i.e., LIN slaves are typically cheapo devices with internal oscillators). There are plenty of other schemes for adjustments - CAN controllers adjust their sub-sampling clock on each bit, to avoid the error building up too much over an 80-bit frame.

David
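For the curious, the general idea of preamble-based auto-baud can be sketched in a few lines of C. Everything here is illustrative: the LIN sync byte is 0x55, which when sent LSB-first behind a start bit puts five falling edges on the wire spaced two bit times apart, and capture_falling_edge() is a hypothetical stand-in for whatever timer-capture hardware the part actually has.

#include <stdint.h>

/*
 * Sketch of preamble-based auto-baud as a LIN slave might do it.
 * The span from the first falling edge (start bit) to the fifth
 * falling edge of the 0x55 sync byte is exactly 8 bit times.
 * capture_falling_edge() is assumed to block until the next
 * falling edge and return the free-running timer count latched
 * at that instant.
 */
extern uint16_t capture_falling_edge(void);

uint16_t autobaud_bit_time(void)
{
    uint16_t first = capture_falling_edge();  /* start-bit edge */
    uint16_t last = first;
    uint8_t i;

    for (i = 0; i < 4; i++)                   /* walk to the fifth falling edge */
        last = capture_falling_edge();

    /* last - first spans 8 bit times of timer ticks; one bit time: */
    return (uint16_t)((last - first) / 8u);
}

The result can then be loaded into the local baud generator, which is how a slave with a sloppy internal oscillator can still hit the master's rate.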
> >That is, of course, correct - I'm slightly stunned that there are people
> >working in this field who apparently fail to grasp that. Hopefully,
> >"apparently" is the operative word, and that it is merely the wording of
> >their posts that is ambiguous, rather than their understanding.
>
> It's unfortunate that there appear to be a (large) number of people
> out there who don't seem to know the basics of data transmission and
> end up writing code that produces wrong baud rates - especially when it
> comes to bit-banging. I always try to get as close to 0% tolerance as
> possible with baud rates to cater for all the funny ones.
>
> Alan
>
> ++++++++++++++++++++++++++++++++++++++++++
> Jenal Communications
> Manufacturers and Suppliers of HF Selcall
> P O Box 1108, Morley, WA, 6943
> Tel: +61 8 9370 5533  Fax: +61 8 9467 6146
> Web Site: http://www.jenal.com
> e-mail: http://www.jenal.com/?p=1
> ++++++++++++++++++++++++++++++++++++++++++
"David Brown" <david@no.westcontrol.spam.com> wrote in message 
news:cfq2tv$pht$1@news.netpower.no...
>> Um... well, originally the question was the tolerance of the *CRYSTAL*
>> inside of microprocessors, not the "bit time" error:
>
> What difference does that make? If an oscillator is 2% out, then the bit
> time on a UART based on that oscillator is 2% out.
Very true. My only point was that everything that comes between the start and stop bits is susceptible to cumulative error, since there is no (hehehe) sync point between data bits. The greater the baud rate divisor, the more cumulative the error becomes.
>> But the error is *ADDITIVE* from a per-bit basis until the next start
>> bit,
>
> Exactly - your 5% error adds up to 50% (or 47.5%, to be exact) error over
> the ten bits transmitted. But this is completely independent of the bit
> time - it is a relative error.
Let me try to explain clearly what I'm talking about:

A 1.8432 MHz clock comes in to the 16550 and is internally divided down (by 16) to 115200 to generate the master clock. From there you can specify a divisor of 1-65535. To get a 2400 bps baud clock, for example, you need to program the divisor to 48 (115200/2400). That means for 1 cycle of the 2400 bps baud clock, you are incurring the cumulative error of 48 cycles of the master clock, multiplied by 8 bits (or however many you're sending). At 2400 bps in this example, there are 48*8=384 master clock cycles of cumulative error per 8 bits of data transferred. At 57600 in this example, there are 2*8=16 master clock cycles of cumulative error. For something like 75 baud, that's 1536 master clock cycles per bit, or 1536*8=12288 master clock cycles of cumulative error. It gets much, much worse as the divisor increases (and the baud rate lowers).
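As a point of reference (my own aside, not part of Neil's post), the divisor arithmetic he is describing comes from the standard 16550 relationship baud = clock / (16 * divisor), and with the usual 1.8432 MHz crystal the common rates divide out exactly:

#include <stdio.h>

/*
 * Standard 16550 divisor arithmetic: baud = clock / (16 * divisor).
 * With the usual 1.8432 MHz crystal the common rates divide out
 * exactly, so the only rate error left is the oscillator's own
 * tolerance - the divisor itself adds nothing.
 */
int main(void)
{
    const double clock_hz = 1843200.0;
    const int bauds[] = { 75, 2400, 57600, 115200 };
    size_t i;

    for (i = 0; i < sizeof bauds / sizeof bauds[0]; i++) {
        int divisor = (int)(clock_hz / (16.0 * bauds[i]) + 0.5); /* nearest integer */
        double actual = clock_hz / (16.0 * divisor);
        printf("%6d baud: divisor %4d, actual %9.2f baud\n",
               bauds[i], divisor, actual);
    }
    return 0;
}

This prints divisors of 1536, 48, 2 and 1 respectively, each landing exactly on the nominal rate.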
>> Yup, it got worse as the baud rate got lower. See my other response for a
>> working example.
>
> Are you suggesting that your microcontroller's UART can divide its clock
> by a small number without problem, but fails to divide accurately by a
> larger number?
I'm saying the baud clock becomes more sensitive to cumulative error as the divisor increases.
> I think you can be pretty confident that there is some other
> problem, such as incorrectly setting the divisor bits.
Or a source clock that is far enough out of tolerance that the baud rates can't be used (see above).
> I too have had to
> change crystals to get low baud rates, but that is merely because the UART
> in question (on an avr8515, IIRC) did not have enough bits in the baud
> rate divisor to reach down to 300 baud from an 8 MHz crystal.
I've had to change crystals due to lousy tolerance AND because of flat out incorrect rates.
>> You're not taking into account that asynchronous communication *HAS* a
>> synchronization method - a start bit!
>
> Does that mean you think gigabit ethernet needs 0.5 ppm crystals and 600
> baud modems can run with a +/-40% tolerance crystal, or does it mean you
> agree with me (and the rest of the world - at least, the tiny part that
> cares :-) that the actual rate is irrelevant when discussing the
> percentage error?
Well, I don't know how ethernet works, so I can't answer that. ;-) I'll say yes, the rate is irrelevant when discussing percentage of error, only if it's clocked with no divisor of any sort.

-->Neil
Neil Bradley wrote:
<snip>
> The greater the baud rate divisor, the more cumulative the error becomes.
In absolute time, yes; in percentage of BAUD, NO.
> Let me try to explain clearly what I'm talking about:
>
> 1.8432 MHz clock comes in to 16550, internally divided down to 115200 (by 16)
> to generate the master clock. From there you can specify a divisor of
> 1-65535. To get a 2400 bps baud clock for example, you need to program up the
> divisor to 48 (115200/2400). That means for 1 cycle of the 2400 bps baud
> clock, you are incurring the cumulative error of 48 cycles of the master
> clock, multiplied by 8 bits (or however many you're sending). At 2400 bps in
> this example, there are 48*8=384 master clock cycles of cumulative error per
> 8 bits of data transferred. At 57600 in this example, there are 2*8=16
> master clock cycles of cumulative error. For something like 75 baud, that's
> 1536 master clock cycles per bit, or 1536*8=12288 master clock cycles of
> cumulative error. Gets much, much worse as the divisor increases (and the
> baud rate lowers).
This is what we call 'bass ackwards' thinking.

The UART does NOT care (or even know) how many master clocks it takes. ALL the UART sees is the BAUD clock, commonly 1/16 the bit time. The UART state engine starts sampling on the nearest 1/16 bit time to the START edge, and thereafter follows the BAUD clock, with the necessary half-bit shift to get centre sampling. If all goes well, the stop bit arrives within the correct sampling window, and a valid byte is flagged as received.

-jg
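A minimal sketch of the state engine Jim describes, for the common 16x-clock case. This is my own illustration of the scheme, not any particular UART's logic: sample_rx(), byte_received() and framing_error() are hypothetical hooks, and uart_rx_tick() is assumed to be called once per 16x-clock tick.

#include <stdint.h>
#include <stdbool.h>

extern bool sample_rx(void);            /* read the RX pin */
extern void byte_received(uint8_t b);   /* deliver a completed byte */
extern void framing_error(void);        /* stop bit was missing */

enum rx_state { RX_IDLE, RX_START, RX_DATA, RX_STOP };

static enum rx_state state = RX_IDLE;
static uint8_t tick;     /* 0..15 position within the current bit */
static uint8_t bit_num;  /* which data bit is being assembled */
static uint8_t shifter;  /* received bits, LSB first */

void uart_rx_tick(void)
{
    bool rx = sample_rx();

    switch (state) {
    case RX_IDLE:
        if (!rx) {                    /* falling edge: a start bit begins */
            state = RX_START;
            tick = 0;
        }
        break;
    case RX_START:
        if (++tick == 8) {            /* half a bit later: centre of start bit */
            if (!rx) {                /* still low, so a real start bit */
                state = RX_DATA;
                tick = 0;
                bit_num = 0;
                shifter = 0;
            } else {
                state = RX_IDLE;      /* glitch - back to idle */
            }
        }
        break;
    case RX_DATA:
        if (++tick == 16) {           /* one bit later: centre of a data bit */
            tick = 0;
            shifter >>= 1;
            if (rx)
                shifter |= 0x80;
            if (++bit_num == 8)
                state = RX_STOP;
        }
        break;
    case RX_STOP:
        if (++tick == 16) {           /* centre of the stop bit */
            if (rx)
                byte_received(shifter);
            else
                framing_error();
            state = RX_IDLE;
        }
        break;
    }
}

Note that the start edge is only ever detected *after* it happens, quantized to the next 16x tick - which is exactly the point Spehro raises below.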
On Tue, 17 Aug 2004 09:56:51 +1200, the renowned Jim Granville
<no.spam@designtools.co.nz> wrote:

> Neil Bradley wrote:
> <snip>
>> The greater the baud rate divisor, the more cumulative the error becomes.
>
> In absolute time, yes, in percentage of BAUD NO.
>
>> <snip>
>
> This is what we call 'bass ackwards' thinking.
> The UART does NOT care (or even know) how many master clocks it
> takes. ALL the UART sees is the BAUD clock, commonly 1/16 the bit time.
> The UART state engine starts sampling on the nearest 1/16 bit time to
> the START edge,
Wouldn't it be the first 1/n bit time *after* the START edge (therefore a maximum error of about 1/n bit time rather than 1/(2*n)), or is the UART presumed to be prescient?
> and thereafter follows the BAUD clock, with the
> necessary half bit shift to get centre sampling.
> If all goes well, the stop bit arrives within the correct sampling
> window, and a valid byte is flagged as received.
>
> -jg
Best regards,
Spehro Pefhany

--
"it's the network..."                          "The Journey is the reward"
speff@interlog.com             Info for manufacturers: http://www.trexon.com
Embedded software/hardware/analog  Info for designers:  http://www.speff.com
Spehro Pefhany wrote:
<snip>
> On Tue, 17 Aug 2004 09:56:51 +1200, the renowned Jim Granville
>
>> This is what we call 'bass ackwards' thinking.
>> The UART does NOT care (or even know) how many master clocks it
>> takes. ALL the UART sees is the BAUD clock, commonly 1/16 the bit time.
>> The UART state engine starts sampling on the nearest 1/16 bit time to
>> the START edge,
>
> Wouldn't it be the first 1/n bit time *after* the START edge
> (therefore a maximum error of about 1/n bit time rather than 1/(2*n)),
> or is the UART presumed to be prescient?
:) - well spotted - when I wrote that, it did occur to me that just maybe someone would consider that 'nearest' might apply to both before and after the START edge. Maybe this thread will now have another lease of life on this?

-jg
"Neil Bradley" <nb_no_spam@synthcom.com> writes:
> No, I meant asynchronous, where things like I2C and SPI have separate clock
> and data lines. The clock lines can vary wildly and it'll still work. That
> won't happen with a UART which requires synchronous clocking to occur - on
> byte boundaries at least.
SPI is a synchronous protocol. This does not imply that the clock has to be free-running or at a fixed frequency. A UART is asynchronous, because it requires no clock to be provided with the data. The clock is generated locally, and resynchronized at the leading edge of the start bit. "Asynchronous" is the "A" in "UART".
"Eric Smith" <eric-no-spam-for-me@brouhaha.com> wrote in message 
news:qhu0v2ir08.fsf@ruckus.brouhaha.com...
> "Neil Bradley" <nb_no_spam@synthcom.com> writes: >> No, I meant asynchronous, where things like I2C and SPI have separate >> clock >> and data lines. The clock lines can vary wildly and it'll still work. >> That >> won't happen with a UART which requires synchronous clocking to occur - >> on >> byte boundaries at least. > SPI is a synchronous protocol. This does not imply that the clock has > to be free-running or at a fixed frequency. > > A UART is asynchronous, because it requires no clock to be provided with > the data. The clock is generated locally, and resynchronized at the > leading edge of the start bit. "Asynchronous" is the "A" in "UART".
Thank you for stating in MUCH BETTER TERMS what I had meant to say originally!

-->Neil
"Jim Granville" <no.spam@designtools.co.nz> wrote in message 
news:x2bUc.3484$zS6.417715@news02.tsnz.net...
>>> The UART does NOT care (or even know) how many master clocks it takes.
>>> ALL the UART sees is the BAUD clock, commonly 1/16 the bit time. The
>>> UART state engine starts sampling on the nearest 1/16 bit time to the
>>> START edge,
>
>> Wouldn't it be the first 1/n bit time *after* the START edge
>> (therefore a maximum error of about 1/n bit time rather than 1/(2*n)),
>> or is the UART presumed to be prescient?
>
> :) - well spotted - when I wrote that, it did occur to me,
> that just maybe, someone would consider that 'nearest' might
> apply to both before and after the START edge.
> Maybe this thread will now have another lease of life on this?
For those interested, here's a software UART I did to communicate with a windspeed/wind direction CPU with an internal clock (using the INT0 input on the CPU). For those who want to vilify me on why I didn't use the built-in UART in the 8051: it was already in use for communicating with a host PC.

void WindBaudRateInterrupt(void) interrupt 1 using 3
{
    if (0 == sg_u8BitCount)
    {
        // Framing error - don't do anything. Restart the serial port state machine
    }
    else if (1 == sg_u8BitCount)    // First bit - let's make sure it's asserted
    {
        if (0 == INT0)
        {
            // It's asserted, let's reschedule the timer!
            TR0 = 0;
            TH0 = FULL_TIME_HIGH;
            TL0 = FULL_TIME_LOW;
            TF0 = 0;
            TR0 = 1;
            sg_u8BitCount = 2;
            sg_u8ByteReceived = 0;
            return;
        }
    }
    else if (sg_u8BitCount != 10)
    {
        TR0 = 0;
        TH0 = FULL_TIME_HIGH;
        TL0 = FULL_TIME_LOW;
        TF0 = 0;
        TR0 = 1;

        // Look at the state - suck up the bit!
        sg_u8ByteReceived >>= 1;
        if (INT0)   // Line high - shift in a 1
        {
            sg_u8ByteReceived |= 0x80;
        }
        else        // Line low - shift in a 0 (the shift above already cleared it)
        {
        }

        ++sg_u8BitCount;
        return;
    }
    else if (10 == sg_u8BitCount)
    {
        // Stop bit - if the line is high the frame is valid, so process the byte!
        if (INT0)
        {
            EX0 = 1;
            TR0 = 0;
            TF0 = 0;
            sg_u8BitCount = 0;
            WindProcessStream(sg_u8ByteReceived);
            return;
        }
    }

    // We've got a framing error - resync
    WindFramingError();
}

-->Neil
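The INT0 edge handler that kicks all this off isn't shown in the post. Presumably it looks something like the sketch below - my guess at the missing piece, not Neil's code: the HALF_TIME_* reload constants and the bit-count seeding are assumptions inferred from the timer ISR above.

// Hypothetical companion edge ISR (NOT from the original post): the
// falling edge of the start bit arms timer 0 with a half-bit reload,
// so every later timer interrupt lands in the middle of a bit.
void WindStartEdgeInterrupt(void) interrupt 0 using 3
{
    EX0 = 0;                // ignore further edges until the frame completes
    TR0 = 0;
    TH0 = HALF_TIME_HIGH;   // half a bit time: next interrupt hits mid-start-bit
    TL0 = HALF_TIME_LOW;
    TF0 = 0;
    TR0 = 1;
    sg_u8BitCount = 1;      // timer ISR will verify the start bit first
}

The half-bit-then-full-bit scheme is exactly the centre-sampling trick Jim describes above, done in software.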
In article <10i25cpqo4omjff@corp.supernews.com>, nb_no_spam@synthcom.com 
says...
> "David Brown" <david@no.westcontrol.spam.com> wrote in message > > Exactly - your 5% error adds up to 50% (or 47.5%, to be exact) error over > > the ten bits transmitted. But this is completly independant of the bit > > time - it is a relative error. > > Let me try to explain clearly what I'm talking about: > > 1.8432Mhz Clock comes in to 16550, internally divided down to 115200 (by 16) > to generate the master clock. From there you can specify a divisor of > 1-65535. To get a 2400bps baud clock for example, you need to program up the > divisor to 48 (115200/2400). That means for 1 cycle of the 2400bps baud > clock, you are incurring the cumulative error of 48 cycles of the master > clock, multiplied by 8 bits (or however many you're sending). At 2400bps in > this example, there are 48*8=384 master clock cycles of cumulative error per > 8 bits of data transferred. At 57600 in this example, there are 2*8=16 > master clock cycles of cumulative error. For something like 75 baud, that's > 1536 master clock cycles per bit, or 1536*8=12288 master clock cycles of > cumulative error. Gets much, much worse as the divisor increases (and the > baud rate lowers).
But if each of those 12288 clocks is 1% fast, then the entire length of the period will be 1% fast. That means the mismatch is 1%, not some larger number. The 12288 is not subject to an error, just the original clock. There is only a problem when the clock error (before division) is large enough to exceed the mismatch tolerance for a single bit over the length of the message.

Robert
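A quick numeric check of Robert's point (my illustration, not his): scale a nominal 1.8432 MHz clock by 1% and divide by any divisor you like - the resulting baud clock is still exactly 1% fast, no matter how large the divisor.

#include <stdio.h>

/*
 * Dividing an oscillator that is 1% fast by any divisor yields a
 * baud clock that is still exactly 1% fast. The divisor scales the
 * absolute period, not the relative error.
 */
int main(void)
{
    const double nominal_hz = 1843200.0;
    const double actual_hz = nominal_hz * 1.01;    /* oscillator 1% fast */
    const int divisors[] = { 1, 2, 48, 1536 };     /* 115200, 57600, 2400, 75 baud */
    size_t i;

    for (i = 0; i < sizeof divisors / sizeof divisors[0]; i++) {
        double nominal_baud = nominal_hz / (16.0 * divisors[i]);
        double actual_baud = actual_hz / (16.0 * divisors[i]);
        printf("divisor %4d: nominal %9.2f baud, actual %9.2f baud, error %+.2f%%\n",
               divisors[i], nominal_baud, actual_baud,
               100.0 * (actual_baud - nominal_baud) / nominal_baud);
    }
    return 0;   /* the error column reads +1.00% on every line */
}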
Neil Bradley wrote:
> "David Brown" <david@no.westcontrol.spam.com> wrote in message > >>> Um... well, originally the question was the tolerance of the >>> *CRYSTAL* inside of microprocessors, not the "bit time" error: > >> What difference does that make? If an oscillator is 2% out, >> the the bit time on a uart based on that oscillator is 2% out. > > Very true. My only point was that everything that comes between > the start and stop bits are susceptible to cumulative error > since there is no (hehehe) sync point between data bits.
Another factor is the error in detecting that initial start bit. If the internal sampling clock is, say, 4x the baud rate, then the error in detecting the start bit proper is up to one of those faster clocks, i.e. up to 25% of a bit time. At the lower baud rates this divisor is usually about 16, for about a 6% error. This reduces the margin available for all the other samples.

--
"The most amazing achievement of the computer software industry is its
continuing cancellation of the steady and staggering gains made by the
computer hardware industry..." - Petroski