EmbeddedRelated.com
Forums

Why should I (not) use an internal oscillator for 8-bit micros

Started by Schwob August 14, 2004
"FLY135" <fly_135(@ hot not not)notmail.com> wrote in message 
news:TFtUc.24919$nx2.3491@newsread2.news.atl.earthlink.net...
> "Neil Bradley" <nb_no_spam@synthcom.com> wrote in message > news:10i4f5eh141ob9e@corp.supernews.com... >> And the more I post, the more people pigenhole me and look for things >> in my statements and write me off as an idiot, insult me, or look for >> ways >> to shift the conversation to continue to make me look wrong about > I think the problem was that you bastardized the synchronous/asynchronous > terminology and then tried to defend it instead of just going... whoops, I > fouled that up.
Like I said before, translation was lost between what I was thinking and what I actually posted. I was NOT trying to imply that a UART is synchronous communication, only that for the period of a byte transmission the two ends have to be at least semi-synchronized. Note that this is different from synchronous (clock based) communication. I'm perfectly aware of that, and I stated as such many, many times. Not sure why people aren't realizing that.
> A UART doesn't use "synchronous clocking", it uses clocks
> that are within a frequency tolerance in order to communicate.
I never once said that a UART used synchronous clocking. My original statement meant to say that in a synchronous transport (like I2C or SPI), the clocking can be as sloppy as it needs to be and it won't matter. -->Neil
"Paul Carpenter" <paul$@pcserv.demon.co.uk> wrote in message 
news:20040817.1842.302217snz@pcserv.demon.co.uk...
>>>> Very true. My only point was that everything that comes between the
>>>> start and stop bits is susceptible to cumulative error since there is no
>>> (hehehe)
>>>> sync point between data bits.
> All UARTs (or other similar multi-mode devices) that use n-bit start/stop
> byte transfers use a 16 * clock and have done for decades. They could use
> an 8 or 4 times clock, but as with everything, more accuracy comes from a
> higher multiple for sampling frequency. See later.
Sure, effectively oversampling/voting as (I think) David had mentioned.
> Once a start bit edge is detected, 8 counts of the clock are used to then
> sample in the middle of the bit to allow for slew and other characteristics
> of line transmission as the line could be VERY long or lossy. If the level
> is 0 then a start bit is detected, then 16 clock counts later the first bit
> is sampled and so on for the number of bits being sent. After the last
> number of expected bits (data and parity) has been received the stop bit is
> sampled to make sure it is 1 for the number of stop bit times expected.
> Failure to see stop bit at 1 sets the framing error flag/bit for the device.
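To make that mechanism concrete, here is a minimal sketch of a receive state machine driven by a 16x baud-rate tick. The function names are hypothetical, and real UARTs usually majority-vote three samples around mid-bit rather than taking the single sample shown:

    /* Minimal sketch of a 16x-oversampled UART receiver.
     * Assumptions: rx_tick() is called at 16 * baud rate, rx_pin()
     * returns the current line level, and one sample per bit is
     * enough (real parts often vote on 3 samples around mid-bit). */
    #include <stdint.h>

    enum state { IDLE, START, DATA, STOP };

    static enum state st = IDLE;
    static uint8_t count, bitnum, shift;

    extern int rx_pin(void);             /* hypothetical: read RX line   */
    extern void rx_byte(uint8_t b);      /* hypothetical: deliver a byte */
    extern void rx_framing_error(void);  /* hypothetical: report error   */

    void rx_tick(void)                   /* called at 16x the baud rate  */
    {
        switch (st) {
        case IDLE:
            if (rx_pin() == 0) { st = START; count = 0; }
            break;
        case START:
            if (++count == 8) {          /* middle of the start bit      */
                if (rx_pin() == 0) { st = DATA; count = 0; bitnum = 0; shift = 0; }
                else st = IDLE;          /* glitch, not a real start bit */
            }
            break;
        case DATA:
            if (++count == 16) {         /* middle of the next data bit  */
                count = 0;
                shift >>= 1;
                if (rx_pin()) shift |= 0x80;  /* LSB is sent first       */
                if (++bitnum == 8) st = STOP;
            }
            break;
        case STOP:
            if (++count == 16) {         /* middle of the stop bit       */
                if (rx_pin()) rx_byte(shift);
                else rx_framing_error(); /* stop bit not 1: framing error */
                st = IDLE;
            }
            break;
        }
    }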
The 16550 docs don't say how many samples it takes within a given data bit, or how many must agree, to consider the bit one state or the other.
>> But the net effect is still the same even if the internal description
>> isn't right. The lower the baud rate, the worse the cumulative error
>> becomes.
> By using a 16 * clock the cumulative error is NOT the problem, it is the
> drift of the actual bit rate clock at the receiver compared to the
> transmitter as a PERCENTAGE.
The percentage of time per cycle that they're off is constant. But starting at the start bit, the two UARTs' internal clocks drift apart. Call it phase shift, cumulative error, or whatever; the net effect is that the UART samples in the wrong place and eventually reads the wrong data.
> Using a 16 * clock means you have more chance of sampling
> at the middle of the bit time for each bit than using a 1 * clock to
> sample.
One would think that a 2x clock would be the minimum needed to sample even semi-accurately.
> It is the drift of this sample point that matters with respect to the
> actual transmitted clock rate, NOT the expected clock rate.
Agreed.
> gives you even more benefit of less drift from bit to bit. So cumulative
> error for a FRAME becomes a function of bit to bit timing, as each bit
> width can only vary by 1/16 * 1/(master clock divisor) * master clock
> drift.
Thanks for stating what I have been trying to say all along. ;-)
> For the frame cumulative error to happen the 16 * clock must not drift
> more than 1/(n bits) MAXIMUM from the transmitted clock rate. In a 10-bit
> frame (8 data, 1 stop, 1 start) this means 1/10 = 10%; to achieve this,
> both ends must not drift by more than 5%, which is the bit rate PERCENTAGE
> error.
Wouldn't the approach to sampling also have an effect on what percentage of error was tolerable?
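As a rough worked check (a sketch assuming the single mid-bit sample and the 10-bit frame described above): the last sample of a frame falls 9.5 bit times after the start edge, and it must stay within half a bit of where the transmitter put the bit.

    /* Rough frame-tolerance budget (a sketch: one sample at mid-bit,
     * 10-bit frame = start + 8 data + stop). */
    #include <stdio.h>

    int main(void)
    {
        double last_sample = 9.5;  /* bit times from start edge to stop sample */
        double margin      = 0.5;  /* sample must stay within half a bit       */
        double budget      = margin / last_sample;

        printf("total clock mismatch budget: %.2f%%\n", 100.0 * budget);
        printf("per-end tolerance:           %.2f%%\n", 100.0 * budget / 2.0);
        /* Prints ~5.26% total (the usual ~5% rule of thumb) and ~2.63%
         * per end; Paul's 10%/5% figure above allows a full-bit margin. */
        return 0;
    }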
> [snip]
Very nice examples - perfectly understood.
>> No, this is one of those communication things where if I'm not 110% clear,
>> people jump all over me rather than realizing by my descriptions that I
>> really have a clue what I'm talking about but may not be describing it
>> well. My problem is not one of lack of understanding, but rather not being
>> detailed and clear enough in describing everything I'm thinking.
>
> No it is a concept block about forgetting the effects of dividing down a
> faster clock to get a more accurate slower clock.
In the context of at least the sampling, it was a lack of understanding of how that UART worked. But I'm clear and have been clear on the difference between synchronous communication and asynchronous communication from day 1, even if it's lost in translation between what I thought and what I wrote.
>> I actually did develop modem and fax machine firmware for a couple of
>> years, doing async and sync communication, so I really do have relevant
>> experience even if my vernacular isn't 100% to spec.
> Revisit the problem by actually doing the calculations from the oscillator
> downwards for drift and variances like integer rounding of divisors.
The divisor is going to have the bigger impact on baud rate problems than oscillator/clock drift (agreed). But I think the original poster said his oscillator was +/-20%, and even changing the clock from 1.84 MHz to 2 MHz is an ~8% difference that I know from experience won't work. However, as you say, there's more to it than that: the sampling mechanism and internal clocking can have a significant effect on tolerance. -->Neil
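To put numbers on the divisor effect (a sketch; the 1.8432 MHz UART crystal, the /16 prescaler and the 9600 baud target are standard illustrative values, not figures from the thread):

    /* Baud rate error from integer divisor rounding (a sketch). */
    #include <stdio.h>

    static void show(double clk)
    {
        double target = 9600.0;
        double ideal  = clk / 16.0 / target;   /* ideal divisor        */
        long   div    = (long)(ideal + 0.5);   /* nearest integer wins */
        double actual = clk / 16.0 / (double)div;

        printf("clk %.4f MHz: divisor %ld, actual %.1f baud, error %+.2f%%\n",
               clk / 1e6, div, actual, 100.0 * (actual - target) / target);
    }

    int main(void)
    {
        show(1.8432e6);   /* divisor 12, 9600.0 baud,  0.00% error */
        show(2.0000e6);   /* divisor 13, 9615.4 baud, +0.16% error */
        /* Keeping the old divisor of 12 with the 2 MHz clock instead
         * gives 125000/12 = 10416.7 baud, +8.5% -- far outside the ~5%
         * budget, which is why the crystal swap alone breaks things. */
        return 0;
    }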
Paul Carpenter wrote:
>
... snip about 16 x clocks usage ...
> Once a start bit edge is detected, 8 counts of the clock are used to then
> sample in the middle of the bit to allow for slew and other characteristics
> of line transmission as the line could be VERY long or lossy. If the level
> is 0 then a start bit is detected, then 16 clock counts later the first
> bit is sampled and so on for the number of bits being sent. After the last
> number of expected bits (data and parity) has been received the stop bit
> is sampled to make sure it is 1 for the number of stop bit times expected.
> Failure to see stop bit at 1 sets the framing error flag/bit for the device.
A point that should be appreciated is that the stop bit is sampled 9.5 bit periods after the start edge (start, 8 data bits, then the middle of the stop bit), after which the UART is ready to detect a new start bit. If the transmitting clock rate is off far enough for that detection point to be too late to catch the next start bit, either a framing error results right then, or the detection points continue to wander. This results in missing data when transmission is back-to-back, i.e. as fast as possible. A way to relax this requirement is to configure the sender to use 1 1/2, or even 2, stop bits. This is not possible on all UARTs.

-- 
"The most amazing achievement of the computer software industry is its
continuing cancellation of the steady and staggering gains made by the
computer hardware industry..." - Petroski
schwobus@aol.com (Schwob) wrote in message news:<123e50e1.0408171246.4f10c00@posting.google.com>...
> Thanks everybody for the inputs. Sorry if I started a fight between
> synchronous / synchronized.... wording, was not my intention. In the
> end I have the feeling that most violently agreed ;-)
>
> There were recommendations to fine tune the devices if possible. I
> have seen this option for some architectures, e.g. Microchip or
> Philips, but it does not really help in many applications if the other
> communication partner does not provide data to adapt one's own
> frequency (did not want to use the sync-word). Also software to
> resample the baud rate (sort of autobauding) would be needed; using a
> timer to measure the duration of the data byte would probably be the
> best method.
>
> My goal is to have it simple, not to trim while communicating, and as
> I understand the different on-chip oscillators, this is not possible
> with many of them but it is possible with some others. There are
> devices from Silabs that fit the requirement for accuracy but not the
> requirement for my BOM (a "little" too expensive), then there are those
> LPC900 devices and one in particular looks ideal, the LPC916 with 2k
> Flash, SPI, UART and I2C all on one low cost 16-pin micro. The only
> possible catch is, there is no option to connect an external crystal.
> If somebody knows about competitive devices with similar features and
> << $1 in 10k+ quantities, please let me know, because I have not been
> able to find one and it is pretty much mandatory in our company to
> present 2 REAL alternatives to the financial controller.
>
> So, if you know about devices that offer a good internal osc., all 3
> options for serial communication, 2k flash and an ADC, let me know;
> if not, I guess I found the best solution already.
>
> Cheers, Schwob
Hello Schwob, although I don't know about other devices (non-LPC900) with good serial features AND a good enough internal oscillator for less than a dollar, I posted something along the lines of your questions recently in a Yahoo user group. There seem to be some knowledgeable people about the LPC900 devices there. Maybe you can get help there: http://groups.yahoo.com/group/lpc900_users/message/13 CU
"Neil Bradley" <nb_nospam@synthcom.com> wrote in message
news:10i4upq53dpmde3@corp.supernews.com...
> "FLY135" <fly_135(@ hot not not)notmail.com> wrote in message > news:TFtUc.24919$nx2.3491@newsread2.news.atl.earthlink.net... > > "Neil Bradley" <nb_no_spam@synthcom.com> wrote in message > > news:10i4f5eh141ob9e@corp.supernews.com... > >> And the more I post, the more people pigenhole me and look for things > >> in my statements and write me off as an idiot, insult me, or look for > >> ways > >> to shift the conversation to continue to make me look wrong about > > I think the problem was that you bastardized the
synchronous/asynchronous
> > terminology and then tried to defend it instead of just going... whoops,
I
> > fouled that up. > > Like I said before, translation was lost between what I was thinking and > what I actually posted. I was NOT trying to imply that a UART is
synchronous
> communication, only that for the period of a byte transmission the two
ends
> have to be at least semi-synchronized. Note that this is different from > synchronous (clock based) communication. I'm perfectly aware of that, and
I
> stated as such many, many times. Not sure why people aren't realizing
that.
> > > A UART doesn't use "synchronous clocking", it uses clocks > > that are within a frequency tolerance in order to communicate. > > I never once said that a UART used synchronous clocking. My original > statement meant to say that in a sychronous transport (like I2C or SPI),
the
> clocking can be as sloppy as it needs to be and it won't matter. > > -->Neil >
I'm now reasonably convinced that you understand most of how UART communication works - and I'm quite happy to accept that you have the experience to get one working without the understanding (you don't need to understand *why* a 5% match is an absolute requirement, regardless of baud rate, divisor, crystal rate, etc., to be able to match the clocks well enough). You still have this obsession with cumulative absolute errors despite their irrelevancy to the reliability of the communication, and you still have incorrect ideas about the effect of divisors. However, as you say yourself, most of the problem lies in the communication between yourself and other posters, rather than communication between microcontrollers.

I'll leave you with a quotation from one of your earliest posts in this thread, which you may consider in light of your denial above, and then you'll see why people think you are confused (not an idiot, but definitely confused as to the terminology here):

"Neil Bradley" <nb_no_spam@synthcom.com> wrote in message
news:10hvb8dn9hcqoa6@corp.supernews.com...
> "CBFalconer" <cbfalconer@yahoo.com> wrote in message
> news:411F4D38.BF6EEF59@yahoo.com...
> > Neil Bradley wrote:
> >> "Schwob" <schwobus@aol.com> wrote in message
> >>> It is my understanding that a synchronous interface such as SPI
> >>> or I2C should work without problems even if the transmission rate
> >>> for I2C is not 400 kbit/s but rather something like 250 kbit/s.
> >>> Same is true for SPI.
> >> Correct. Asynchronous protocols don't matter much (if at all).
> > ITYM synchronous.
>
> No, I meant asynchronous, where things like I2C and SPI have separate
> clock and data lines. The clock lines can vary wildly and it'll still
> work. That won't happen with a UART which requires synchronous clocking
> to occur - on byte boundaries at least.
"Paul Carpenter" <paul$@pcserv.demon.co.uk> wrote in message
news:20040817.1842.302217snz@pcserv.demon.co.uk...

> No it is a concept block about forgetting the effects of dividing down a
> faster clock to get a more accurate slower clock.
>
> Consider the problem of cheap digital clocks/watches that use a 32.768KHz
> oscillator, using cheap as possible components but still only drift a few
> seconds a month or even year. This is because the second timing is produced
> by dividing down the 32.768KHz by 32768 (15 bits) to produce the 1 second
> pulse, so the error rate is the oscillator drift * 1/32768 !!
I don't know whether to laugh or cry at this post. If a 32.768 kHz crystal is 1% out, then when the signal is divided by 32768 to get a second pulse, the second pulse will be 1% out. It's that simple! It doesn't matter a **** if the crystal is at 32.768 kHz or 9.192631770 GHz (the oscillation frequency of the most common type of atomic clock) - the percentage error of the source frequency translates directly to the percentage error in the divided frequency.

The main reason 32.768 kHz crystals are used in watches is because 32.768 kHz crystals are used in watches - i.e., the economics of volume make it cheaper to make that particular frequency with high accuracies. A low frequency was picked because a low frequency means low power.

But if logic and physics fail those of you who don't understand timing errors, fall back on experience - the most accurate (in terms of how accurately it measures time, not how accurately you happen to have set it) clock in your room is probably your digital watch, running off a crystal in the kHz range. The least accurate will be your PC's clock, running off a crystal in the MHz range. But the most accurate you have access to will be if you have a GPS receiver, since they get their time from satellites with atomic clocks running in the GHz range. Perhaps that will make you realise that the source rate and the divisor do not matter!
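A quick numeric check of the division argument (a sketch; the 1% error figure is illustrative):

    /* Percentage error passes through a fixed divider unchanged. */
    #include <stdio.h>

    int main(void)
    {
        double f_nominal = 32768.0;           /* Hz                      */
        double f_real    = f_nominal * 1.01;  /* crystal running 1% fast */
        double out       = f_real / 32768.0;  /* divided down to ~1 Hz   */

        /* output error = (out - 1 Hz) / 1 Hz = the same 1% as the source */
        printf("output: %.5f Hz, error %.2f%%\n", out, 100.0 * (out - 1.0));
        return 0;
    }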
On Wednesday, in article <cfv4uf$p6p$1@news.netpower.no>
     david@no.westcontrol.spam.com "David Brown" wrote:
>"Paul Carpenter" <paul$@pcserv.demon.co.uk> wrote in message >news:20040817.1842.302217snz@pcserv.demon.co.uk... >> No it is a concept block about forgetting the effects of dividing down a >> faster clock to get a more accurate slower clock. >> >> Consider the problem of cheap digital clocks/watches that use a 32.768KHz >> oscillator, using cheap as possible components but still only drift a few >> seconds a month or even year. This is because the second timing is >produced >> by dividing down the 32.768KHz by 32768 (15bits) to produce the 1 second >> pulse, so the error rate is the oscillator drift * 1/32768 !! >> >I don't know whether to laugh or cry at this post. If a 32.768 KHz crystal >is 1% out, then when the signal is divided by 32768 to get a second pulse, >the second pulse will be 1% out. It's that simple! It doesn't matter a
When the divisor used is an EXACT match with the preferred divisor, then yes. When the divisor used is integer rounded it is not an exact match with the preferred divisor, and does have an effect on what is going to happen. If the expected divisor is 32890 and only 32768 can be achieved, the ratio is different due to the combination of the effects; what was missed off of the above was "* expected divisor" (due to late night), i.e.

      expected divisor                  1
    * ----------------    or    * -------------- * expected divisor
       actual divisor             actual divisor

For EXACT matching ratios (e.g. powers of two) this would reduce to '1'.
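Putting numbers on the 32890/32768 example (a sketch):

    /* Divisor-rounding error factor from Paul's example: the clock chain
     * ends up off by expected_divisor / actual_divisor even if the
     * oscillator itself is perfect. */
    #include <stdio.h>

    int main(void)
    {
        double expected = 32890.0;  /* divisor the ideal ratio calls for    */
        double actual   = 32768.0;  /* divisor the hardware can actually do */
        double factor   = expected / actual;

        printf("factor %.5f => %+.2f%% on top of oscillator drift\n",
               factor, 100.0 * (factor - 1.0));
        return 0;   /* prints factor 1.00372 => +0.37% */
    }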
> **** if the crystal is at 32.768 KHz or 9.192631770 GHz (the oscillation
> frequency of the most common type of atomic clock) - the percentage error
> of the source frequency translates directly to the percentage error in the
> divided frequency.
Both of those examples are exact powers of two, chosen for that precise reason.
> The main reason 32.768 kHz crystals are used in watches is because 32.768
> kHz crystals are used in watches - i.e., the economics of volume make it
> cheaper to make that particular frequency with high accuracies. A low
> frequency was picked because a low frequency means low power.
Also the NOMINAL value of the crystal is an exact power of two. No rounding of divisors putting errors in the way, and simpler logic required, less logic also means less power.
> But if logic and physics fail those of you who don't understand timing
> errors, fall back on experience - the most accurate (in terms of how
> accurately it measures time, not how accurately you happen to have set it)
> clock in your room is probably your digital watch, running off a crystal
> in the kHz range.
Nominal frequency is exact power of two.
> The least accurate will be your PC's clock, running off a
> crystal in the MHz range.
Using a cheap 4 * NTSC subcarrier 14.31818 MHz crystal[1], made in huge volumes with economies of scale to make them cheaply and with high accuracy. This frequency is then divided by 12 and then by 65536, to give an approx 18.2 Hz interrupt rate (actually 18.2065 Hz); to achieve an 18 Hz interrupt would require a divisor of 66287.9, more than the 16 bits of the 8254. This feeds interrupt and timer software that has loads of kludges in it to add part seconds every n or x or y seconds, to get nearly right. It was never designed to be a time piece, and we all know how deterministic PC software is. The errors come from many parts, so it is not a valid comparison. Why they did not try to achieve a 20 Hz interrupt rate instead still amazes me, as this would have given a much more accurate timebase clock relative to counters of seconds.
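The arithmetic behind those figures, for anyone who wants to check it (a sketch; the 4x colorburst crystal and the /12 feed into the 8254 are the standard PC values):

    /* PC timer-tick arithmetic: 4x NTSC colorburst crystal, divided by 12
     * into the 8254, then by a 16-bit divisor (65536 at power-up). */
    #include <stdio.h>

    int main(void)
    {
        double xtal = 4.0 * 3579545.0;   /* 14.31818 MHz              */
        double pit  = xtal / 12.0;       /* 1.19318 MHz into the 8254 */

        printf("tick at divisor 65536: %.4f Hz\n", pit / 65536.0); /* 18.2065 */
        printf("divisor for 18 Hz:     %.1f\n", pit / 18.0); /* 66287.9, >16 bits */
        printf("divisor for 20 Hz:     %.1f\n", pit / 20.0); /* 59659.1, fits     */
        return 0;
    }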
> But the most accurate you have access to will be
> if you have a GPS receiver, since they get their time from satellites with
> atomic clocks running in the GHz range. Perhaps that will make you realise
> that the source rate and the divisor do not matter !
GPS is chosen to be an accurate time piece, otherwise GPS will not work. This has to be designed to be an extremely accurate time piece first, using an exact power of two for the purpose of not introducing rounding errors.

[1] The frequency was chosen for using the clock to drive the output from a CGA card to a TV using NTSC output. Using it as the clock for the timing circuit of the system clock was more an afterthought of reducing the number of oscillators.

-- 
Paul Carpenter | paul@pcserv.demon.co.uk
<http://www.pcserv.demon.co.uk/> Main Site
<http://www.gnuh8.org.uk/> GNU H8 & mailing list info.
<http://www.badweb.org.uk/> For those web sites you hate.
Paul Carpenter wrote:
<snip>
> Using a cheap 4 * NTSC subcarrier 14.31818 MHz crystal[1], made in huge
> volumes with economies of scale to make them cheaply and with high accuracy.
> This frequency is then divided by 12 and then by 65536, to give an approx
> 18.2 Hz interrupt rate (actually 18.2065 Hz); to achieve an 18 Hz interrupt
> would require a divisor of 66287.9, more than the 16 bits of the 8254. This
> feeds interrupt and timer software that has loads of kludges in it to add
> part seconds every n or x or y seconds, to get nearly right. It was never
> designed to be a time piece, and we all know how deterministic PC software
> is. The errors come from many parts, so it is not a valid comparison. Why
> they did not try to achieve a 20 Hz interrupt rate instead still amazes me,
> as this would have given a much more accurate timebase clock relative to
> counters of seconds.
Probably design laziness, and a "we'll fix it later" mindset.... Given the 8254 hardware, and the Xtal you mention, they could have got to within about 1.4 ppm of 20 Hz, with a 59659 divisor, and that is within the trim range of any crystal.

Did they use all 3 8254 channels? - ISTR one was for refresh, but the next sensible design step would have been to cascade two timers, and cascade-clock using the 20 Hz square wave output, which would have meant even Windows could keep good time. (Of course, Windows was a long way in the future, and who would ever need more than 640K anyway.... :)

-jg
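A quick check of that divisor (a sketch, using the 14.31818 MHz / 12 input clock from above):

    /* Verify the 20 Hz divisor for the 8254. */
    #include <stdio.h>

    int main(void)
    {
        double pit = 14318180.0 / 12.0;         /* 1193181.67 Hz          */
        long   div = (long)(pit / 20.0 + 0.5);  /* nearest divisor: 59659 */
        double f   = pit / (double)div;

        printf("divisor %ld -> %.6f Hz (%.2f ppm off 20 Hz)\n",
               div, f, 1e6 * (f - 20.0) / 20.0);
        return 0;   /* ~20.000028 Hz, about +1.4 ppm */
    }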
On Thu, 19 Aug 2004 10:26:25 +1200, in article
     <UHQUc.3954$zS6.446259@news02.tsnz.net>
     no.spam@designtools.co.nz "Jim Granville" wrote:

> Paul Carpenter wrote:
> <snip>
>> Using a cheap 4 * NTSC subcarrier 14.31818 MHz crystal[1], made in huge
>> volumes with economies of scale to make them cheaply and with high
>> accuracy. This frequency is then divided by 12 and then by 65536, to give
>> an approx 18.2 Hz interrupt rate (actually 18.2065 Hz); to achieve an
>> 18 Hz interrupt would require a divisor of 66287.9, more than the 16 bits
>> of the 8254. This feeds interrupt and timer software that has loads of
>> kludges in it to add part seconds every n or x or y seconds, to get
>> nearly right. It was never designed to be a time piece, and we all know
>> how deterministic PC software is. The errors come from many parts, so it
>> is not a valid comparison. Why they did not try to achieve a 20 Hz
>> interrupt rate instead still amazes me, as this would have given a much
>> more accurate timebase clock relative to counters of seconds.
>
> Probably design laziness, and a "we'll fix it later" mindset....
> Given the 8254 hardware, and the Xtal you mention, they could have got
> to within about 1.4 ppm of 20 Hz, with a 59659 divisor, and that is within
> the trim range of any crystal.
That was my view from a quick look at calcs and the Tech refs.
> Did they use all 3 8254 channels ? - ISTR one was for refresh, but the
The other timer was used as a programmable tone generator for the speaker; this I checked just now in some old IBM XT and AT Tech Reference manuals I have. Even then they were worried about getting sound effects to help you type a letter :-^
> next sensible design step would have been to cascade two timers, and
> cascade-clock using the 20Hz sq wave output, which would have meant
> even windows could keep good time. ( of course, windows was a long way
> in the future, and who would ever need more than 640K anyway.... :)
> -jg
Even using a 20 Hz interrupt rate would have made less work for coding and system overhead, and a minor improvement in determinism. I can remember how often things had to be coded to get around the problems of getting 'normal' multiples of seconds for events reasonably reliably.

-- 
Paul Carpenter | paul@pcserv.demon.co.uk
<http://www.pcserv.demon.co.uk/> Main Site
<http://www.gnuh8.org.uk/> GNU H8 & mailing list info.
<http://www.badweb.org.uk/> For those web sites you hate.
On Wednesday, in article
     <20040818.2000.302242snz@pcserv.demon.co.uk>
     paul$@pcserv.demon.co.uk "Paul Carpenter" wrote:

> Using a cheap 4 * NTSC subcarrier 14.31818 MHz crystal[1], made in huge
> volumes with economies of scale to make them cheaply and with high accuracy.
> This frequency is then divided by 12 and then by 65536, to give an approx
> 18.2 Hz interrupt rate (actually 18.2065 Hz); to achieve an 18 Hz interrupt
> would require a divisor of 66287.9, more than the 16 bits of the 8254. This
> feeds interrupt and timer software that has loads of kludges in it to add
> part seconds every n or x or y seconds, to get nearly right. It was never
> designed to be a time piece, and we all know how deterministic PC software
> is. The errors come from many parts, so it is not a valid comparison. Why
> they did not try to achieve a 20 Hz interrupt rate instead still amazes me,
> as this would have given a much more accurate timebase clock relative to
> counters of seconds.
It has long been suggested that the designers were *trying* to get an 18.2 Hz clock rate, to make counting hours simple: a 16-bit counter, incremented once per cycle at 18.20444444 Hz, rolls over exactly once per hour. It would have been rather neat (for some value of "neat") if they had managed it. (Of course, this may be post-debacle revisionism; but it has a ring of plausibility to it.)

-- 
Simon Turner DoD #0461
simon@twoplaces.co.uk
Trust me -- I know what I'm doing! -- Sledge Hammer
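The rollover arithmetic checks out (a sketch):

    /* Check the "rolls over once per hour" claim: a 16-bit counter at
     * 18.20444... Hz wraps after exactly 3600 seconds. */
    #include <stdio.h>

    int main(void)
    {
        double target = 65536.0 / 3600.0;   /* ticks/s for a 1-hour wrap */
        printf("required rate: %.8f Hz\n", target);  /* 18.20444444      */
        printf("actual tick:   %.8f Hz\n",
               14318180.0 / 12.0 / 65536.0);         /* 18.2065, a near miss */
        return 0;
    }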