
Why should I (not) use an internal oscillator for 8-bit micros

Started by Schwob August 14, 2004
"Neil Bradley" <nb_no_spam@synthcom.com> wrote in message
news:10i25cpqo4omjff@corp.supernews.com...
> Let me try to explain clearly what I'm talking about:
>
> 1.8432MHz clock comes into the 16550, internally divided down to 115200 (by
> 16) to generate the master clock. From there you can specify a divisor of
> 1-65535. To get a 2400bps baud clock for example, you need to program up the
> divisor to 48 (115200/2400). That means for 1 cycle of the 2400bps baud
> clock, you are incurring the cumulative error of 48 cycles of the master
> clock, multiplied by 8 bits (or however many you're sending). At 2400bps in
> this example, there are 48*8=384 master clock cycles of cumulative error per
> 8 bits of data transferred. At 57600 in this example, there are 2*8=16
> master clock cycles of cumulative error. For something like 75 baud, that's
> 1536 master clock cycles per bit, or 1536*8=12288 master clock cycles of
> cumulative error. Gets much, much worse as the divisor increases (and the
> baud rate lowers).

Nice explanation, but 384 master clock cycles at 2400bps or 12288 cycles at
75bps are *exactly the same* bit error as a percentage of the bit time.

Meindert
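To see Meindert's point numerically, here is a minimal sketch in C (not from
the thread; it assumes the divide-by-16-then-divisor arithmetic described
above and a hypothetical 2% oscillator error):

#include <stdio.h>

/* The absolute timing error grows with the divisor, but the error as a
   percentage of the bit time is set only by the oscillator's relative
   error, so it is identical at every baud rate. */
int main(void)
{
    const double master = 1843200.0 / 16.0;   /* 115200 Hz base rate */
    const double err = 0.02;                  /* assumed 2% clock error */
    const long bauds[] = { 75, 2400, 57600 };

    for (int i = 0; i < 3; i++) {
        long div = (long)(master / bauds[i]);        /* 1536, 48, 2 */
        double bit_nom = div / master;               /* nominal bit time */
        double bit_act = div / (master * (1 + err)); /* actual bit time */
        printf("%6ld baud: divisor %4ld, bit error %8.3f us (%.2f%%)\n",
               bauds[i], div, (bit_nom - bit_act) * 1e6,
               100.0 * (bit_nom - bit_act) / bit_nom);
    }
    return 0;
}

The absolute error column differs wildly between 75 and 57600 baud, but the
percentage column is the same for all three rates, and the percentage is all
the receiver's sampling point cares about.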
I don't follow all of your code here (not having had the dubious pleasure of
working with 8051's, and not having the rest of your code), but I've written
software uarts for several systems - including one on an avr that ran
flawlessly at 38400 baud.  So if I outline the way the state machine works,
can you confirm that your code works this way?  I have a vague suspicion
that you are not handling the start bit properly, but it could be in other
missing code.  This outline is missing features such as multiple sampling
with majority voting, which is standard in hardware uarts but often missing
in software uarts (unless they are low speed, or the processor has lots of
spare time).

1) Identify a falling edge on input.  This can be done by an interrupt on
the pin, or by continuous sampling at a rate of at least 4x baud (hardware
uarts almost always use 16x baud).  When an edge is detected, set your
sampler timer for *half* a bit time.

2) After the half bit time, check the input.  If it is still low, you've got
a start bit and your sampler timer can be set for a full bit time.  If it is
high, it was noise - restart the state machine.

3) For the next 8 bits, sample the input, and set the sampler timer for a
full bit time (actually, if you are using a hardware timer for this, it
should be set up to reset itself to minimize jitter).

4) For the final bit (at 9.5 bit times from the initial falling edge), check
for a high stop bit.  Clear the state machine ready for a new byte.  Note
that the whole process is finished half a bit time before the sender has
finished sending the stop bit (assuming a perfect match, of course - if the
sender is 5% slower, the receiver finishes as the sender starts the stop
bit, and if the sender is 5% faster, the receiver finishes as the sender
ends the stop bit).
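For reference, here is a minimal sketch in C of the state machine outlined
in steps 1-4 above. It is an illustration rather than code from the thread:
rx_pin_level(), timer_start_half_bit(), timer_start_full_bit(),
rx_byte_ready() and rx_framing_error() are hypothetical placeholders for the
MCU's pin and timer interfaces, and majority voting is omitted just as it is
in the outline.

#include <stdint.h>

typedef enum { RX_IDLE, RX_START, RX_DATA, RX_STOP } rx_state_t;

static rx_state_t state = RX_IDLE;
static uint8_t    shift_reg, bit_cnt;

extern int  rx_pin_level(void);          /* reads the RX pin: 0 or 1 */
extern void timer_start_half_bit(void);
extern void timer_start_full_bit(void);  /* ideally auto-reloading */
extern void rx_byte_ready(uint8_t b);
extern void rx_framing_error(void);

/* Step 1: call from the falling-edge interrupt on the RX pin. */
void rx_edge_isr(void)
{
    if (state == RX_IDLE) {      /* real code would also mask this IRQ here */
        state = RX_START;
        timer_start_half_bit();  /* move the sample point to mid-bit */
    }
}

/* Steps 2-4: call from the bit-timer interrupt. */
void rx_timer_isr(void)
{
    switch (state) {
    case RX_START:                       /* half a bit after the edge */
        if (rx_pin_level() == 0) {       /* still low: a genuine start bit */
            shift_reg = 0;
            bit_cnt = 0;
            state = RX_DATA;
            timer_start_full_bit();
        } else {
            state = RX_IDLE;             /* it was noise: restart */
        }
        break;

    case RX_DATA:                        /* 8 data bits, LSB first */
        shift_reg >>= 1;
        if (rx_pin_level())
            shift_reg |= 0x80;
        if (++bit_cnt == 8)
            state = RX_STOP;
        timer_start_full_bit();
        break;

    case RX_STOP:                        /* 9.5 bit times after the edge */
        if (rx_pin_level())
            rx_byte_ready(shift_reg);    /* high stop bit: byte is good */
        else
            rx_framing_error();
        state = RX_IDLE;                 /* ready for the next byte */
        break;

    default:
        state = RX_IDLE;
        break;
    }
}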

Any comments?

David




"Neil Bradley" <nb_no_spam@synthcom.com> wrote in message
news:10i2ikun5i1ss4a@corp.supernews.com...
> "Jim Granville" <no.spam@designtools.co.nz> wrote in message > news:x2bUc.3484$zS6.417715@news02.tsnz.net... > >>> The UART does NOT care ( or even know) how many master clocks it
takes.
> >>> ALL the UART sees is the BAUD clock, commonly 1/16 the bit time. The > >>> UART state engine starts sampling on the nearest 1/16 bit time to the > >>> START edge, > >> Wouldn't it be the first 1/n bit time *after* the START edge > >> (therefore a maximum error of about 1/n bit time rather than 1/(2*n), > >> or is the UART presumed to be prescient? > > :) - well spotted - when I wrote that, it did occur to me, > > that just maybe, someone would consider that 'nearest' might > > apply to both before and after the START edge. > > Maybe this thread will now have another lease of life on this ? > > For those interested, here's a shot of a software UART I did to
communicate
> with a windspeed/wind direction CPU with an internal clock (using the INT0 > input on the CPU). For those who want to vilify me on why I didn't use the > built in UART in the 8051, it was already in use for communicating to a
host
> PC. > > void WindBaudRateInterrupt(void) interrupt 1 using 3 > { > if (0 == sg_u8BitCount) > { > // Framing error - don't do anything. Restart the serial port state > machine > } > else > if (1 == sg_u8BitCount) // First bit - let's make sure it's asserted > { > if (0 == INT0) > { > // It's asserted, let's reschedule the timer! > > TR0 = 0; > TH0 = FULL_TIME_HIGH; > TL0 = FULL_TIME_LOW; > TF0 = 0; > TR0 = 1; > sg_u8BitCount = 2; > sg_u8ByteReceived = 0; > > return; > } > } > else > if (sg_u8BitCount != 10) > { > TR0 = 0; > TH0 = FULL_TIME_HIGH; > TL0 = FULL_TIME_LOW; > TF0 = 0; > TR0 = 1; > > // Look at the state - suck up the bit! > > sg_u8ByteReceived >>= 1; > > if (INT0) // That makes it a 0 - nothing to do! > { > sg_u8ByteReceived |= 0x80; > } > else // And this makes it a 1 > { > } > > ++sg_u8BitCount; > return; > } > else > if (10 == sg_u8BitCount) > { > // It's asserted - reschedule a HALF bit timer and process the > // byte! > > > if (INT0) > { > EX0 = 1; > TR0 = 0; > TF0 = 0; > sg_u8BitCount = 0; > > WindProcessStream(sg_u8ByteReceived); > return; > } > } > > // We've got a framing error - resync > > WindFramingError(); > } > > > -->Neil > >
"Neil Bradley" <nb_no_spam@synthcom.com> wrote in message
news:10i25cpqo4omjff@corp.supernews.com...
> "David Brown" <david@no.westcontrol.spam.com> wrote in message > news:cfq2tv$pht$1@news.netpower.no... > >> Um... well, originally the question was the tolerance of the *CRYSTAL* > >> inside of microprocessors, not the "bit time" error: > > What difference does that make? If an oscillator is 2% out, the the bit > > time on a uart based on that oscillator is 2% out. > > Very true. My only point was that everything that comes between the start > and stop bits are susceptible to cumulative error since there is no
(hehehe)
> sync point between data bits. > > The greater the baud rate divisor, the more cumulative the error becomes. > > >> But the error is *ADDITIVE* fron a per bit basis until the next start > >> bit, > > Exactly - your 5% error adds up to 50% (or 47.5%, to be exact) error
over
> > the ten bits transmitted. But this is completly independant of the bit > > time - it is a relative error. > > Let me try to explain clearly what I'm talking about: > > 1.8432Mhz Clock comes in to 16550, internally divided down to 115200 (by
16)
> to generate the master clock. From there you can specify a divisor of > 1-65535. To get a 2400bps baud clock for example, you need to program up
the
> divisor to 48 (115200/2400). That means for 1 cycle of the 2400bps baud > clock, you are incurring the cumulative error of 48 cycles of the master > clock, multiplied by 8 bits (or however many you're sending). At 2400bps
in
> this example, there are 48*8=384 master clock cycles of cumulative error
per
> 8 bits of data transferred. At 57600 in this example, there are 2*8=16 > master clock cycles of cumulative error. For something like 75 baud,
that's
> 1536 master clock cycles per bit, or 1536*8=12288 master clock cycles of > cumulative error. Gets much, much worse as the divisor increases (and the > baud rate lowers). >
I know this is a pedantic point, but if your misunderstanding of timing
errors had been correct, it would be very relevant - when the 1.8MHz clock
comes into the 16550, it is not divided by 16. It is divided by the
programmable divisor to generate a baud x16 clock which is used for
oversampling the input. Since the order of the divisors makes no difference
here in the real world, it is easier to do the sums if you pretend the 16 is
a pre-divisor.

You are correct that at 2400 baud there will be 24 times as much cumulative
absolute timing error from the clock as there will be at 57600. However,
each bit is 24 times as long, so the percentage error is exactly the same.
If the 1.8 MHz crystal is 2% out, then your 2400 baud bit time will be 2%
out, just like your 57600 baud bit time. In absolute terms, measured in
microseconds, it will be longer, but THAT DOESN'T MATTER. It is only the
relative error that makes any difference.

This appears to be one of these "concept block" things that we all get on
occasion, when we are confident of something that everyone else thinks is
wrong. I might sound a bit exasperated in my posts, but I'm not trying to
ridicule you - I'm hoping it will suddenly "click" for you and you will
understand the point. You've just got the wrong end of the stick* at the
moment.

* That phrase has a wonderfully graphic origin. When the Roman legions were
"on tour", so to speak, they lived in tents of 6 to 10 men. Due to a lack of
leaves near big camps, each tent was issued with a stick with a sponge at
one end for sanitary purposes. It was this stick that you really didn't want
to grab the wrong end of!
"Meindert Sprang" <mhsprang@NOcustomSPAMware.nl> wrote in message 
news:10i3allg849f442@corp.supernews.com...
> "Neil Bradley" <nb_no_spam@synthcom.com> wrote in message > news:10i25cpqo4omjff@corp.supernews.com... >> Let me try to explain clearly what I'm talking about: >> 1.8432Mhz Clock comes in to 16550, internally divided down to 115200 (by >> to generate the master clock. From there you can specify a divisor of >> 1-65535. To get a 2400bps baud clock for example, you need to program up >> divisor to 48 (115200/2400). That means for 1 cycle of the 2400bps baud >> clock, you are incurring the cumulative error of 48 cycles of the master >> clock, multiplied by 8 bits (or however many you're sending). At 2400bps >> this example, there are 48*8=384 master clock cycles of cumulative error >> 8 bits of data transferred. At 57600 in this example, there are 2*8=16 >> master clock cycles of cumulative error. For something like 75 baud, >> 1536 master clock cycles per bit, or 1536*8=12288 master clock cycles of >> cumulative error. Gets much, much worse as the divisor increases (and the >> baud rate lowers). > Nice explanation, but 384 master clock cycles at 2400bps or 12288 cycles > at > 75bps are *exactly the same* bit error as a percentage of the bit time.
And cars get better fuel economy than trucks do, as a rule. What does that have to do with anything? That wasn't even a point I was trying to make! -->Neil
"David Brown" <david@no.westcontrol.spam.com> wrote in message 
news:cfsd24$r3s$1@news.netpower.no...
>> Very true. My only point was that everything that comes between the start
>> and stop bits are susceptible to cumulative error since there is no (hehehe)
>> sync point between data bits.
>> [snip]
>> 1536 master clock cycles per bit, or 1536*8=12288 master clock cycles of
>> cumulative error. Gets much, much worse as the divisor increases (and the
>> baud rate lowers).
>
> I know this is a pedantic point, but if your misunderstanding of timing
> errors had been correct, it would be very relevant - when the 1.8MHz clock
> comes into the 16550, it is not divided by 16. It is divided by the
> programmable divisor to generate a baud x16 clock which is used for
> oversampling the input. Since the order of the divisors makes no difference
> here in the real world, it is easier to do the sums if you pretend the 16
> is a pre-divisor.
I don't think I ever stated that it did matter. However, empirically
speaking, there is a 16 bit divisor from the input clock. If you take a look
at the 6.0 RCLK description in the 16550 document it states:

"RCLK, Receiver clock, pin 9: This input is the 16 x baud rate clock for the
receiver section of the chip"

This indicates to me there is a 16 bit divisor from the clock which
generates the baud clock (which is in turn fed to a divisor). The chip
operates this way empirically as well. So I guess what you're saying is that
it's really 16X the baud rate internally, and the sampling state machine
uses that as a basis for adjusting when it samples for a 1 or 0. If this
isn't entirely clear, please try to understand what I'm trying to say rather
than blasting me for my words.

But the net effect is still the same even if the internal description isn't
right. The lower the baud rate, the worse the cumulative error becomes.
> You are correct that at 2400 baud there will be 24 times as much cumulative
> absolute timing error from the clock as there will be at 57600. However,
> each bit is 24 times as long, so the percentage error is exactly the same.
For each bit, yes, but I never said it wasn't.
> If the 1.8 MHz crystal is 2% out, then your 2400 baud bit time will be 2%
> out, just like your 57600 baud bit time. In absolute terms, measured in
> microseconds, it will be longer, but THAT DOESN'T MATTER. It is only the
> relative error that makes any difference.
Again, I never said it wasn't.
> This appears to be one of these "concept block" things that we all get on
> occasion, when we are confident of something that everyone else thinks is
> wrong.
No, this is one of those communication things where if I'm not 110% clear,
people jump all over me rather than realizing from my descriptions that I
really do have a clue what I'm talking about but may not be describing it
well. My problem is not one of lack of understanding, but rather not being
detailed and clear enough in describing everything I'm thinking.

And the more I post, the more people pigeonhole me and look for things wrong
in my statements and write me off as an idiot, insult me, or look for ways
to shift the conversation to continue to make me look wrong about something.
The more I try to describe something, the more scrutiny I receive. This is
typical Usenet, though. Sheesh. Even after I very clearly laid out what I
was talking about, people continued to address issues that I wasn't even
debating!

I actually did develop modem and fax machine firmware for a couple of years,
doing async and sync communication, so I really do have relevant experience
even if my vernacular isn't 100% to spec.

-->Neil
On Sun, 15 Aug 2004 17:16:31 -0400, "Doug Dotson"
<dougdotson@NOSPAMcablespeed.NOSPAMcom> wrote:
[top posting fixed]

>"Neil Bradley" <nb_no_spam@synthcom.com> wrote in message >news:10hvisv6ucimkf3@corp.supernews.com... >> "CBFalconer" <cbfalconer@yahoo.com> wrote in message >> news:411FC1AD.91751819@yahoo.com... >> > Neil Bradley wrote: >> >> "CBFalconer" <cbfalconer@yahoo.com> wrote in message >> >>> Neil Bradley wrote: >> >>>> "Schwob" <schwobus@aol.com> wrote in message >> >>>>> It is my understanding that a synchronous interface such as SPI >> >>>>> or I2C should work without problems even if the transmition rate >> >>>> Correct. Asynchronous protocols don't matter much (if at all). >> >>> ITYM synchronous. >> >> No, I meant asynchronous, where things like I2C and SPI have >> >> separate clock and data lines. The clock lines can vary wildly >> > No, you do mean synchronous. >> >> No, I meant synchronous. I'm referring to the period of time when the byte >> itself is transferred. During byte transmission, a UART requires both ends >> to be completely synchronous. Of course the positions of when those bytes >> come is completely asynchronous. In a clock/data driven environment like >> I2C, you can clock at a completely irregular rate during the byte and it >> won't matter. You can't do that with a UART. >> >> -->Neil >I believe that UART stands for "Universal ASYNCRONOUS Receiver >Transmitter". You need to go back and study the difference between >sync and async.
One must distinguish between BIT synchronous/asynchronous and BYTE or
MESSAGE synchronous/asynchronous. The "asynchronous" in UART refers to the
fact that one or more bytes may be transmitted at any time. The time between
bytes must be at least 1 or 2 stop bits, but can be as long as one wants. On
the bit level, though, it can be either synchronous or asynchronous. I think
that when a clock is transmitted together with a byte-level asynchronous
protocol, it is referred to as isochronous.

Regards
Anton
"David Brown" <david@no.westcontrol.spam.com> wrote in message 
news:cfsc6m$qj0$1@news.netpower.no...
>I don't follow all of your code here (not having had the dubious pleasure
>of working with 8051's, and not having the rest of your code),
You're not missing much. ;-( In either the code or the 8051. ;-)
> software uarts for several systems - including one on an avr that ran
> flawlessly at 38400 baud. So if I outline the way the state machine works,
> can you confirm that your code works this way? I have a vague suspicion
> that you are not handling the start bit properly, but it could be in other
> missing code.
Yeah, the main interrupt handler isn't in the listing. Basically it's the
falling edge of the start bit (don't worry, I have an inverter, and it's not
RS232, it's TTL between the CPUs) that starts everything.
> This outline is missing features such as multiple sampling
> with majority voting, which is standard in hardware uarts but often missing
> in software uarts (unless they are low speed, or the processor has lots of
> spare time).
Nope, it doesn't do multiple sampling. I thought at one time I might need
it, but the clock is stable enough on both sides that it hasn't been a
problem for 1.5 solid years now with a constant 1200bps stream (no framing
errors - I count 'em). The two chips are sitting about 1 inch from each
other on a board, and it's all fairly low speed CMOS.
> 1) Identify a falling edge on input.
Correct.
> This can be done by an interrupt on the pin, or by continuous sampling at a
> rate of at least 4x baud (hardware uarts almost always use 16x baud). When
> an edge is detected, set your sampler timer for *half* a bit time.
I do this without the oversampling. When I get the falling edge of the start
bit (it's a falling edge when it enters the pin on my micro), I wait half a
bit time, then start a full bit time timer, so I'm sampling where the middle
of each bit should be. Agreed, not the most reliable way, but it is
sufficient for this specific application. I don't do any oversampling.
> 2) After the half bit time, check the input. If it is still low, you've got
> a start bit and your sampler timer can be set for a full bit time. If it is
> high, it was noise - restart the state machine.
It's more crude than that since I have a reliable environment. I just detect an edge. There isn't enough noise in this circuit to cause problems, but I do see and ack your point that that would be required for a more robust implementation.
> 3) For the next 8 bits, sample the input, and set the sampler timer for a
> full bit time (actually, if you are using a hardware timer for this, it
> should be set up to reset itself to minimize jitter).
I didn't set the timer for auto reload (don't ask why - it has to do with
8051 timers and how the others were being used), but I did calculate that
each countdown would be off by something like 7 microseconds per bit with
the system set up as it is. Not enough to cause problems @ 1200bps.
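For scale, a quick check of that figure (my arithmetic, not from the post):
at 1200bps one bit lasts 1/1200 s = 833us, so a 7us reload error is about
7/833 = 0.84% of a bit time, comfortably inside the few-percent budget
discussed later in this thread.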
> 4) For the final bit (at 9.5 bit times from the initial falling edge), check
> for a high stop bit. Clear the state machine ready for a new byte. Note
> that the whole process is finished half a bit time before the sender has
> finished sending the stop bit
Yes, exactly. Like I said above, it's more crude than that and nowhere near
as robust. This algorithm that I've implemented wouldn't work reliably in a
noisy(er) environment.

-->Neil
WARNING: long post with calculations to prove the points!

On Tuesday, in article <10i4f5eh141ob9e@corp.supernews.com>
     nb_no_spam@synthcom.com "Neil Bradley" wrote:
>"David Brown" <david@no.westcontrol.spam.com> wrote in message >news:cfsd24$r3s$1@news.netpower.no... >>> Very true. My only point was that everything that comes between the start >>> and stop bits are susceptible to cumulative error since there is no >> (hehehe) >>> sync point between data bits. >>> to generate the master clock. From there you can specify a divisor of >>> 1-65535. To get a 2400bps baud clock for example, you need to program up >>> 1536 master clock cycles per bit, or 1536*8=12288 master clock cycles of >>> cumulative error. Gets much, much worse as the divisor increases (and the >>> baud rate lowers). >> I know this is a pendant point, but if your misunderstanding of timing >> errors had been correct, it would be very relevant - when the 1.8MHz clock >> comes into the 16550, it is not divided by 16. It is divided by the >> programmable divisor to generate a baudx16 clock which is used for >> oversampling the input. Since the order of the divisors makes no >> difference >> here in the real world, it is easier to do the sums if you pretend the 16 >> is >> a pre-divisor. > >I don't think I ever stated that it did matter. However, imperically >speaking, there is a 16 bit divisor from the input clock. If you take a look
^^^^^^^^^^^^^^ Divide by 16 surely
>at the 6.0 RCLK description in the 16550 document it states:
>
>"RCLK, Receiver clock, pin 9: This input is the 16 x baud rate clock for the
>receiver section of the chip"
So it is a divide by 16. Nothing new there; that has been the de facto way
of doing asynchronous byte transmission/reception using start/stop framing
for decades.
>This indicates to me there is a 16 bit divisor from the clock which
>generates the baud clock (which is in turn fed to a divisor). The chip
>operates this way empirically as well. So I guess what you're saying is that
>it's really 16X the baud rate internally, and the sampling state machine
>uses that as a basis for adjusting when it samples for a 1 or 0. If this
>isn't entirely clear, please try to understand what I'm trying to say rather
>than blasting me for my words.
All UARTs (or other similar multi-mode devices) that use n-bit start/stop
byte transfers use a 16 * clock and have done for decades. They could use an
8 or 4 times clock, but as with everything, more accuracy comes from a
higher sampling multiple. See later.

Once a start bit edge is detected, 8 counts of the clock are used to sample
in the middle of the bit, to allow for slew and other characteristics of
line transmission, as the line could be VERY long or lossy. If the level is
0 then a start bit is detected; 16 clock counts later the first bit is
sampled, and so on for the number of bits being sent. After the expected
number of bits (data and parity) has been received, the stop bit is sampled
to make sure it is 1 for the number of stop bit times expected. Failure to
see the stop bit at 1 sets the framing error flag/bit for the device.
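As an illustration of the scheme Paul describes, here is a minimal sketch in
C of a 16x-oversampled receiver (not from the thread; sample_tick() is
assumed to be called at 16x the baud rate, and rx_level(), byte_received()
and framing_error() are hypothetical hooks). Real UARTs also take three
samples around mid-bit and majority-vote; that is omitted here for brevity.

#include <stdint.h>

extern int  rx_level(void);             /* reads the line: 0 or 1 */
extern void byte_received(uint8_t b);
extern void framing_error(void);

static enum { HUNT, CHECK_START, SHIFT_DATA, CHECK_STOP } st = HUNT;
static uint8_t cnt, nbits, data;

void sample_tick(void)                  /* called every 1/16 bit time */
{
    switch (st) {
    case HUNT:                          /* look for the start bit edge */
        if (rx_level() == 0) {
            cnt = 8;                    /* 8 ticks to the middle of the bit */
            st = CHECK_START;
        }
        break;
    case CHECK_START:
        if (--cnt == 0) {
            if (rx_level() == 0) {      /* still low mid-bit: real start bit */
                cnt = 16;
                nbits = 0;
                data = 0;
                st = SHIFT_DATA;
            } else {
                st = HUNT;              /* a glitch, not a start bit */
            }
        }
        break;
    case SHIFT_DATA:                    /* sample each data bit mid-bit */
        if (--cnt == 0) {
            data >>= 1;
            if (rx_level())
                data |= 0x80;
            cnt = 16;
            if (++nbits == 8)
                st = CHECK_STOP;
        }
        break;
    case CHECK_STOP:                    /* stop bit must read back high */
        if (--cnt == 0) {
            if (rx_level())
                byte_received(data);
            else
                framing_error();
            st = HUNT;
        }
        break;
    }
}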
>But the net effect is still the same even if the internal description isn't
>right. The lower the baud rate, the worse the cumulative error becomes.
By using a 16 * clock, the cumulative error is NOT the problem; it is the
drift of the actual bit rate clock at the receiver compared to the
transmitter, as a PERCENTAGE, that matters. Using a 16 * clock means you
have more chance of sampling at the middle of the bit time for each bit than
using a 1 * clock to sample. It is the drift of this sample point that
matters with respect to the actual transmitted clock rate, NOT the expected
clock rate.
>> You are correct that at 2400 baud there will be 24 times as much cumulative
>> absolute timing error from the clock as there will be at 57600. However,
>> each bit is 24 times as long, so the percentage error is exactly the same.
>
>For each bit, yes, but I never said it wasn't.
The cumulative error of the clock over a whole start/stop data frame must
not be such that when sampling what is thought to be the last bit (STOP),
you are actually sampling the parity or last data bit, or conversely, when
sampling what is expected to be the parity or last data bit, you are
actually sampling the STOP bit. This is still an error for EACH BIT, unless
you expect your clock to drift wildly from bit to bit.

To stop drift from bit to bit, the 16 * clock gives you this as a benefit,
and dividing the master clock down to the 16 * clock gives you even more
benefit of less drift from bit to bit. So the cumulative error for a FRAME
becomes a function of bit to bit timing, as each bit width can only vary by
1/16 * 1/(master clock divisor) * master clock drift.

For the frame cumulative error to happen, the 16 * clock must not drift more
than 1/(n bits) MAXIMUM from the transmitted clock rate. In a 10 bit frame
(8 data, 1 stop, 1 start) that means 1/10 = 10%; to achieve this, both ends
must not drift by more than 5%, which is the bit rate PERCENTAGE error. Most
people will design for total system drift (of both ends) way under 5%,
usually 2.5%, to give better safety margins and reliable operation.

So if you had a 20MHz master clock divided down to give the 16 * clock for
2400 baud:

           20,000,000
    2400 = ----------      X in this instance is 520.833333
             16 * X

So the error is 1/8336 * master clock drift, +/- the difference between
520.833333 and the expected divisor of 521, which gives two error factors:

1/ 0.032% error in the bit rate clock for the receiver (521 divisor) due to
   integer rounding of the divisor

2/ the 1/8336 (0.011996161%) scaling of any drift in the master clock

Unless your clock drifts MASSIVELY this will NOT be a problem. The main
thing is to ensure that you start with a fast clock and divide it down a lot
to give an accurate bit time. If you use a lower master clock frequency, the
percentage error for drift goes up.
>> If the 1.8 MHz crystal is 2% out, then your 2400 baud bit time will be 2%
>> out, just like your 57600 baud bit time. In absolute terms, measured in
>> microseconds, it will be longer, but THAT DOESN'T MATTER. It is only the
>> relative error that makes any difference.
For 1.8MHz the calculation is

              1,800,000
    divisor = ---------- = 46.875     you would use 47
              16 * 2400

The bit-to-bit jitter contributed by the divided clock is at most one
16 * clock period, i.e. 1/(16 * 47) of a bit time, which is 0.13298%, so
that effect is NEGLIGIBLE in this context. The difference due to integer
rounding of the divisor is more pronounced, as

               1,800,000
    bit rate = ---------- = 2393.617      an error of 0.26595%
                16 * 47

So at 2% drift the bit rate clock has changed from the 2393.617:

    +2%  1.836MHz giving a bit rate of 2441.489 using the same divisor,
         a 1.7287% total error for the receiver

    -2%  1.764MHz giving a bit rate of 2345.745 using the same divisor,
         a 2.2606% total error for the receiver

Both of these errors are below the 1/(n bits) max error for a 10 bit frame
of 10% for the whole system, or 5% for one end; in fact both are under the
2.5% margin IF the other end does not drift too far.

If the other end uses the same setup the TOTAL SYSTEM drift is

    clock drift    bit rate drift
    +2%            2 * 1.7287% = 3.4574%
    -2%            2 * 2.2606% = 4.5213%

Neither of these looks like it will realistically cause a problem.
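Those figures can be reproduced with a few lines of C (a sketch under the
same assumptions as above: a nominal 1.8MHz clock, a divide-by-16 stage, an
integer divisor, and +/-2% oscillator drift):

#include <stdio.h>

/* Check the divisor/drift arithmetic: bit rate = f / (16 * divisor),
   with the divisor rounded to the nearest integer for 2400 baud. */
int main(void)
{
    const double f_nom = 1800000.0, target = 2400.0;
    const int divisor = (int)(f_nom / (16.0 * target) + 0.5);   /* 47 */
    const double drift[] = { 0.00, +0.02, -0.02 };

    for (int i = 0; i < 3; i++) {
        double rate = f_nom * (1.0 + drift[i]) / (16.0 * divisor);
        printf("drift %+.0f%%: bit rate %8.3f, error %+.4f%%\n",
               drift[i] * 100.0, rate, 100.0 * (rate - target) / target);
    }
    return 0;
}

This prints -0.2660% for the divisor rounding alone, +1.7287% at +2% drift
and -2.2606% at -2% drift, matching the working above.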
>Again, I never said it wasn't.
>
>> This appears to be one of these "concept block" things that we all get on
>> occasion, when we are confident of something that everyone else thinks is
>> wrong.
>
>No, this is one of those communication things where if I'm not 110% clear,
>people jump all over me rather than realizing from my descriptions that I
>really do have a clue what I'm talking about but may not be describing it
>well. My problem is not one of lack of understanding, but rather not being
>detailed and clear enough in describing everything I'm thinking.
No, it is a concept block about forgetting the effects of dividing down a
faster clock to get a more accurate slower clock. Consider the problem of
cheap digital clocks/watches that use a 32.768KHz oscillator, built from the
cheapest possible components but still only drifting a few seconds a month
or even a year. The second timing is produced by dividing the 32.768KHz down
by 32768 (15 bits) to produce the 1 second pulse; the divider chain itself
adds essentially no error, so the 1 second pulse is as accurate as the (very
small) relative drift of the 32.768KHz crystal.
>I actually did develop modem and fax machine firmware for a couple of years,
>doing async and sync communication, so I really do have relevant experience
>even if my vernacular isn't 100% to spec.
Revisit the problem by actually doing the calculations from the oscillator
downwards for drift and variances like integer rounding of divisors.

--
Paul Carpenter | paul@pcserv.demon.co.uk
<http://www.pcserv.demon.co.uk/> Main Site
<http://www.gnuh8.org.uk/> GNU H8 & mailing list info.
<http://www.badweb.org.uk/> For those web sites you hate.
"Neil Bradley" <nb_no_spam@synthcom.com> wrote in message
news:10i4f5eh141ob9e@corp.supernews.com...
> And the more I post, the more people pigeonhole me and look for things wrong
> in my statements and write me off as an idiot, insult me, or look for ways
> to shift the conversation to continue to make me look wrong about something.

I think the problem was that you bastardized the synchronous/asynchronous
terminology and then tried to defend it instead of just going... whoops, I
fouled that up. A UART doesn't use "synchronous clocking", it uses clocks
that are within a frequency tolerance in order to communicate.
Thanks everybody for the inputs. Sorry if I started a fight over the
synchronous / synchronized... wording, that was not my intention. In the
end I have the feeling that most violently agreed ;-)


There were recommendations to fine tune the devices if possible. I
have seen this option for some architectures, e.g. Microchip or
Philips, but it does not really help in many applications if the other
communication partner does not provide data to adapt one's own
frequency to (did not want to use the sync word). Also, software to
re-measure the baud rate (a sort of autobauding) would be needed; using
a timer to measure the duration of the data byte would probably be the
best method.

My goal is to have it simple and not to trim while communicating, and as
I understand the different on-chip oscillators, this is not possible
with many of them but it is possible with some others. There are
devices from Silabs that fit the requirement for accuracy but not the
requirement for my BOM (a "little" too expensive); then there are those
LPC900 devices, and one in particular looks ideal: the LPC916 with 2k
Flash, SPI, UART and I2C all on one low cost 16-pin micro. The only
possible catch is, there is no option to connect an external crystal.
If somebody knows about competitive devices with similar features and
<< $1 in 10k+ quantities, please let me know, because I have not been
able to find one and it is pretty much mandatory in our company to
present 2 REAL alternatives to the financial controller.

So, if you know about devices that offer a good internal osc., all 3
options for serial communication, 2k flash and an ADC, let me know;
if not, I guess I found the best solution already.

Cheers, Schwob
