PIC vs AVR vs ARM

Started by Miem October 2, 2006
To add to this question, I've been doing mostly AVR work and I'd like to
branch into ARM.  I had planned on just picking up some of Atmel's ARM
gear, but would anyone suggest any other first-leap-friendly ARM procs?
The Luminary has already been mentioned and seems interesting.


Jason
The place where you made your stand never mattered,
only that you were there... and still on your feet


Miem wrote:
> Hi All,
>
> As an amateur embedded circuit player, I have used a couple of AVR and
> PIC microcontrollers in the past.
>
> These days it is not too hard to find small, ARM-based, ready-to-use
> embedded boards under $100. They seem to have faster clock speeds than
> most of the AVR and PIC boards.
>
> Can anybody briefly compare ARM with PIC and AVR in terms of (a)
> performance, (b) software support, (c) price?
>
> Regards,
>
> Miem
linnix wrote:
>> Almost all ARMs have JTAG, so if you need OCD you lose multiple pins.
>
> That is the positive side of ARM: JTAG is always there, and reliable.
> AVR JTAG, on the other hand, can be disabled, and is thus unreliable
> by definition.
JTAG can be disabled in AT91SAM7 circuits as well. It is *MANDATORY* if you
want any type of code protection... (Boundary scan will still work, of
course.)

--
Best Regards,
Ulf Samuelsson
ulf@a-t-m-e-l.com
This message is intended to be my own personal view and it may or may not
be shared by my employer Atmel Nordic AB
Hi Miem,

In article <1159777724.426016.42900@b28g2000cwb.googlegroups.com>, 
MiemChan@gmail.com says...
> As an amateur embedded circuit player, I have used a couple of AVR and
> PIC microcontrollers in the past.
>
> These days it is not too hard to find small, ARM-based, ready-to-use
> embedded boards under $100. They seem to have faster clock speeds than
> most of the AVR and PIC boards.
>
> Can anybody briefly compare ARM with PIC and AVR in terms of (a)
> performance, (b) software support, (c) price?
Unless the project requires it, I would say stick with an AVR (my first choice). I've finished one project using some AVRs and now I'm attempting to use an NXP/Philips LPC2103. I went with the LPC2103 mainly because it has a fast A/D and it's inexpensive. I've worked with 32-bit processors on other projects, including ARMs. Here's my lengthy comparison of AVR vs. ARM development...

For the AVR I use CodeVision, and from a user perspective I find it to be a very good compiler. The peripheral wizard in CAVR is *very* handy -- you can start using a peripheral very quickly, without having to remember the sometimes complicated initialization sequence or register settings. When you're done compiling, CAVR gives you useful information on RAM and Flash utilization. I use UltraEdit32 for writing code, so I didn't use CAVR's IDE that much, but I found it sufficient.

I did debugging using the Atmel JTAG ICE mkII with AVR Studio and debugWire. I didn't think it would work very well, but surprisingly I have very few complaints. The debugging capability of the new AVRs (JTAG or debugWire) is quite good, single-stepping is very fast (you hit a key, it steps instantly), and overall AVR Studio worked well. You can do all the standard things you want: look at registers and memory locations, watch variables, etc. Since AVR Studio is written by Atmel, you get views of peripheral registers which are named, with their port bits broken down, and you can toggle the bits as you see fit. There are some rough spots (my major gripe: enabling/disabling debugWire should be done automatically when you go into programming mode or debug mode). CAVR also has some nice extensions, like PORTC.3 = 1 meaning bit 3 of port C is set to 1. Those kinds of extensions, I found, are very handy in embedded programming.

Contrast this with my current setup for the LPC2103. I am using the GNUARM toolchain (thanks Rick/Pablo/everyone else who put it together), which in itself works. I followed a tutorial written by Jim Lynch which shows how to get GNUARM, the Eclipse IDE and the OpenOCD GDB daemon all working together. I have an existing piece of JTAG hardware that works with OpenOCD, so I didn't have any additional hardware costs.

With ARM development you'll have to choose between sticking your code in Flash ROM and executing from there (can be slower, but usually more code space) or putting it in RAM (not much room). This is a limitation of working with a CPU vs. a microcontroller. A big deal for ARM7TDMI devices is that they have only two hardware breakpoints. If you want to single-step code that is in Flash, that requires both hardware breakpoints, so with open source tools you can almost forget about single-stepping and setting meaningful breakpoints at the same time. If you want software breakpoints, you'll need to stick your code in the limited RAM. That is a big tradeoff: the LPC2103 has 32 KBytes of Flash but only 8 KBytes of RAM.

Getting GNUARM + Eclipse + OpenOCD working is a time-consuming setup, in my opinion. The compiler works, but you'll spend a decent amount of time mucking with C run-time files (crt0.s), assembly initialization code, linker scripts and other things. Thankfully the LPC2000 forum on Yahoo has some pre-existing examples you can use as a starting point.

Eclipse has (in my opinion) an overly complicated user interface that can be quite slow and unresponsive at times. It seems very customizable, but if you start digging you'll find you can't streamline it much. Using the Eclipse IDE for writing code works OK, but using the "Zylin Embedded CDT Debugger" is not a pleasant experience (at least with OpenOCD); I found it very unreliable. I have since switched to the Insight debugger, with my code executing from RAM. Insight works OK, but single-stepping takes 4-5 seconds per step! The AVR setup single-steps instantly (or so it feels). Insight of course has no knowledge of the chip's peripherals, so if you want to twiddle enable bits or look at peripheral settings, you'll have to dump the memory location and work backwards.

So on paper one of these ultra-cheap ARM "microcontrollers" looks good, but I think you'll find there's a decent-sized leap to get it going. I had been thinking of using these ARM parts in some personal projects, but for now I'm sticking with the AVRs.

Someone might be quick to point out that a commercial compiler would work better and that it is unfair to compare CAVR, a commercial compiler, to the free GNU toolset. That might be true, but commercial ARM compilers are usually more than a few hundred $$ and they usually only work with their own JTAG debug tools, so you're very quickly locked in. Many of the commercial ARM toolchains (Keil and Rowley, for example) are based on the GNU toolchain, so all of those limitations come along for the ride.

My $0.02,
John
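For readers who haven't met the CodeVision extension John mentions, here is what it looks like next to portable C. This is a minimal sketch assuming avr-gcc and its <avr/io.h> register names; the dotted bit syntax compiles only under CAVR, so it appears here in comments.

#include <avr/io.h>

void led_init(void)
{
    /* CodeVision extension:  DDRC.3 = 1;  PORTC.3 = 1;      */
    /* Portable equivalent: set bit 3 with an explicit mask. */
    DDRC  |= (1 << 3);    /* make PC3 an output */
    PORTC |= (1 << 3);    /* drive PC3 high     */
}

void led_off(void)
{
    PORTC &= ~(1 << 3);   /* clear bit 3, leave the other bits alone */
}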
Buddy Smith wrote:
> steve <bungalow_steve@yahoo.com> wrote:
>
>> AVR and PIC aren't really comparable with ARM; the first two are very
>> low cost/power 8-bit machines, the ARM is a higher power, higher cost
>> 32-bit machine. If you need to make a device that runs on a coin cell
>> for 2 years, you can't pick an ARM processor; if you need a CPU that
>> can do real-time FFT, a PIC won't do it.
>
> I thought so too, but the products from Luminary Micro
> (luminarymicro.com), discussed in this newsgroup recently and in
> Circuit Cellar, have changed my mind.
>
> They make ARM CPUs with very little RAM and flash, on the cheap...
> they say less than one dollar in 10k quantities (from an advertising
> spiel).
LMI make Cortex chips which are incompatible with most of the others. Apparently they are financed by ARM themselves. I guess that is one reason why the uptake is not dramatic.
> ttyl,
>
> --buddy
--
Best Regards,
Ulf Samuelsson
ulf@a-t-m-e-l.com
This message is intended to be my own personal view and it may or may not
be shared by my employer Atmel Nordic AB
Ulf Samuelsson wrote:

> LMI make Cortex chips which are incompatible with most of the others.
> Apparently they are financed by ARM themselves.
> I guess that is one reason why the uptake is not dramatic.
Hi Ulf,

You might have been seriously misinformed :-) LMI is not financed by ARM.
We are two different companies, and LMI is an ARM partner.

The definition of incompatible is a bit unclear. Like any Cortex-M3 chip,
the Luminary Micro Cortex-M3 chips are not binary compatible with
traditional ARM processors. The Thumb instructions are the same (except
the BLX and SETEND instructions), but startup code, interrupt handlers and
system control code (e.g. mode switching) have to be rewritten. However,
application code developed for Luminary Micro parts will work on any other
Cortex-M3 part (of course some code might need to be changed if the
peripherals / memory map are different).

regards,
Joseph
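To make the "startup code has to be rewritten" point concrete: on a Cortex-M3 the table at address 0 is plain data (the initial stack pointer followed by handler addresses), so interrupt handlers can be ordinary C functions, while an ARM7TDMI expects executable branch instructions at its vectors plus assembly wrapper code. A minimal sketch, assuming a GNU toolchain and a linker-script-provided _stack_top symbol (both assumptions, not from any particular vendor's startup file):

/* Cortex-M3 style vector table -- a sketch, not a complete startup file. */
extern unsigned long _stack_top;   /* assumed: defined by the linker script */

void reset_handler(void);
void hardfault_handler(void);

__attribute__((section(".vectors")))
void * const vector_table[] = {
    &_stack_top,          /* word 0: initial main stack pointer        */
    reset_handler,        /* word 1: reset entry -- a plain C function */
    0,                    /* NMI, unused in this sketch                */
    hardfault_handler,    /* hard fault                                */
    /* ... remaining system and peripheral vectors ...                 */
};

void reset_handler(void)     { for (;;) { /* init, then call main() */ } }
void hardfault_handler(void) { for (;;) { } }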
John wrote:
> Hi Miem,
> *snip*
I guess I should add my $0.02 as well. I did not find the transition from the PIC/8051 MCUs I was working with before to ARM chips to be very difficult at all. Yes, I had to write my initialization code and the linker scripts, but they are quite easy to learn. At first I was scared by linker scripts, because every time I opened one up I'd be like "what the hell is this?", but after learning the syntax it's not so bad.

I am working with the AT91SAM7S256, which is a pretty pleasant chip to work with. I did also read the tutorial, but I didn't read through all of it. Eclipse is damn terrible: it consumes a large amount of memory (seriously, on my system it consumes almost as much physical memory as that FEAR game) and is very slow. Since I am working on a VERY limited budget, I use Crimson Editor to edit and compile my code and then use Insight to debug it. For me it's simple: press Ctrl+2 to do a make clean and Ctrl+1 to build the source into both an ELF and a binary.

I'd say learn it, because there might come a time when you need a 32-bit MCU, and you don't want the additional burden of learning it then. Also, if you are now working with the 8-bit AVR, why not try the MSP430 as well? I have a cheap board with one that is powered by a watch battery and it keeps going (of course the CPU is running off the internal DCO, which is only around 800 kHz).

-Isaac
steve wrote:
> Buddy Smith wrote:
>
>> I thought so too, but the products from Luminary Micro
>> (luminarymicro.com), discussed in this newsgroup recently and in
>> Circuit Cellar, have changed my mind.
>>
>> They make ARM CPUs with very little RAM and flash, on the cheap...
>> they say less than one dollar in 10k quantities (from an advertising
>> spiel)
>>
>> ttyl,
>>
>> --buddy

Yes, but they are very high power -- I think 10x the power of the AVR at
1 MHz, if I remember correctly.
I think you are mistaken. If you compare the ARM MCUs at the same frequency that the AVR runs, you will see that the power for the ARM can be lower than for the AVR. That is one of the big reasons that we recently used an ARM in a new design in place of the AVR which we have typically used in the past. It may be that in the smaller configurations an AVR can run at much lower power, but if you are comparing apples and not oranges, I think the ARM chips can keep up with most 8 bit parts in terms of power.
We have used AVR MCUs in many of our products and were very happy with
them.  On a new project I decided to take a look at the ARM MCUs to see
if we could branch out from some of the limitations of the AVR.  We did
a very exhaustive comparison between the various ARM processors and the
ATmega128 and found that the ARM chips were generally lower power,
lower cost and fit in a smaller footprint on the board.  We also were
able to use a much smaller crystal.

The ARM we chose for this project was the AT91SAM7S64 due to its
combination of low cost and low power.  The Philips parts seem to run a
close second and may even beat the Atmel SAM7 parts depending on
exactly the combination of features you need.  If you don't need the
lowest power, then the other brands of ARM chips could be considered: ST
Micro STR7, TI TMS470 and Analog Devices ADuC7, among others.

Did you check out the feature comparison chart at www.gnuarm.com?
Click to the Resources page and scroll down to the ARM chips section
where you will find three different links for the comparison chart.



Jason wrote:
> To add to this question, I've been doing mostly AVR work and I'd like to
> branch into ARM. I had planned on just picking up some of Atmel's ARM
> gear, but would anyone suggest any other first-leap-friendly ARM procs?
> The Luminary has already been mentioned and seems interesting.
> *snip*
rickman wrote:

> I think you are mistaken. If you compare the ARM MCUs at the same
> frequency that the AVR runs, you will see that the power for the ARM
> can be lower than for the AVR.
It depends a lot on how fast you run them, but the ARMs always use more
power per MHz. The AVR is an 8-bit device that can operate down to 1.8 V;
the ARM is a 32-bit device that requires 3.3 V, so it's obvious which is
going to use less power (assuming all else is equal: process, I/O, RAM,
Flash etc.). Looking up a couple of datasheets:

Analog Devices ADuC7021: 7.2 mA @ 1.3 MHz (typical)
Atmel ATmega164: 0.4 mA @ 1.0 MHz (typical)

At higher speeds the ARMs don't have as bad a mA/MHz ratio:

Luminary Micro LM3S101: 35 mA @ 20 MHz (typical, running out of SRAM,
no active peripherals)
Analog Devices ADuC7021: 33 mA @ 41 MHz (typical)

> That is one of the big reasons that we recently used an ARM in a new
> design in place of the AVR which we have typically used in the past.
Which ARM and AVR did you compare? At what speed?
> It may be that in the smaller configurations an AVR can run at much
> lower power, but if you are comparing apples and not oranges, I think
> the ARM chips can keep up with most 8 bit parts in terms of power.
You can make the argument that for math-intensive applications the ARM
executes the work much faster, so it only needs to be awake for a much
shorter period and uses less total energy that way. Was that how you did
the analysis? The AVRs also have much better power-down and sleep-mode
currents, which may or may not be important for your application.
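To make that argument concrete, here is a rough charge-per-job estimate using the datasheet figures quoted above, with an assumed job of 100,000 cycles at one cycle per instruction (both assumptions, not measurements):

ATmega164 at 1.0 MHz, 0.4 mA: 100 ms per job -> ~0.040 mA·s
ADuC7021 at 41 MHz, 33 mA: ~2.4 ms per job -> ~0.080 mA·s

On these numbers the AVR still wins on charge, and its lower supply voltage (1.8 V vs. 3.3 V) widens the gap further in energy terms. The race-to-sleep case only flips if the 32-bit core needs several times fewer cycles for the same job (plausible for 32-bit math) and the sleep current is low enough that the extra idle time is nearly free.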
Take a look at:
http://www.netburner.com/products/core_modules/mod5213.html

For $99.00 a 32 bit dev kit. Faster and more capable than most of the
small ARMs.




Paul

"An Schwob in the USA" <schwobus@aol.com> wrote in message 
news:1160196565.981433.178410@i42g2000cwa.googlegroups.com...
> On Oct 3, 1:56 am, Joseph <joseph....@somewhere-in-arm.com> wrote:
>> LMI is not financed by ARM. We are two different companies, and LMI
>> is an ARM partner.
>
> Sure, different companies, but almost as sure ARM puts a lot of money
> into Luminary. How could they otherwise survive?
ARM did invest a small amount, but only a tiny fraction of what a startup
needs. They can survive because they will make money once they get to
volume production. Even with a tiny profit per chip, selling millions of
them means lots of money.
> Having devices that use the lowest power ARM CPU, yet they are much
> higher current than SAM7 or LPC2000.
The initial devices are on an older process than the Atmel and Philips devices. Even so, since the M3 is a lot faster, it doesn't need to run at the same frequency as other MCUs so it may actually use less power.
> The marketing gimmick of the 99 cent device, which I have yet to see
> available anywhere, is a 20 MHz max 28-pin device. You get an LPC2101
> for $1.47 @ 10k that runs 3.5 times faster and uses a lot less current.
You're falling into the MHz = performance trap. An M3 is about twice as
fast as ARM7 MCUs running Thumb code (it runs Thumb-2 code with the same
efficiency as an ARM9 running ARM code). That means you don't need to run
at a high frequency to get the same performance, and you use less power
due to the lower frequency. A 20 MHz Cortex-M3 is faster than just about
all existing 8/16-bit CPUs, including, for example, a 100 MHz single-cycle
8051. And the 50 MHz version outperforms current ARM7 MCUs by a huge
margin.
> This company (Luminary) can not really make any money with devices
> that lack performance, are high power and can not keep up with the
> wide range of available ARM7 devices, so they must be sponsored, and
> who but ARM would be interested in financing it?
Maybe it's a conspiracy?
> Talking about ARM partners, the only thing that I see Luminary
> attacking is other ARM partners (cannibalism!?)
On the contrary, Luminary is aiming their chips primarily at people who
upgrade from 8-bit and 16-bit chips when those run out of steam. Other
manufacturers can and do make cheap ARM MCUs; competition is good. What
evidence do you have that Luminary is "attacking" ARM partners?
> They should compare themselves to the PIC24, dsPIC, AVRs (sorry Ulf),
> ST10, HC12, S12, MSP430..... you name them.
Existing ARMs already outperform most of those by a large margin. In the
cases where they don't, the M3 does now - the M3 exists solely to beat the
existing 8/16-bitters in every respect. As it happens, it beats ARM7 as
well, but then again ARM7 is over 10 years old...

Wilco
An Schwob in the USA wrote:
> On Oct 3, 1:56 am, Joseph <joseph....@somewhere-in-arm.com> wrote:
>> Ulf Samuelsson wrote:
>>
>>> LMI make Cortex chips which are incompatible with most of the others.
>>> Apparently they are financed by ARM themselves.
>>> I guess that is one reason why the uptake is not dramatic.
>>
>> Hi Ulf,
>>
>> You might have been seriously misinformed :-)
>> LMI is not financed by ARM. We are two different companies, and LMI
>> is an ARM partner.
>
> Sure, different companies, but almost as sure ARM puts a lot of money
> into Luminary. How could they otherwise survive? Having devices that
> use the lowest power ARM CPU, yet they are much higher current than
> SAM7 or LPC2000.
You are assuming facts not in evidence. If you read their web site you will see that they just completed a second round of financing: "Luminary Micro closed its first private funding of $5M in February 2005 with lead investor EXA Ventures. Luminary Micro also closed a Series B of $14 million as announced on June 12, 2006."

I am sure they are getting a ton of support from ARM, and they may have gotten their initial funding in an indirect way from ARM. I have seen startups get free office space, free use of facilities and even the loan of personnel from interested parties without showing it as "financing". But I have no doubt that at this point LM is self-sufficient and will be turning a profit in the next year or so.
> The marketing gimmick of the 99 cent device, which I have yet to see
> available anywhere, is a 20 MHz max 28-pin device. You get an LPC2101
> for $1.47 @ 10k that runs 3.5 times faster, uses a lot less current
> at the same speed, offers more peripherals and a nice upgrade path that
> does not multiply the price just because the next device runs a
> whopping 25 MHz.
I don't agree that the $.99 LM3S101 is a "gimmick". At that price point there are any number of sockets this device can steal from the 8-bit world. I seem to recall a PIC clone that ran at 50 MHz and emulated a lot of peripherals in software. Clearly there is a place for higher horsepower at the low-price end of the market.

These smallest parts are clearly aimed at the automotive market. If you don't think this market is significant, ask TI (whose ARM parts were only for automotive customers until a year or so ago) and Micronas, who make several ARM parts that they target at the automotive world. Neither TI nor Micronas make low-power devices, since power is not the factor in this segment that it is in other markets.
> This company (Luminary) can not really make any money with devices
> that lack performance, are high power and can not keep up with the
> wide range of available ARM7 devices, so they must be sponsored, and
> who but ARM would be interested in financing it?
Of course they can make money. The claim of lacking performance is not in any way justified: their core can run at 50 MHz from Flash with no wait states regardless of the code being executed, which no other ARM part can do. The Philips parts use a lookahead buffer, but when a jump is executed you get a multi-clock-cycle delay while a new Flash access is started.

The high power claim is a bit exaggerated. LM parts use more power than the Atmel or Philips parts, but they are in the same ballpark as the ADI, ST and TI product lines. Are you suggesting that none of these four companies has a chance of competing in the ARM marketplace?
> Talking about ARM partners, the only thing that I see Luminary
> attacking is other ARM partners (cannibalism!?)
I'm not sure what that is supposed to mean. All ARM vendors compete against one another.
> They should compare themselves to the PIC24, dsPIC, AVRs (sorry Ulf),
> ST10, HC12, S12, MSP430..... you name them.
I guess you are referring to their marketing. My guess is that currently there is a lot of market focus on moving to the 32-bit world. There are any number of articles in the trade journals about how users are skipping 16 bits and moving directly from 8-bit parts to 32-bit devices. In that context the smart marketing is to use that momentum as leverage and show your advantages over the other 32-bit devices.

I guess your requirements exclude the LM ARM devices, but that does not mean they are making inferior products. Different customers have different requirements.

On Oct 3, 1:56 am, Joseph <joseph....@somewhere-in-arm.com> wrote:
> Ulf Samuelsson wrote:
>
>> LMI make Cortex chips which are incompatible with most of the others.
>> Apparently they are financed by ARM themselves.
>> I guess that is one reason why the uptake is not dramatic.
>
> Hi Ulf,
>
> You might have been seriously misinformed :-)
> LMI is not financed by ARM. We are two different companies, and LMI
> is an ARM partner.
Sure, different companies, but almost as sure ARM puts a lot of money into Luminary. How could they otherwise survive? They have devices that use the lowest power ARM CPU, yet they are much higher current than the SAM7 or LPC2000.

The marketing gimmick of the 99 cent device, which I have yet to see available anywhere, is a 20 MHz max 28-pin device. You get an LPC2101 for $1.47 @ 10k that runs 3.5 times faster, uses a lot less current at the same speed, offers more peripherals and a nice upgrade path that does not multiply the price just because the next device runs a whopping 25 MHz.

This company (Luminary) can not really make any money with devices that lack performance, are high power and can not keep up with the wide range of available ARM7 devices, so they must be sponsored, and who but ARM would be interested in financing it? Talking about ARM partners, the only thing that I see Luminary attacking is other ARM partners (cannibalism!?). They should compare themselves to the PIC24, dsPIC, AVRs (sorry Ulf), ST10, HC12, S12, MSP430..... you name them.

An Schwob
Everett M. Greene wrote:
> "Ulf Samuelsson" <ulf@a-t-m-e-l.com> writes: > [snip] >>> I did also read the tutorial but I didn't read through all of it. >>> Eclipse is damn terrible, consumes a large amount of memory (seriously, >>> on my system it consumes almost as much physical memory as that FEAR >>> game) and is very slow. >> I attended an Eclipse Seminar, and 1GB RAM is minimum >> and many need 2 GB to run properly. > > Good gawd, Gertie! What were the implementors using for > brains (presuming they had any)? The bloated size and > user testimonial above indicates that the implementors > have the intelligent level of a rock.
The implementors largely worked for large companies selling computer
hardware. They did their job.

Pete
rickman wrote:
> Terran Melconian wrote:
>> On 2006-10-05, Isaac Bosompem <x86asm@gmail.com> wrote:
>>> CBFalconer wrote:
>>>
>>>> A UART needs much better precision (and stability) than you can
>>>> expect from any r/c oscillator.
>>>
>>> Yes, from what I've read you only get an error margin of 1 or 2%. Not
>>> too big if you ask me.
>>
>> Assume a byte is 10 bits long (one start, one stop, no parity). If we
>> sample the bits in the middle of where we think they are, then by the
>> end of the byte we can be off by just under half a bit and still read
>> correctly. This looks like about 5% to me.
>
> That will work if the other end is timed perfectly, but in reality you
> need to split the timing error budget between sender and receiver. So
> that gives you 2.5% in an otherwise perfect world. So in the real
> world 2% is a better figure.
The theoretical limit (for 10-bit chars, including start and stop) is a 5% match. If each end is within 2.5%, you'll get a match - you don't need to chop off anything extra because of uneven splits. In fact, you can often *add* significant margin because of uneven splits: if one end is a PC, or other equipment with known tight margins, you might be confident of a 1% margin at the PC end, giving you 4% to play with at the embedded end.

However, there are other things that eat away at your margins. Assuming a balanced split, you need to aim for (0.5 / 10) / 2 = 2.5% for a half-bit error. If you want to take advantage of the majority-vote noise immunity feature of many UARTs, you want to be at most 7/16 of a bit out over a total of 9 + 9/16 bits (only half the stop bit is used), giving you 4.6% / 2 = 2.3% margins.

Then you need to look at your drivers, terminations, cable load, etc., and how they affect your timings. In particular, you are looking for the skew between the delays on a rising edge and the delays on a falling edge. These delays are absolute timings, independent of the baud rate and the length of each character. Thermal and voltage effects will also affect your timing reference, and can be considered as part of the same margin calculations.
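The margin arithmetic above packs into a one-line formula. A minimal sketch in plain C (nothing target-specific; the function name is mine) that reproduces the figures quoted in this thread:

#include <stdio.h>

/* Fraction by which the two clocks may disagree in total before the
   final sample point drifts out of its bit cell. */
static double uart_budget(double max_offset_bits, double span_bits)
{
    return max_offset_bits / span_bits;
}

int main(void)
{
    /* Ideal mid-bit sampling, 10-bit frame: 0.5 bit over 10 bits. */
    printf("simple:   %.1f%% total, %.2f%% per end\n",
           100 * uart_budget(0.5, 10.0),
           100 * uart_budget(0.5, 10.0) / 2);

    /* 16x UART with majority vote: 7/16 bit over 9 + 9/16 bits. */
    printf("16x UART: %.1f%% total, %.1f%% per end\n",
           100 * uart_budget(7.0 / 16, 9.0 + 9.0 / 16),
           100 * uart_budget(7.0 / 16, 9.0 + 9.0 / 16) / 2);
    return 0;
}

This prints 5.0% total / 2.50% per end for the simple case and 4.6% / 2.3% with the majority-vote sampling window.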
>> I wonder how exactly they arrive at 1% or 2%; do you know?
I'd say 2% for a simple RS-232 or RS-485 link that was not stretching speed or distance limits, and 1% when using optical isolation or cables with significant capacitance. YMMV, of course.
>> I've used the internal oscillator on an AVR and it usually worked
>> (except when it didn't, of course); good enough if it was only to be
>> used for debugging and turned off in the final product.
>
> Exactly. I was under the impression that on the newer parts internal
> oscillators were being trimmed and compensated to get under the 2%
> figure over temperature and voltage. But since I normally use
> crystals, I have not followed this closely.
If you have good control of the environment (temperature and voltage), and simple links, then 2% is good enough. If one end of the communication link has a more precise reference, then 2% is more than sufficient.
>> And the R/C oscillator is only useful in a small percentage of
>> applications where you don't need any more timing precision than
>> what is required to run a UART, and just barely that!
>
> A UART needs much better precision (and stability) than you can
> expect from any r/c oscillator.
Not when you can calibrate the R/C oscillator against a known source (like the incoming data on the serial port).

The LIN protocol for automotive applications was designed to allow the use of controllers running from an R/C oscillator. The protocol starts with a "wakeup" byte followed by a "calibration" byte, and after that the uC knows the bit time in clocks regardless of its current frequency.

So if you have a choice of protocol, and choose a protocol that requires a stable CPU clock, you just did not do your homework.

--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may, or may not be shared by my employer Atmel Nordic AB
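A sketch of that calibration idea: the LIN sync field is 0x55, which after the start bit puts five falling edges on the wire, two bit times apart, so the first-to-last spacing is exactly eight bit times. The helpers wait_falling_edge() and timer_now() below are hypothetical stand-ins for whatever input-capture mechanism the target MCU provides:

#include <stdint.h>

extern void     wait_falling_edge(void);  /* assumed: blocks until RX falls      */
extern uint32_t timer_now(void);          /* assumed: free-running CPU-clock timer */

uint32_t measure_bit_time(void)
{
    wait_falling_edge();                  /* start-bit edge of the 0x55 byte     */
    uint32_t t0 = timer_now();
    for (int i = 0; i < 4; i++)           /* four more falling edges follow      */
        wait_falling_edge();
    return (timer_now() - t0) / 8;        /* clocks per bit, R/C drift included  */
}

The result feeds the UART's baud-rate divisor, so the R/C oscillator only has to hold its frequency over a single frame rather than over temperature and lifetime.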
Terran Melconian wrote:
> On 2006-10-05, Isaac Bosompem <x86asm@gmail.com> wrote:
>> CBFalconer wrote:
>>
>>> A UART needs much better precision (and stability) than you can
>>> expect from any r/c oscillator.
>>
>> Yes, from what I've read you only get an error margin of 1 or 2%. Not
>> too big if you ask me.
>
> Assume a byte is 10 bits long (one start, one stop, no parity). If we
> sample the bits in the middle of where we think they are, then by the
> end of the byte we can be off by just under half a bit and still read
> correctly. This looks like about 5% to me.
>
> I wonder how exactly they arrive at 1% or 2%; do you know?
There are two ends. If each is off by 2% the disagreement is 4%. You also haven't considered the end effects of locating the middle of the bit sample. For a 16x master clock that position can be off by 6% all by itself, which is the equivalent of 0.6% at the 10th position. Then there is the quantizing error, due again to the period of the master clock. Draw timing diagrams to see all this.

--
Some informative links:
<news:news.announce.newusers>
<http://www.geocities.com/nnqweb/>
<http://www.catb.org/~esr/faqs/smart-questions.html>
<http://www.caliburn.nl/topposting.html>
<http://www.netmeister.org/news/learn2quote.html>
<http://cfaj.freeshell.org/google/>
Terran Melconian wrote:
> On 2006-10-05, Isaac Bosompem <x86asm@gmail.com> wrote:
>> CBFalconer wrote:
>>
>>> A UART needs much better precision (and stability) than you can
>>> expect from any r/c oscillator.
>>
>> Yes, from what I've read you only get an error margin of 1 or 2%. Not
>> too big if you ask me.
>
> Assume a byte is 10 bits long (one start, one stop, no parity). If we
> sample the bits in the middle of where we think they are, then by the
> end of the byte we can be off by just under half a bit and still read
> correctly. This looks like about 5% to me.
That will work if the other end is timed perfectly, but in reality you need to split the timing error budget between sender and receiver. So that gives you 2.5% in an otherwise perfect world. So in the real world 2% is a better figure.
> I wonder how exactly they arrive at 1% or 2%; do you know?
>
> I've used the internal oscillator on an AVR and it usually worked
> (except when it didn't, of course); good enough if it was only to be
> used for debugging and turned off in the final product.
Exactly. I was under the impression that on the newer parts internal oscillators were being trimmed and compensated to get under the 2% figure over temperature and voltage. But since I normally use crystals, I have not followed this closely.
Terran Melconian wrote:
> On 2006-10-05, Isaac Bosompem <x86asm@gmail.com> wrote:
>> CBFalconer wrote:
>>
>>> A UART needs much better precision (and stability) than you can
>>> expect from any r/c oscillator.
>>
>> Yes, from what I've read you only get an error margin of 1 or 2%. Not
>> too big if you ask me.
>
> Assume a byte is 10 bits long (one start, one stop, no parity). If we
> sample the bits in the middle of where we think they are, then by the
> end of the byte we can be off by just under half a bit and still read
> correctly. This looks like about 5% to me.
>
> I wonder how exactly they arrive at 1% or 2%; do you know?
>
> I've used the internal oscillator on an AVR and it usually worked
> (except when it didn't, of course); good enough if it was only to be
> used for debugging and turned off in the final product.
Here is the aforementioned document:

http://www.maxim-ic.com/appnotes.cfm/appnote_number/2141/

You are actually pretty close. I guess Intersil put in conservative numbers. Under less-than-harsh settings you get 3.3% according to the document.

-Isaac
On 2006-10-05, Terran Melconian <te_rem_ra_ove_an_forspam@consistent.org> wrote:
> On 2006-10-05, Isaac Bosompem <x86asm@gmail.com> wrote:
>> CBFalconer wrote:
>>
>>> A UART needs much better precision (and stability) than you can
>>> expect from any r/c oscillator.
>>
>> Yes, from what I've read you only get an error margin of 1 or 2%. Not
>> too big if you ask me.
>
> I wonder how exactly they arrive at 1% or 2%; do you know?
Search for Dallas/Maxim's app note 2141, "Determining Clock Accuracy
Requirements for UART Communications".

--
John W. Temples, III