I guess I'm out on a limb here, but since you're not involved
with the DSP or
FPGA, here we go anyway.
1. Any protocol session that "locks up" on missed bytes already indicates
very poor SW design and subsequent development.
Whenever I implement protocols (especially thru RF) it's always SW state
machine driven from the Physical layer up (even Media Access wise) - so you
can recover from ANY error.
2. "Locking" the DCO to "something" does work : I've done it to implement
the concept of a correlator to acquire a PN sequence, and then go to Slip
and Track mode, effectively despreading low chip rate Direct Sequence baseband.
The problem is that you need extra H/W, and you need to know if the PLL
is locked.
An externally injected current gives good analog control over the DCO freq.
The saving of current will be blown by the processing overhead in getting
the DCO on frequency (uAH wise). Don't need a slide rule to figure it out.
Why can't they just put a bloody crystal in ? Geez........
You can't do it internally because of the BRMOD issue, and as CP says the
taps on the resistors used to inject current come in steps that are too coarse.
Interpolation is fine averaged over time at lower freqs, but 115.2 won't
tolerate the "jitter" you're creating when you set the DCO.
The only way out is to approximate as fast as possible in setting the DCO,
and let it float for a bit.
At that moment there's no modulation or control on the DCO freq., but a
"free-wheeling" DCO won't stay on an accurate freq long enough.
3. RF unit.
Got a good idea.
Looked at your URL - let me guess :
It's running on a single frequency ? If that's so, forget it -
it won't happen anyway.
Squirting 115,200 bps into baseband for RF implies a high throughput on
the RF link.
At those rates multipath effects are pretty devastating without several
countermeasures.
I see this all the time, you won't believe what incredibly crappy
"RF units" are out there....
High time to delegate my domain, and release a couple of RF small OEM
boards I've done :-)
I hope I'm wrong, but if it's along that line - I expect your
company in about 3-4 months to be
looking for another RF unit.
Hum - not the first one, not the last one.
Some people only learn the hard way.
Sounds like a pooh !
Back to the drawing board.
Another example of poor system design - and SW (in the DSP, I mean).
If you are at liberty to discuss further, you can contact me off-line.
I'll give some input if you want Steve.
Have they checked if the RF is OK spur-wise with an FPGA and a DSP in there ?
I just don't get it - if the MSP430 takes care of the RF link control and
it sleeps most of the time, what are the FPGA and DSP doing there ?
A system like that needing every uA is an oxymoron.
Cheers Steve
Kris
----- Original Message -----
From: SBurck
To: msp430@msp4...
Sent: Monday, September 30, 2002 6:18 AM
Subject: [msp430] DCO Calibration
Lots of posts to comment on. First, one correction (and another to
follow): After the last round of cost cuts, we are using the F148,
not F149... and the main reasons we are probably sticking with it
are:
a) Our RAM usage is already around 1800 bytes.
b) Our Flash usage, including the FPGA data, is close to 40K.
FPGA data is about 80% of that, and we may never need the full
amount, but we still go back to point (a).
If I were to mention the brand or frequency of the RF unit, I'd
probably be saying more than the company I'm outsourcing for would
like, so I'll decline. The RF unit from the MCU's point of view is
simply : run the clock, turn on the power, tell the FPGA that it's
running, and every now and again shove some values into some
registers when the DSP wants to move it. It does what it's told,
and for me is pretty much a black box (I'm not doing either the FPGA
or the DSP code here).
If communications were all I had to do, then power management would
be very simple, however, the communication with the external device
only happens when I'm not busy slaving to the DSP, mostly involving
A/D and RF control, plus bookkeeping. When the DSP is happy and
asks to be put to sleep, then I shut him down (and the RF, if he
asks me to), and handle the communication tasks.
I looked at the code sample, it might work for us. We have Timer_A
operating continuously anyway, and could run this routine before
starting a protocol session. The question is, would it remain
stable for several seconds while using it as a baud rate generator?
The device I am talking to is very fussy, and missed bytes can cause
minor to serious problems (it has been known to freeze up when
confronted with certain protocol errors - something we cannot
allow). Kris's speculation of once a second is a problem - we
sometimes maintain protocol sessions for 5 or 6 seconds.
Our current MCLK is the uncalibrated DCO in one version, the
4.098MHz in another. We're trying them both for now. One possible
problem and my second correction - the uncalibrated DCO version - I
mentioned in a previous post that it is operating at ~1.3MHz - in
fact, it is operating at ~3.2MHz, which is at the top of the range
for the DCO (DCOCTL set to 111xxxxx, RSEL = 7). This won't work
with the algorithm shown in the code sample - it must be a lower
value that can be gotten to from both sides. I haven't done any
performance testing yet, but our code has never run at a slower
clock than the 3.2MHz.
I think I've covered most of the comments made by Kris and CP...
I'll look into the performance issue, and get back in a couple of
weeks (unless more posts come before then).
Steve
Reply by CP●September 29, 2002
Yes I continue to agree with you Kris. The idea of running 115.2k on
the DCO bothers me in many ways ... I think even in the new F4XX parts
the H/W FLL would be hard pressed to do this accurately (it comes
down to the resolution of the DCO taps).
In one of his posts he mentions that all this effort is in the name
of cost-savings. But then why use an F149? It's the most expensive of
that series, and he doesn't mention a need for TimerB, a 2nd USART,
or H/W MULT. So moving to a smaller version could save dollars.
Then this whole clock issue is a wash since a crystal could then be
added which simplifies the firmware in many ways ... and still allows
for ACLK LPM3 sleep, DCO Ints (cal'd or uncal'd), and XT2 based
communications.
Your DCOR (Rosc) idea sounds interesting, I'm assuming you mean to
send MCLK (or SMCLK) out then back around into Rosc. I think the
loss of pins P5.4 and P2.5, and the sensitivity of this feedback circuit
would make it not worth exploring. Most people would throw money at
the problem (either change micros, or add a crystal) rather than
start applied theory research in the middle of a project.
CP
Reply by Kris De Vos●September 29, 2002
> What I didn't see in your description was the MCLK speed you are using
> and its source. Are you using the DCO uncalibrated or just ACLK?
> What power modes are you bouncing in and out of?
> Running the USART from the DCO at that speed (115.2k) will require
> some analysis.
Exactly.
The DCO will stay stable enough, and it won't require re-calibration that
"often"
(but I would speculate at least every second).
The main issue is/was the Baudrate vs. ACLK.
115,200 will need a "fractional" divisor.
Now,
you can think of it as an FLL, but it's not, nor a PLL - simply because
there is no LOCK (well, not in the F149).
Using BRMOD relies on varying the integer setting between 2 values.
The ratio of N1 divisions to N2 divisions creates a "fractional"
value, but it's still discretely stepping between the 2.
Unless you have averaging on it, the 32.768 kHz "multiplied up" to
115,200 (57.6 kHz in NRZ) will create bad BER.
The only solution would be to "smooth it out", which you can't do
because you're in the digital domain and you want low power.
The way I would do it, I reckon, is to use DCOR - and inject current thru a
2nd order low pass filter.
The "lead lag" pulses would have to come from a XOR function acting as
a phase
comparator, thus creating a HW PLL.
That might work ...
You should look at the maths of it.
Reply by CP●September 29, 2002
The example I mentioned doesn't seem to be available, but on the TI
site there is another one available. Again it's for a 430x110, but it
can be changed to suit ... I think someone posted their own version
which uses TimerB a week or two ago.
http://focus.ti.com/docs/analog/catalog/announcements/brc.jhtml?path=templatedata/cm/brc/data/20011114msp430codesamples&templateId=1
MSP-FET430X110 "C" Examples slac011x.zip (35k)
fet110_fll.c - BasicClock Implement Auto RSEL SW FLL
What I didn't see in your description was the MCLK speed you are using
and its source. Are you using the DCO uncalibrated or just ACLK?
What power modes are you bouncing in and out of?
Running the USART from the DCO at that speed (115.2k) will require
some analysis.
CP
Reply by Kris De Vos●September 29, 2002
Oh,
one more thing.
When you mention "huge buffers", a few things come to mind :
1. Don't tell me you're using Mobitex ....... or CDPD etc.
2. The really confusing part is that your explanation implies that you
transmit event-driven on a battery powered device, thus reducing power is
a breeze
- OR - that you're doing acquisition/logging and need high processing power.
I assume in such a system a proprietary FPGA IP core deals with major
issues regarding "chewing over" raw acquisition data - or some other
function with real-time needs that calls for HW.
3. Don't tell me your RF unit is a M***N, or one of those.
I've had a client in New Zealand that was using that kind of stuff.
You couldn't even change the channels, they were in OTP.
I ended up "massaging in" a new MCU and writing the whole
(commercial) transceiver's
firmware for the sake of someone needing slightly different channels.
I'm clearly not using TMs or brands, but boy, there's some
gutter-crap quality TRXs out there
for digital links.
People forced to pay big $$$ for shitty RF units.
The good quality ones use like 100-200 mA for short range and cost a
fortune.
It actually annoys me, a bit like the EW430 "ransom" syndrome vs.
the new emerging wave of lower cost C compilers for the MSP430.
What range do you need from the RF, what freq, what raw datarate ?
FHSS ? DSSS ?
Hybrid ?
If you need high datarates you no longer need to go for 8-chip expensive
"solutions"
Rgds
Kris
Reply by Kris De Vos●September 29, 2002
Hi Steve,
Thank you in return for elaborating somewhat more, as your project seems very
similar to what I did a while ago (the principles, and challenges you face).
We all love a challenge, and when you talk RF and squeezing every uA out of your
system, I'm all ears.
I am very intrigued by your system description, and I think (hope) that some
further
discussion of concepts could benefit/inspire a few people here.
I hope you understand that it is hard for both parties on a forum to assess at
times.
After I posted, I wondered in hindsight whether you had some other form of
system
that had a "clock" (well - say a Colpitts oscillator etc) for analog
or RF,
since you mentioned buffering.
Your advice cuts through confusions, so I know I can offer some suggestions too
without
confusing you, as your overview demonstrates a thorough knowledge of your field.
Therefore I guess it's safe to assume :
1. Your "RF clock" is low in signal (probably a few 100 mV), hence
the "buffering" which
I couldn't figure out.
2. If (1) is true, then it indeed is very wise to derive the MCU clock from
that oscillator, and not the other way around - although it depends on the
design of the RF unit, its BER, type of modulation, freq. band, freq. of IF
etc etc. (If you have IF, it sounds like you could be doing IF processing
in a DSP or something like that - if so, I'd like to talk to you
off line ! :-) :-) )
3. This part is confusing :
The RF unit ->
The inability to change datarate is either the minimum throughput you
need -or- you're buying the
unit.
The bizarre thing is that the access to the "clock" of the RF
implies your own design, or semi-custom.
I assume you're in the USA, so no threat to my territory :-)
(Although I do get quite a few US companies querying for RF design
+ MCU + SW etc.)
Are you at liberty to discuss the RF unit ?
You mention "client", which implies you buy it, not your IP.
The average price for "black box" RF units from the US is like USD400,
and they are low in performance.
The high performance RF "Lego Blocks" tend to cost > USD 800-1,000.
Funny, because you can build them for about USD 30 - 50 with today's
stuff.....
Have you considered converting the async 115,200 stream to 8 bit strobed and
ACKed transfer in the FPGA ?
Surely there's some CLBs left ?
The notion of saving a few uA of course is silly when you have hungry DSPs and
FPGAs (mind you, Quick logic's
getting pretty low in power !), so I assume that the "UNIT" has to
account for unsolicited data ?
It's confusing, I can tell you have a good lateral approach, but are either
:
- Victim of a typical syndrome : Held to ransom with a "drop in" RF
solution (that's not really a solution)
- other possible complications that could compromise your IP.
It would be great if you could elaborate about the RF unit........
I'd certainly like to help - suggestions - whatever. Not necessarily for
commercial gain
(Maybe just to show we're not stupid here in Australia....... :-)
----- Original Message -----
From: SBurck
To: msp430@msp4...
Sent: Monday, September 30, 2002 1:10 AM
Subject: [msp430] DCO calibration, elaborated
Kris -
Thanks for the response. I'll leave my message here in the open
forum, possibly get a few more ideas.
Our original prototype consists of a TI DSP, the MSP430F149 (I'll
call it the MCU from here on), a proprietary FPGA, and an RF unit.
The MCU was originally used for bookkeeping functions, managing some
timekeeping, a slow UART to a debug message logger, a little A/D, a
little power management of the system, and that's it. It wasn't even
breaking a sweat. The DSP was using a UART on the FPGA to
communicate with an external device at 115,200 - one which we are
clients of, and cannot control the baud rate.
When we did the first round of cost reduction, it was decided that if
we could implement the RS232 protocol on the MCU (not a small task,
as the original code we had used huge buffers) to speak to the
external device, then we would keep the F149, and shrink the FPGA.
If we could not, we would keep the FPGA, and go to a different
MSP430. The better choice was to use the F149 and the smaller FPGA,
on both power and cost grounds.
We did manage to implement the protocol on the MCU, so we made the
changes as stated (small FPGA, MSP430F149), and fed the BRGEN with
the RF's clock divided by 4.
Now, the RF is NOT active all the time - it may or may not be active
when we go to speak to the external device using RS232. The RF power
and clock is under MCU control, so we run the clock, leaving the
power to the RF alone, wait for the MCU to see that the clock is
stable, do a protocol session, and return the state to the previous
state (clock running or not as the case may be).
Removing the dividers and buffers is a small gain in parts and
power. Beyond that, we have a small savings to the connections to
the RF if we don't need to gate the clock to the MCU. Although we
are already in our spec for power, we are always trying to squeeze
out a few more uA if we can. We have tried using the DCO at 1.3MHz
and using the 4.098MHz as an on-demand SMCLK feeding the BRGEN, and
using the 4.098 as the MCLK, leaving the RF clock on all the time,
and they both work fine, the first solution being slightly better
over time for current drain. If we can eliminate the 4.098
altogether, and get a stable BRGEN from the DCO when we need it, then
the RF clock will be there only when we need the RF, and the DCO will
be the MCLK and the BRGEN.
Let me look at your conclusions and comment:
1. If the dutycycle of events that call for 115,200 is low,
you'll save massive amounts of current.
Your XT2 is down to a few uAH with high accuracy.
Correct, and it is either very low, or with small peaks of
activity.
2. If you are not able to anticipate the need for 115.2 kBps in
advance, then it's pointless anyway
because if you're eg. on RS232, your charge pumps won't be
ready anyway ...........
I am - I always begin and end protocol sessions. I power up
the RS232, communicate, get a response (or several responses), and
then end the session and pass the results to the DSP and FPGA.
3. In case of (2), just use an "integer divisible" XT2 freq for
your baudrate.
The penalty will be 50-100 uA extra for the clock itself.
I don't think your divider/4 will do that current !!!!!!!
Even if you went to Coolrunner CPLD - your quiescent's still
spec'd to max. 100 uA...........
The 4.098 is already there, we would not have room for
another crystal, nor a good reason to add it.
4. The presence of a 100% dutycycle 16.392 MHz indicates
something's sucking current anyway,
so what's the point ?
Not 100% dutycycle, as I stated above.
Now, it would not be the end of the world or the project if this was
impossible, but we're looking to save a few cents and a few uA's in
the next cycle of development.
BTW, one of the things we looked into was dividing by 2 rather than
4, saving a divider and buffer, but the MSP spec indicated a maximum
MCLK of 8.000 MHz, and not 8.192MHz. When we asked TI, they said
it would probably work, but wouldn't commit, so we played it safe.
As the hardware is designed now, either the DSP or MCU can hold FPGA
code and load the FPGA. A faster MCU gives a faster FPGA load time
(assuming the FPGA code is on the MCU flash, which is something we
are playing with now).
That's about it. In any case, for future projects I would like to
see the IAR example mentioned, if anyone knows where it can be
found. I didn't see it on the IAR site or the TI site, in any of the
code examples.
Steve
Reply by Kris De Vos●September 29, 2002
Hi Steve,
It's always hard to explain every detail about the design criteria, and
what's beneficial for it, and what's not, so I could very well be
missing the point.
I don't really understand your concept at all :
1. I deem a clock suitable for baud rate generation if it is within +/- 2%
accuracy.
2. The only advantage I perceive in this scenario would be that you need an
accurate
Baud rate clock "on demand", and cannot afford the startup time
of a crystal due to its
very high Q.
In that case the DCO calibration defeats this purpose anyway - because :
a) It will take time to get the DCO to within +/- 2 %.
You could do that relatively quickly by using more sophisticated
algorithms, but then you'll need
higher CPU throughput (hence current), so you're back where you
started.
b) The DCO might use little current (a few uA), but assuming you are
stable on the DCO, you're still clocking the USART with all its
flip flops. That's where a lot of your current will go regardless.
c) Forget your "BRGEN" concept at 115,200 bps.
It generates fractional division by switching between 2 div values,
like the way you would
interpolate or decimate.
That's all good at 2.4 kBps - but I expect high error rates at
115,200 bps. (never tried it, and never will)
I don't see the point ?
3. Why try and squeeze out 50-60 uA, when you're at 115.2 kBps ?
The work devoted to wrap this up (which you could, but it's a tough one)
cannot possibly justify a saving of 50 uA, when other parts of the system
inherently will use a lot more current ?
4. Can't you just hang a crystal on XT2 ?
MSP430 comes to the rescue, as your shift clock can be up to 1/3 SMCLK,
instead of the usual divide-by-16.
If current is important in order of uA, ramp your DCO up and down in
speed according to events.
If you anticipate that INTs will be needed, set your DCO at highest speed
so ISRs are "in and out"
quick. This scenario can easily be addressed with good system design.
You can choose to run XT2 continuous, or set it up so you know when
you'll need it, and turn on
XT2 a few mS before you need serial 115.2 kBps.
For example, as an idea - At 3.3 Volts a 6.144 MHz XT2 will cost you
about 100 uA.
Conclusion :
1. If the dutycycle of events that call for 115,200 is low, you'll save
massive amounts of current.
Your XT2 is down to a few uAH with high accuracy.
2. If you are not able to anticipate the need for 115.2 kBps in advance, then
it's pointless anyway
because if you're eg. on RS232, your charge pumps won't be
ready anyway ...........
3. In case of (2), just use an "integer divisible" XT2 freq for your
baudrate.
The penalty will be 50-100 uA extra for the clock itself.
I don't think your divider/4 will do that current !!!!!!!
Even if you went to Coolrunner CPLD - your quiescent's still
spec'd to max. 100 uA...........
4. The presence of a 100% dutycycle 16.392 MHz indicates something's
sucking current anyway,
so what's the point ?
Maybe elaborate a bit more, if you want off-line.
I just don't get it.
Seems like you're in one of those sticky situations where "tacking
on" things debilitates system performance.
(I assume you're after low power, as you mentioned it as an issue)
I've commented on these kind of things before.
Do a very thorough system design, don't rush in - happens all the time
........
Then you end up with people claiming the MSP430 wasn't "up to
it", while
in actual fact they probably only deployed 10% of MSP430's power and
brilliance to
create incredibly low current systems with really high performance.
This comment is only valid when every uA is at stake.
You can do these things, believe me.
Maybe have a fresh perspective ? Look at global system level again - you might
be astounded how much more you can actually do with the MSP430 than you
thought !
When you're at that stage, you'll find it awesome (I did)
Happy MSP430 - ing !!!!!!!!!
Kris
----- Original Message -----
From: SBurck
To: msp430@msp4...
Sent: Sunday, September 29, 2002 9:40 PM
Subject: [msp430] DCO calibration
I'm on a project where we have an MSP430F149 - and a need to use a
UART at 115,200 bps. To enable this, we took a 16.392MHz signal on
the board, divided it by 4, and gave it to the MCU as its MCLK or
SMCLK (depending on the version), and as its UART 0 clock, to get the
required baud rate. We do this, and it works fine. It is the ONLY
reason we need the 4MHz signal - we have a 32KHz crystal as ACLK for
exact timing, and need a faster MCLK and UART clock - the DCO would
be fine if we could keep it calibrated...
I just saw CP's post regarding an FLL example for calibrating the DCO
enough to use it as a baud rate generator. Eliminating the dividers
and the power used to buffer would slightly help our part count, and
really help our power usage, if I could just use the DCO whenever I
needed to use the port.
I could not find the example mentioned, either at IAR nor at TI. If
anyone can give a clear pointer to its location, I would be very
grateful.
Thanks,
Steve Burck
Outsourcerers, Ltd.