EmbeddedRelated.com
Forums

UART without Start- and Stopbits?

Started by erikmagna October 9, 2002
Hello All,

Is it possible to use the UART without start and stop bits? That is, can I 
shift out bits continuously at 115.2 kbps (asynchronously)?

hope you can help me.

Erik



--- In msp430@y..., "erikmagna" <erikmagna@h...> wrote:
> Hello All,
> 
> Is it possible to use the UART without start and stop bits? That is, can I 
> shift out bits continuously at 115.2 kbps (asynchronously)?
> 

No, not unless you create your own hardware protocol. The UART cannot work
correctly without start and stop bits; they are needed to synchronize the
clocks on both sides. That is, the receiving side always synchronizes on the
start bit.

If you want a continuous data stream, you might want to look into SPI. There
you need a clock line alongside your data lines. SPI allows several Mbps,
depending on the hardware used. It is a master-slave system, but it supports
multi-master configurations by adding a fourth line used for arbitration
(first come, first served).

Right now I am trying to implement SPI on the MSP430 myself, but I am having
quite some difficulty with it (my slave does not answer at all). So I would be
glad to share experiences if you have a go at SPI as well.

-Jean-

--------------------------------
http://randhahn.de



Erik

The hardware UART is dedicated to having a start and a stop bit. This is very
convenient for keeping each byte, and each bit within the byte, synchronized.
However, it is horribly inefficient: for every 8 bits of data sent you
transmit at least 10 bits total, or 11 if you use two stop bits. I don't think
you would be able to use the hardware UART, but you can create your own
protocol and "bit bang" an I/O pin in software. You still have to keep
everything synced, but if you're smart about it you can cut the overhead
significantly. An example protocol that TI came up with is in their app note
for the TRF6900 900 MHz RF transceiver. It has a start condition and then
sends a packet of data without sync bits around each byte.
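The arithmetic behind that overhead is easy to check. A quick sketch (plain
Python, purely for illustration; the function name is mine):

```python
# Effective throughput of a standard UART frame at 115200 bps.
# Each 8-bit byte costs 1 start + 8 data + 1 stop = 10 bits on the
# wire (11 with two stop bits), so 20% of the line rate is framing.

def effective_rate(line_rate_bps, data_bits=8, start_bits=1, stop_bits=1):
    """Payload bits per second left after start/stop framing."""
    frame_bits = start_bits + data_bits + stop_bits
    return line_rate_bps * data_bits / frame_bits

print(effective_rate(115200))               # 8N1 -> 92160.0 bps of payload
print(effective_rate(115200, stop_bits=2))  # 8N2 -> about 83782 bps
```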

Hope this helps

Steve



In message 266, Kris de Vos talks about 1 to 1.5 Mbps with UART0 for an RF
link. Start and stop bits are inefficient for an RF link, so he is probably
not using them. So I wonder how he is doing this?

Erik



You could do away with the stop bit, but you will always need your start bit.
If you deviate from that, you've got the "SPI"-ish setup Jean suggested, but
then you have a synchronous setup.

To implement 8 "raw" bits asynchronously, you need some other way to recover
the transmitted symbol clock. I've used various schemes like that, at up to
1 Mbps with an MSP430, but it always needs a CPLD or FPGA, and it becomes
very complex.

You're better off sticking with the 20% loss in bandwidth from the start and
stop bits of a UART. If there is another reason why you want it that way, let
me know and I'll see if I can help.


> but you can create your own protocol and "bit bang" an I/O pin with
> software. You still have to deal with keeping everything synced, but
> if you're smart about it you can drop the overhead significantly. An
> example protocol that TI came up with is in their app note for their
> TRF6900 900 MHz RF transceiver. It has a start condition and then
> sends a packet of data without sync bits around each byte.

A VERY bad idea. Have a good think about why.

Secondly, with a setup like that you need to be able to "lock" at packet
level, so you waste:

- preamble: anywhere from 20 to 100 bits
- sync word for the FCS
- Frame Check Sequence of the header
- then you're "hopefully" in sync
- even so, you can't hold that for very long (a few hundred bit frames)
  before you need to re-sync again

Believe me, your 20% loss is much more acceptable. You can get it down to 0%
loss, apart from 3 to 4 bits for symbol clock recovery sync, but at the
expense of a fair bit of VHDL in external logic.
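To put rough numbers on that trade-off, here is a sketch (Python, with the
preamble/sync/FCS sizes taken as assumed example values from the ranges
above, not measured ones) comparing per-byte UART framing against
packet-level framing:

```python
# Fraction of transmitted bits wasted on framing: per-byte UART
# framing (1 start + 1 stop bit per byte) vs. packet-level framing
# (preamble + sync word + header FCS paid once per packet). The
# packet costs here are illustrative values only.

def uart_overhead(payload_bytes, frame_bits=10, data_bits=8):
    return 1 - payload_bytes * data_bits / (payload_bytes * frame_bits)

def packet_overhead(payload_bytes, preamble_bits=60, sync_bits=16, fcs_bits=16):
    total_bits = payload_bytes * 8 + preamble_bits + sync_bits + fcs_bits
    return 1 - payload_bytes * 8 / total_bits

for n in (8, 32, 256):
    print(n, round(uart_overhead(n), 3), round(packet_overhead(n), 3))
```

Packet framing only wins once packets get long, which is exactly where the
re-sync problem described above starts to bite.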





Chris,

I don't doubt that there are other ways to accomplish this. In fact there are
many that would be more elegant; a hardware circuit designed with an HDL would
definitely fall into that category. You may disagree, but "bit banging" a port
with your own protocol is a simple solution. Even Texas Instruments thinks so.
Admittedly, all of the factors have to be looked at; what works for one
application may not be the right choice for another. Here is the link to the
TI app note I was referring to, SLAA121:

http://focus.ti.com/docs/apps/catalog/resources/appnoteabstract.jhtml?abstractName=slaa121

They start with a start sequence and then send 32 bytes with no start or stop
bits. When you think about it, all the start and stop bits do is sync
everything. In this case they use a more elaborate start sequence and then
send a packet of 256 bits in a row. Normal RS-232 is a start/stop-bit sync
sequence around an 8-bit packet; it will correct itself more quickly if it
gets out of sync, but the idea is the same.

Steve 



Don't get me wrong, Steve.

I know exactly what you mean, and for many apps that will be "acceptable". I
was referring to the topic in its actual context, i.e. saving bandwidth. As
for bit-banging on I/O with such a long data frame, and TI thinking so: well,
not really. I contracted for them 2 1/2 years ago (a TRF6900 + F135 + CODEC
hardware design), and a similar setup for wireless (voice) TDD gave them
enormous problems with maintaining sync, let alone when it was lost and they
then had to re-sync. That's even without FHSS (frequency hopping) put in.
You place quite high demands on your clock accuracy and stability; noise,
interference, etc. throw the whole thing out too easily. Also, 256 bits is a
long time without re-sync: with a standard 40 ppm crystal at 115.2 kbps you
will incur quite a bit of phase shift after 256 bits!

When you do the sums, it turns out in practice that the extra start bit per
byte is not such a bad trade-off, also considering you can use character-based
CPU service rather than bit-based, let alone if you have to service the bit
banging with a so-so interrupt setup.

Basically, to get back to the topic: the question was about a UART without
start and stop bits for async use. Without a start bit you cannot do async
transfers at the U(S)ART level in an MCU. You can only do this with the
synchronous part of the USART, but then you need the symbol clock carried on
a separate line (a clock line).

There are many ways to skin a cat, and the topic is way too vast to elaborate
here, but if you're curious about embedding a symbol clock and then recovering
it on RX, I can offer some suggestions. The referenced app note will also
confuse people a bit (I haven't seen it yet), because there are other issues
to deal with, such as preamble time (how many 1-0 training bits) versus the
sample/hold time of the receiver's data slicer - the time constant of the
integrator that sets the optimal slicing level on the TRF6900. The preamble in
that app note is also there to allow an internal AFC loop to tune the on-board
variable inductance of the quadrature demodulator for optimal performance.

What I merely meant was that the app note offers a basic suggestion, but
specifically for RF applications. That's all. It will just offer more
confusion than anything else. There are so many companies out there who
thought they'd done 1-2-3 "magic" RF solutions, and there is no such thing,
as they invariably found out the hard way. Sometimes it's a bit annoying.
Imagine your bread and butter is 100% writing "C", Steve, and you constantly
have customers saying to you, "Hey, I don't need you anymore - I bought one
of those C in 21 days books." Needless to say, they all come crying back
having lost $$, and you still help them out, because it's your job. Well,
sometimes it's like that for me.

A bit frustrating; I think you can understand, Steve. I remember there was a
post here from someone called Chris who did RF stuff too; he can probably
relate to what I mean.

All the best Steve,
Kris


Steve,

As Kris points out, there are ways to embed the clock info into the data
stream.

Use a biphase coding scheme (one of the most common types is called Manchester
coding), where the clock information is embedded as "extra" transitions in the
data stream. This allows the receiver to maintain synchronization using only
the data stream; no separate clock line is required. As Kris points out, at
the very beginning of the data stream a bit-sync pattern is required
(typically a string of alternating 1s and 0s), which enables the receiver to
establish the basic clock rate. Sometimes a "frame sync" bit pattern is also
necessary, depending on whether the exact number of bit-sync preamble bits is
reliably monitored by the receiver. After the sync pattern, the receiver
continuously uses the data stream's bit transitions to keep its local clock
at the exact value of the transmitter's clock.

"Differential Manchester" has some advantages over basic "Manchester" biphase
(since the data is encoded differentially), but in this application it
probably doesn't matter.

The assembly language code on the receiver side is non-trivial (I did it long
ago on a Motorola 68xx uP, but at a much slower bit rate!). Essentially, when
the data stream first starts, the receiver needs to measure the bit period
using the sync pattern and establish an adjustable bit clock (i.e. set up
Timer A to count up to the calculated count). At each bit clock, sample the
incoming data, then measure the time from the bit clock to the next
transition. If it is off by a non-trivial amount, adjust the up-count value
of Timer A for the next bit period. Note that you need to do some filtering
so the clock adjustments are damped; you don't want to adjust anything too
dramatically or you'll lose sync.

The code is emulating a digital phase-locked loop, which extracts the clock
information.
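The tracking loop described above can be sketched as a toy simulation (in
Python rather than Timer A assembly; the gain value and the idealized edge
model are my own illustrative choices):

```python
# Toy digital PLL for bit-clock recovery: predict where the next edge
# should land, measure the error against the edge actually observed,
# and nudge the period estimate by a damped fraction of that error so
# a single noisy edge can't yank the clock off frequency.

def track_bit_clock(edge_times, nominal_period, gain=0.1):
    """Return the bit-period estimate after tracking a stream of edges."""
    period = nominal_period
    t = edge_times[0]
    for edge in edge_times[1:]:
        predicted = t + period
        error = edge - predicted        # timing error of this edge
        period += gain * error          # damped (filtered) adjustment
        t = edge                        # re-anchor on the observed edge
    return period

# Transmitter clock runs 2% fast (true period 0.98 vs. nominal 1.0):
edges = [i * 0.98 for i in range(200)]
print(round(track_bit_clock(edges, nominal_period=1.0), 4))  # -> 0.98
```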

Again, this code requires some design and effort. Also, I don't know the
maximum bit rate the MSP430 can handle; my humble opinion is that the MSP430
doesn't have a chance of doing a 115,000+ bps Manchester stream. Note that
Manchester coding increases the average "transition frequency" of the data by
1.5x (on a random stream), and that is not a constant frequency; at times it
runs at 2x the bit rate, meaning that with a bit rate of 9,600 bps the MSP430
has to keep up with a 19,200 bps rate at times.

I hope this helps. If you really need a one-wire synchronous serial interface
without precise crystal oscillators, do some research and testing on a
Manchester coding scheme. You may need to do the clock recovery in an external
PLL, or, as Kris suggested, an external HDLC circuit.

This is a non-trivial design ... experience helps.

Chris Roark
Blue Water Design

Hi Chris,

Thanks for the supporting comments. Although the TRF690X family uses
outside-loop modulation, and thus can tolerate NRZ (i.e. long strings of ones
or zeroes), most single-chip transceivers can't.

The example I referred to used an ML2722 at a 1.536 Mchips/sec rate. Since I
only used BFSK (2-FSK), the raw data rate was 1.536 Mbps.

The final version used standard biphase, i.e. Manchester coding. The encoding
is easy enough in hardware, and the decoding worked extremely well (and could
be done in software as well) by applying the following algorithm:

1.  The only real problem you face with Manchester's half-bit cells is locking
    onto "false" transitions; the rest is actually a breeze. I guess most on
    the list are aware that, e.g., a 1-0 transition is a "1" and a 0-1
    transition is a "0". The tricky part, of course, is when you TX, say,
    1-1-1.

    The half-bit sequence would be: 1 - 0 - 1 - 0 - 1 - 0.

    Where are the transitions? If you lock onto the 1-0, 1-0, 1-0 edges, you
    will decode a proper 1-1-1.

    But what if you lock on one symbol late? (1 bit here in BPSK; 2 bits in
    QPSK, etc.) You will see:

    0-1 - 0-1 - 0-...

    So you would decode: 0 - 0 - ... an eventual violation!


2.  I did this as follows:

    - In hardware, sample with, say, an 8x symbol clock.
    - When you see a transition (easy), output a 1 or a 0 according to the
      edge polarity.
    - IGNORE further transitions for 6 clocks.
    - Start sampling again.

All you need now is the deliberate transmission of a sequence that creates a
Manchester violation, and voila: you're sync'd within 3 bits, absolute
maximum.
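That decode rule can be sketched like this (Python, for clarity only; the
4-samples-per-half-bit figure and the helper names are assumptions for the
demo, not from the original hardware):

```python
# Oversample the line at 8x the symbol clock. On every transition,
# emit a bit from the edge polarity (1-0 edge -> 1, 0-1 edge -> 0),
# then ignore edges for 6 sample clocks so the "false" transition
# between identical Manchester symbols is never mistaken for data.

def decode(samples):
    """samples: line level at 8x oversampling; returns decoded bits."""
    bits, blank = [], 0
    for prev, cur in zip(samples, samples[1:]):
        if blank:
            blank -= 1
        elif cur != prev:
            bits.append(1 if prev == 1 else 0)
            blank = 6        # blanking window: skip the inter-symbol edge
    return bits

# Data 1,0,1 as Manchester half-bits (1 -> 1,0 and 0 -> 0,1),
# each half bit held for 4 samples:
line = [s for sym in (1, 0, 0, 1, 1, 0) for s in [sym] * 4]
print(decode(line))  # [1, 0, 1]
```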

This works very, very well: the transitions can now "jitter" up to +/- 2
clocks about their "center" position, so the BER improves drastically. The
downside of biphase schemes like this is that typically 50% of the bandwidth
is wasted, although the signal-to-noise ratio on receive improves a LOT.
(BTW, media such as Ethernet use biphase because of its so-called "DC-free"
property, i.e. you can send it through a transformer, as Ethernet does.)

You can still apply the same scheme with 3/4-rate lookup tables, even in
software. Think of it as "bit stuffing": for every 3 bits, you add an extra
bit.

I came up with HDL code that implemented 100% throughput and sync within 3
bits at 1.536 Mbps, but that's proprietary. The basic units naturally used
768 kbps (1/2 of 1.536 Mbps).

Now, back to the symbol clock issue. When receiving, the edges in this case
generate a "symbol clock". All you do is trigger something when you see an
edge; after 3 bits the hardware decoder is outputting very reliable baseband
data, and it also outputs the "clock" for the signal.

Manchester was one example. It is far from the most efficient way, but it is
very robust and ideal for high-noise environments.


Anyway, with TRF690X apps you can directly transmit NRZ; that's the beauty of
the TI RF chips. (NRZ transmission is only possible after proper post-demod
training, etc.) I thought the app note would just invite a lot of confusion,
as it offers a very rough approach to setting up an RF link, which is not to
be underestimated, especially for a new user of an MCU with no RF knowledge -
way too dangerous.

If time allows and anyone is interested in these topics further, I'd be glad
to discuss and explain some issues. I do that off-line occasionally with some
friends I make here, but given all the commercial splashes here lately, maybe
I should also "throw out" some long-term fishing lines too; you never know.

Microbit Systems specialises in ultra-low-power, yet very high-performance,
high-reliability ISM RF systems, where lateral thinking and innovation
provide products superior in cost, performance, and size to many existing
solutions.


Kris De Vos
Director
Microbit Systems P/L

- Member of Bluetooth SIG
- Texas Instruments RF / MSP430 endorsed consultant
- Atmel Inc. endorsed AVR consultant
- Microchip Inc. endorsed consultant
- RF ACDC (Approved Chipcon Design Center)

Boronia VIC 3155
Australia
Tel : +613 9762 0100
Fax : +613 9762 2077
Cellular : +61 425 760 997
Email : microbit@micr...















----- Original Message ----- 
  From: Chris Roark 
  To: msp430@msp4... 
  Sent: Friday, October 11, 2002 3:07 PM
  Subject: RE: [msp430] Re: UART without Start- and Stopbits?


  Steve,

  As Kris points out, there are ways to embed the clock info into the
  data stream....

  Use a biphase coding scheme, (one of the most common types is called 
  Manchester coding) where the clock information is embedded as
"extra"
  transitions
  in the data stream.  This allows the receiver can maintain
  synchronization using 
  only the data stream, no other clock line required. As Kris points out,
  at the 
  very beginning of the data stream, a bit sync pattern is required
  (typically
  a string of alternating "1"s & "0"s) which enables the
receiver to
  establish 
  the basic clock rate.  Sometimes a "Frame Sync" bit pattern is also
  necessary
  depending on whether the exact number of bit sync preamble bits is
  reliably
  monitored by the receiver.  After the sync pattern, the receiver
  continuously 
  uses the data stream bit transitions to maintain its local clock to the
  exact 
  value of the transmitter clock.

  "Differential Manchester" has some advantages over basic
"Manchester"
  biphase (since the data is encoded differentially), but in this
  application
  it probably doesn't matter.

  The assemble language code on the receiver side is non-trivial, (I did
  it long 
  ago on a Mot 68xx uP, but for a much slower bit rate!).  Essentially,
  when the 
  data stream first starts, the receiver needs to measure the bit period
  using the 
  frame sync pattern and establish an adjustable bit clock (i.e. setup
  Timer A to 
  count up to the calculated count).  At each bit clock, sample the
  incoming data,
  then measure the time until the next transition on the bit clock.  If it
  is off
  by a non-trivial amount, adjust the up count value for Timer A for the
  next bit 
  period.  Note that you need to do some filtering so the clock adjustment
  are damped
  some... you don't want to adjust anything too dramatically or else
  you'll
  lose sync.

  The code is emulating a digital phase lock loop, which extracts the
  clock
  information.

  Again, this code requires some design and effort.  Also, I don't know
  what the max bit 
  rate the MSP430 can do - my humble opinion is that the MSP430 doesn't
  have a chance of
  doing 115,000+ bps Manchester stream; note a Manchester increases the
  average "transition
  frequency" of the data by 1.5 (on a random stream), but that is not a 
  constant frequency, at times it is going at 2x the bit rate: meaning
  with
  a bit rate of 9,600 bps, the MSP430 has to do things at 19,200 bps rate
  at times.

  I hope this helps, if you really need a one wire synchronous serial
  interface, 
  without having precise crystal oscillators, do some research and testing
  on 
  using a Manchester coding scheme.  You may need to do the clock recovery
  in an external
  PLL... or as was suggested by Kris, an external HDLC circuit.

  This is a non-trivial design ... experience helps.

  Chris Roark
  Blue Water Design

  -----Original Message-----
  From: Kris De Vos [mailto:microbit@micr...] 
  Sent: Thursday, October 10, 2002 5:11 PM
  To: msp430@msp4...
  Subject: Re: [msp430] Re: UART without Start- and Stopbits?


  Don't get me wrong Steve,

  I know exactly what you mean, and for many apps that will be
  "acceptable".
  I was referring to the topic in its actual context, ie. saving
  bandwidth.
  About bit-banging on I/O with such a long dataframe, and TI thinking so,
  well, not really.
  I contracted for them 2 1/2 years ago (a TRF6900 + F135 + CODEC HW
  design) , and a similar setup for 
  wireless (voice) TDD gave them enormeous problems with maintaining SYNC,
  let alone when it was lost - 
  and then having to resync....
  That's even without FHSS put in. (frequency hopping)
  You need quite high demands of your clock accuracy and stability. Noise,
  interference etc. throws the whole thing 
  out too easily.
  Also, 256 bits is a long time without re-sync :
  If you use a standard 40 ppm crystal - at 115.2 kBps you will incur
  quite a bit of phase shift after 256 bits !

  When you do the sums, it turns out in practice that the extra start bit
  / byte is not such a bad
  trade off, also considering you can use character based CPU service,
  rather than bit based.
  Let alone if you have to service the bit banging with a so-so INT setup.

  Basically, to get back to the topic: the question was about a UART
  without start and stop bits for async use.
  Of course, without a start bit you cannot do async transfers at
  U(S)ART level in an MCU.
  You can only do this with the synchronous part of the USART, but then
  the symbol clock information must be carried on a separate line (clock
  line).
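  To make that concrete, here is what an async 8N1 frame actually gives
  the receiver (a sketch; LSB first, idle-high line).  The 1-to-0 edge
  of the start bit is the only timing reference in the frame, which is
  why it cannot simply be dropped:

```python
def frame_8n1(byte):
    """Build one async 8N1 frame: start bit (0), 8 data bits LSB
    first, stop bit (1).  The line idles high, so the 1->0 start
    edge is the receiver's only timing reference."""
    data = [(byte >> i) & 1 for i in range(8)]
    return [0] + data + [1]
```

  Ten line bits carry eight data bits - the 20% overhead this thread
  keeps coming back to.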

  There are many ways to skin a cat, and the topic is way too vast to
  elaborate here, but if you're curious about embedding a symbol clock
  and then recovering it on RX, I can offer some suggestions.
  The referenced app note will also confuse one a bit (I haven't seen it
  yet), because there are other issues to deal with - such as preamble
  time (how many 1-0 training bits) versus the sample/hold time of the
  receiver's data slicer, i.e. the time constant of the integrator that
  sets the optimal slicing level on the TRF6900.  The preamble in that
  app note is also there to allow an internal AFC loop to tune the
  on-board variable inductance of the quadrature demodulator for optimal
  performance.

  What I merely meant was that the app note offers a basic suggestion, but
  specifically for RF applications.
  That's all.
  It will just offer more confusion than anything else.
  There are so many companies out there who've thought they'd done
  1-2-3 "magic" RF solutions, and there is no such thing, as they
  invariably found out the hard way.
  Sometimes it's a bit annoying.  Imagine your bread and butter is 100%
  writing "C", Steve, and you constantly have customers saying to you,
  "Hey, I don't need you anymore - I bought one of those C in 21 days
  books".
  Needless to say they all come crying back having lost $$, and you
  still help them out, because it's your job.
  Well, sometimes it's like that for me.

  A bit frustrating, I think you can understand Steve.
  I remember there was a post here from someone called Chris - who did
  RF stuff too and can probably relate to what I mean...

  All the best Steve,
  Kris


    ----- Original Message ----- 
    From: Steve 
    To: msp430@msp4... 
    Sent: Friday, October 11, 2002 6:57 AM
    Subject: [msp430] Re: UART without Start- and Stopbits?


    Chris,

    I don't doubt that there are other ways to accomplish this.  In fact 
    there are many that would be more elegant.  A hardware circuit 
    designed with an HDL would definitely fall into that category.  You 
    may disagree, but "bit banging" a port with your own protocol is a 
    simple solution.  Even Texas Instruments thinks so.  Admittedly all 
    of the factors have to be looked at.  What would work for one 
    application may not be the right choice for another.  Here is the 
    link to the TI app note (SLAA121) that I was referring to: 

    http://focus.ti.com/docs/apps/catalog/resources/appnoteabstract.jhtml?
    abstractName=slaa121

    They start with a start sequence and then send 32 bytes with no 
    start or stop bits.  When you think about it, all the start and stop 
    bits do is keep everything in sync.  In this case they use a more 
    complicated start sequence and then send a packet of 256 bits in a 
    row.  Normal RS-232 wraps each 8-bit packet in its own start/stop 
    sync sequence, so it will correct itself quicker if it gets out of 
    sync, but the idea is the same. 
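    The scheme described above can be sketched as a frame builder.  The 
    preamble and sync values below are placeholders of mine, not the 
    ones in SLAA121 - the point is just the structure: one sync event, 
    then 32 raw bytes with no per-byte framing:

```python
PREAMBLE = [1, 0] * 12               # 24 training bits for clock recovery
SYNC = [0, 1, 1, 1, 1, 1, 1, 0]      # frame-alignment word
# (Both values are illustrative assumptions, not taken from SLAA121.)

def build_packet(payload):
    """One preamble + sync word, then 32 bytes of raw data bits
    sent LSB first with no start or stop bits."""
    assert len(payload) == 32, "this frame format carries 32 bytes"
    bits = PREAMBLE + SYNC
    for byte in payload:
        bits.extend((byte >> i) & 1 for i in range(8))
    return bits
```

    For 32 bytes this is 288 line bits versus 320 for 8N1: the header 
    costs 32 bits but saves 64 framing bits, a net 10% for this frame 
    size.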

    Steve 

    --- In msp430@y..., "Kris De Vos" <microbit@c...> wrote:
    > > but you can create your own protocol and "bit bang" an I/O pin with 
    > > software.  You still have to deal with keeping everything synced but 
    > > if you're smart about it you can drop the overhead significantly.  An 
    > > example protocol that TI came up with is in their app note about their 
    > > TRF6900 900 MHz RF transceiver.  It has a start condition and then 
    > > sends a packet of data without sync bits around each byte. 
    > 
    > VERY bad idea.
    > Have a good think why.
    > 
    > Secondly, with a setup like that, you need to be able to "lock" on 
    packet level, thus you waste:
    > 
    > - preamble : anywhere from 20 to 100 bits.
    > - sync for FCS
    > - Frame Check Sequence of Header
    > - Then you're "hopefully" in sync.
    > - Even if so, you can't do that very long (a few hundred bit 
    frames) and then you need to re-sync again.
    > 
    > Believe me, your 20% loss is much more acceptable - but you can get 
    it down to 0% loss, apart from 3 to 4 bits for symbol clock recovery 
    sync - but at the expense of a fair bit of VHDL in external logic.
    > 
    > 
    > 


          
               
                
         
         
