SPI Slave question (LPC2106)

Started by Mark Butcher March 2, 2005

Hi All

I decided to try operating the LPC2106 in SPI slave mode for
communication with a 'weaker' CPU but have some strange results.

Here's the simple test: the other CPU sends 9 bytes to the LPC2106
at 100kb/s (using SSEL) [the sequence is 0x00, 0x01, 0x02, 0x04,
0x08, 0x10, 0x20, 0x40, 0x80] and the LPC2106 is running in a test
loop just waiting to react to these as follows:

    // set pins for SPI use (SCK, MISO, MOSI, SSEL)
    PINSEL0 &= ~0x0000ff00;
    PINSEL0 |= 0x00005500;

    SPCR = 0;                       // leave in slave mode
    ulTemp = SPSR;                  // enable write by reading status register
    SPDR = 0x55;                    // prime a first test transmission byte
    SPCCR = 8;                      // only needed when master but set something (?)

    while (1) {
        while (!(SPSR & SPIF)) {}   // wait until we have something in RX
        // we have received a byte (and sent our last Tx)
        ucTest[iCntTst] = SPDR;     // save received byte for checking
        SPDR = ucTest[iCntTst++];   // echo received byte back
    }

These are my observations (I tried different clock phase and
polarity settings and this is the only one which does anything
sensible so I assume they are correct):

1. The received bytes are always correct.
2. When the echo is removed, the master receives the values [0x55,
0x00, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40] - that is, the first
primed transmission byte followed by the received bytes with a
one-byte delay. This means that the received byte is automatically
shifted back out of the transmitter if the code doesn't perform a
write to the Tx buffer (which is also OK).
3. When the active echo is performed, the master (always) receives
the following bytes: [0x55, 0x00, 0x00, 0x03, 0x04, 0x08, 0x10,
0x20, 0x40]. The problem is that the first two bits in each byte
seem to have difficulties, while the upper six bits always seem to
be correct.

Since the 0x55 at the beginning is always correct, I can only presume
that it is some sort of timing problem, although the code corresponds
to the recommended sequence in the data sheet. I also checked whether
any other status bits were being set (slave abort or the like) but
couldn't find any (or rather, couldn't get test code to trigger on
them).

I did see that the SPI is quite basic and there are times when the
transmit buffer should not be written to but how can the slave know
when it shouldn't prepare a byte ready for transmission? If the
slave waits a while before writing the next byte it misses the next
SPI time slot (i.e. the master reads before the slave has put
anything in the buffer).

Conclusion - I'm a little confused. Can anyone explain this behaviour?
In addition, I am trying to work out a handshake sequence so that
both sides can send and receive as fast as possible without losing
data. One idea I have is for the slave to always actively send back
the inverse of a received byte, so that the master can see whether
the last byte has just caused an overrun (slave was not ready to
receive) or not - this uses the fact that if the slave doesn't read,
the master receives an 'echo' of the last transmission, as identified
in observation 2 above.

I am wondering whether SPI without DMA is a basis for such a task...

Any comments?


Mark Butcher
