Reply by rtstofer December 14, 2012
The Implementation Guide talks about baud rates around 9600 and 19200, although it allows up to 115200. Again, we don't have much information.

As a practical matter, timings are further constrained by the capabilities of the controllers: they have to process the information between scans, and there just isn't much time allowed.

http://www.modbus.org/docs/Modbus_over_serial_line_V1.pdf

Richard

An Engineer's Guide to the LPC2100 Series

Reply by Mike McIntyre December 14, 2012
From the Modbus spec:
"Following the last transmitted character, a similar interval of at least 3.5 character times marks the end of the message."

That isn't really very practical at 115200 baud. There are a couple of ways around it though, other than switching to DF1 ;-)
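For a sense of scale: at 11 bits per character on the wire, 3.5 character times is about 4 ms at 9600 baud but only about 334 µs at 115200, which is why the serial-line spec fixes the interval at 1.75 ms for rates above 19200. A back-of-envelope check (helper name is mine):

```c
#include <stdint.h>

/* 3.5-character silent interval in microseconds, assuming 11 bits
 * per character (start + 8 data + parity + stop), as RTU specifies.
 * 35/10 avoids floating point: 3.5 chars = 35/10 chars. */
static uint32_t t35_us(uint32_t baud)
{
    return (35UL * 11UL * 1000000UL) / (10UL * baud);
}
```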

Maintain a 'running CRC' as the characters arrive. If the answer is 0, then there is a 65535/65536 chance that you've hit the end of the message.
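The running-CRC trick works because Modbus RTU appends its CRC-16 (polynomial 0xA001, initial value 0xFFFF) low byte first, so running the same algorithm over a complete frame, CRC bytes included, leaves a residue of zero. A minimal per-byte update (function name is mine):

```c
#include <stdint.h>

/* Update a Modbus RTU CRC-16 (poly 0xA001, init 0xFFFF) with one byte.
 * Feed every received byte, including the two trailing CRC bytes: when
 * the running value returns to 0x0000 you have very likely just seen
 * the last byte of a well-formed frame. */
static uint16_t crc16_update(uint16_t crc, uint8_t byte)
{
    crc ^= byte;
    for (int i = 0; i < 8; i++) {
        crc = (crc & 1) ? (crc >> 1) ^ 0xA001 : crc >> 1;
    }
    return crc;
}
```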

OR, if you feel like blurring the lines of the OSI model, you can calculate the length of a Modbus message from the first six received bytes. How you calculate the length will depend on the function code (second byte): sometimes the message length is fixed, other times there is an element count, other times there is a byte count.
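One possible shape of that calculation, sketched for slave *responses* only (requests need their own table, and only a handful of common function codes are covered here; the helper name is mine):

```c
#include <stdint.h>

/* Expected total RTU response length once the first three bytes
 * (address, function code, first data byte) have arrived.
 * Returns 0 if more bytes are needed or the code is not handled. */
static int expected_len(const uint8_t *buf, int have)
{
    if (have < 3) return 0;
    switch (buf[1]) {
    case 0x01: case 0x02:          /* read coils / discrete inputs   */
    case 0x03: case 0x04:          /* read holding / input registers */
        return 3 + buf[2] + 2;     /* header + byte count + CRC      */
    case 0x05: case 0x06:          /* write single coil / register   */
    case 0x0F: case 0x10:          /* write multiple (echoed header) */
        return 8;                  /* fixed-length response          */
    default:
        return 0;                  /* exceptions etc. not sketched   */
    }
}
```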

MM
Reply by rtstofer December 13, 2012
> If it's MODBUS, there is no other way. That's the way it is on a serial
> line.

But the OP has never described the source of these packets (or I missed it!). If it is MODBUS using RTU framing then the blank interval is guaranteed. If ASCII framing is used then it's a lot easier.

The fact that the packet can be of arbitrary length is a problem. Some fields, for some small controllers, are limited to 256 bytes, but I couldn't find where any maximum is stated. It would be a matter of calculating the size of the maximum transaction from the Maximum Parameters tables (starting on page 100).

http://modbus.org/docs/PI_MBUS_300.pdf

Clearly, 200 bytes is unlikely to be enough space.

Richard
Reply by Paul Curtis December 13, 2012
> Is it seriously the case that the only way you can detect an end of frame
> is with an inter-record gap? That's a horrible protocol!

That's MODBUS.

> What if the sender sends half the message and then gets busy on something
> else before resuming.

That would be non-compliant for MODBUS, and the gap would be interpreted as the end of a message. You would code the transmitter to prevent this happening.

> You will detect it as a complete frame and that is
> unlikely to be the case. Cooperative multitasking systems like Windows
> have this behavior.
>
> I suppose if you can guarantee a message less than 200 uninterrupted
> characters this code could work but I would try for a better way to frame
> the message.

If it's MODBUS, there is no other way. That's the way it is on a serial
line.

--
Paul Curtis, Rowley Associates Ltd http://www.rowley.co.uk
SolderCore Development Platform http://www.soldercore.com

Reply by rtstofer December 13, 2012
--- In l..., "skiddybird" wrote:
>
> Thank you, guys!
> According to the disscussion, I have changed the code as below.
>
> unsigned char rx_PC[200];
> unsigned char i_pad = 0;
> xSemaphoreHandle sFlagRX;
>
> vSemaphoreCreateBinary(sFlagRX); //this statement is contained in the function definition of prvSetupHardware()
>
> __arm void vSerialISR(void){
> switch(U0IIR & serINTERRUPT_SOURCE_MASK){
> case serSOURCE_RX:
> start_timer_for_serial_gap();
> rx_PC[i_pad++] = U0RBR;
> break;
> }
> VICVectAddr = serCLEAR_VIC_INTERRUPT;
> }

This code will jump up and bite you sooner or later! If the incoming frame has 201 characters, you will be off the end of the array and scrambling something else.
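The fix is a bounds check before the store. Isolating just that step so it can be exercised off-target (buffer size and names follow the posted code; the function wrapper is mine):

```c
#include <stdint.h>

#define RX_BUF_LEN 200

static uint8_t rx_PC[RX_BUF_LEN];
static unsigned i_pad = 0;

/* Guarded version of the byte-store step from the ISR above: returns 1
 * if the byte was stored, 0 if the buffer was already full. A full
 * buffer drops the byte instead of scribbling past the end of rx_PC;
 * a real driver would also flag the overrun to the handler task. */
static int store_rx_byte(uint8_t byte)
{
    if (i_pad >= RX_BUF_LEN) {
        return 0;
    }
    rx_PC[i_pad++] = byte;
    return 1;
}
```

In the real ISR the received register (U0RBR on the LPC2xxx) must still be read even when the byte is dropped, otherwise the interrupt is never cleared.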

Is it seriously the case that the only way you can detect an end of frame is with an inter-record gap? That's a horrible protocol!

What if the sender sends half the message and then gets busy on something else before resuming? You will detect it as a complete frame, and that is unlikely to be the case. Cooperative multitasking systems like Windows have this behavior.

I suppose if you can guarantee a message less than 200 uninterrupted characters this code could work but I would try for a better way to frame the message.

Richard
Reply by FreeRTOS Info December 13, 2012
On 13/12/2012 12:29, skiddybird wrote:
<snip>
> For binary semaphore, is it normal to create it before starting
> scheduler? If yes, after its creation, why have to give it immediately?
> For a simple example, imagine the role of one specific binary semaphore
> is for synchronization between a task and an ISR, then no routine except
> that ISR is legitimate to give this semaphore, otherwise the task could
> not block to wait for the occurrence of that interrupt, because the
> semaphore has been given right after its creation, and the effect of
> this binary semaphore is void.
> In other words, should the above definition for vSemaphoreCreateBinary()
> be modified as the following one?
> #define vSemaphoreCreateBinary(xSemaphore) (xSemaphore) = xQueueGenericCreate((unsigned portBASE_TYPE)1, semSEMAPHORE_QUEUE_ITEM_LENGTH, queueQUEUE_TYPE_BINARY_SEMAPHORE)
>
> Correct me please.
>

There is nothing to correct - both ways are valid for different scenarios, because the state in which you want a binary semaphore to exist before it is first used in anger is dependent on the application. You give one example, but another example would be where the semaphore is used more as a mutex (although mutex-type semaphores are provided too). In that case you want it to be available in its initial state.

There is nothing in the implementation that prevents you setting the
semaphore to whatever state you want. If your application needs the
semaphore to start in an 'empty' state then simply create it, then
immediately take it (you can use a block time of 0 as you know it is
available). If your application needs the semaphore to start in the
available state then just create it.
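The create-then-take pattern, spelled out in the V7-era API used elsewhere in this thread (the variable name is illustrative):

```c
xSemaphoreHandle xSem;

/* The macro leaves the semaphore in the 'available' state... */
vSemaphoreCreateBinary( xSem );

/* ...so drain it immediately if the application needs it empty.
 * A block time of 0 is fine here: we know it is available. */
if( xSem != NULL )
{
    xSemaphoreTake( xSem, ( portTickType ) 0 );
}
```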

Regards,
Richard.
Reply by skiddybird December 13, 2012
Thank you, people.
Yet my previous question remains unanswered. Allow me to repeat it.

The macro definition for vSemaphoreCreateBinary() is as below.
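(The code block did not survive the forum formatting. For reference, the V7-era macro reads roughly as follows, reproduced from memory of FreeRTOS sources of that period; check semphr.h in your own tree:)

```c
#define vSemaphoreCreateBinary( xSemaphore )                               \
    {                                                                      \
        ( xSemaphore ) = xQueueGenericCreate(                              \
            ( unsigned portBASE_TYPE ) 1,                                  \
            semSEMAPHORE_QUEUE_ITEM_LENGTH,                                \
            queueQUEUE_TYPE_BINARY_SEMAPHORE );                            \
        if( ( xSemaphore ) != NULL )                                       \
        {                                                                  \
            xSemaphoreGive( ( xSemaphore ) );                              \
        }                                                                  \
    }
```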



For a binary semaphore, is it normal to create it before starting the scheduler? If yes, why must it be given immediately after creation? For a simple example, imagine that the role of one specific binary semaphore is synchronization between a task and an ISR. Then no routine except that ISR is legitimately allowed to give the semaphore; otherwise the task could not block to wait for the occurrence of that interrupt, because the semaphore was already given right after its creation, and the effect of the binary semaphore is void.
In other words, should the above definition for vSemaphoreCreateBinary() be modified as the following one?
#define vSemaphoreCreateBinary(xSemaphore) (xSemaphore) = xQueueGenericCreate((unsigned portBASE_TYPE)1, semSEMAPHORE_QUEUE_ITEM_LENGTH, queueQUEUE_TYPE_BINARY_SEMAPHORE)

Correct me please.
Reply by FreeRTOS Info December 13, 2012
On 13/12/2012 06:01, stevec wrote:
>
> My UART driver for FreeRTOS uses this technique (LPC 2xxx ARM7)
>
> Use FIFO in UART. The interrupt rate is reduced from one per char to one
> per 10 chars (or whatever you wish, 16 byte FIFO).
> At each interrupt, copy all bytes in the FIFO to buffer or queue, then
> dismiss the interrupt. Fast loop.
>
> My issue with FreeRTOS for this is that there are no ring buffers,
> lists, or variable size message queues. So that's where the overhead
> lies, in the ISR to task interface.

Ring buffers are the normal way of doing this. If you don't have DMA
then you are going to have to copy the bytes out one at a time anyway,
unless the hardware somehow lets you memcpy from registers (?), so a
ring buffer is not going to help efficiency in that case.

The latest demos released at Electronica recently (admittedly not on NXP
parts) include a very efficient UART driver that sets up a DMA to
continuously receive data into a ring buffer. Practically no CPU
overhead at all.

How a ring buffer can be coded also depends on how it is filled - in the
same demo there are actually two implementations. One that is filled by
DMA, and another by interrupts (for a CDC device without DMA).

There is already a plan to extend the NXP FreeRTOS+IO demos to include a
DMA transfer mode. Interrupts filling queues are only acceptable for
very low throughput interfaces, such as a command console.

Regards,
Richard.

+ http://www.FreeRTOS.org
Designed for microcontrollers. More than 7000 downloads per month.

+ http://www.FreeRTOS.org/trace
15 interconnected trace views. An indispensable productivity tool.

Reply by stevec December 13, 2012
My UART driver for FreeRTOS uses this technique (LPC 2xxx ARM7)

Use FIFO in UART. The interrupt rate is reduced from one per char to one per 10 chars (or whatever you wish, 16 byte FIFO).
At each interrupt, copy all bytes in the FIFO to buffer or queue, then dismiss the interrupt. Fast loop.
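The copy-all-bytes step can be sketched like this, abstracted behind two callbacks so it can run off-target (on an LPC2xxx, data_ready() would poll the DR bit of U0LSR and read_byte() would read U0RBR; all names here are illustrative):

```c
#include <stdint.h>
#include <stddef.h>

/* Drain every byte currently in the UART FIFO into dst, stopping at
 * cap. Returns the number of bytes copied. This is the "fast loop"
 * body of the receive interrupt described above. */
static size_t drain_fifo(int (*data_ready)(void),
                         uint8_t (*read_byte)(void),
                         uint8_t *dst, size_t cap)
{
    size_t n = 0;
    while (n < cap && data_ready()) {
        dst[n++] = read_byte();
    }
    return n;
}

/* Stand-ins for the UART registers, for demonstration only. */
static const uint8_t fake_fifo[] = { 'M', 'o', 'd', 'b', 'u', 's' };
static size_t fake_pos = 0;
static int fake_ready(void)    { return fake_pos < sizeof fake_fifo; }
static uint8_t fake_read(void) { return fake_fifo[fake_pos++]; }
```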

My issue with FreeRTOS for this is that there are no ring buffers, lists, or variable size message queues. So that's where the overhead lies, in the ISR to task interface.

Nonetheless, my app uses the UART at 115200 baud with about 70% duty cycle on arriving data.
Reply by skiddybird December 11, 2012
Thank you, guys!
According to the discussion, I have changed the code as below.



The idea is simple. Whenever a character arrives via the serial port, the USART RX ISR places it in the array rx_PC and starts a timer for observing a pause on the incoming stream. As long as the stream is transferring, a pause can never appear and the timer count keeps being refreshed, so no overflow occurs. At the end of the transmission the timer will soon overflow, of course, and the timer ISR gives a semaphore to wake up the handler task to process the received stream.
In such a situation, if not a single character arrives, the handler task should block when attempting to take the semaphore, and could never reach the statement process_data_in_rx_PC().
But this is not the case. After program startup, the handler task reaches process_data_in_rx_PC() immediately, without waiting for the arrival of any USART received characters. The symptom remains the same even if the serial cable between the target board and the sending device is disconnected.
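The gap-timer hand-off being described can be sketched as below, using the names from the posted code (sFlagRX, rx_PC, i_pad, process_data_in_rx_PC) plus hypothetical timer helpers; FreeRTOS V7-era API:

```c
__arm void vGapTimerISR( void )
{
    signed portBASE_TYPE xHigherPriorityTaskWoken = pdFALSE;

    stop_and_clear_gap_timer();   /* hypothetical: stop timer, ack IRQ */
    xSemaphoreGiveFromISR( sFlagRX, &xHigherPriorityTaskWoken );
    VICVectAddr = serCLEAR_VIC_INTERRUPT;
    portEND_SWITCHING_ISR( xHigherPriorityTaskWoken );
}

static void vRxHandlerTask( void *pvParameters )
{
    ( void ) pvParameters;
    for( ;; )
    {
        /* Blocks here until the timer ISR signals end of frame -
         * provided the semaphore was emptied after creation. */
        if( xSemaphoreTake( sFlagRX, portMAX_DELAY ) == pdTRUE )
        {
            process_data_in_rx_PC();
            i_pad = 0;            /* ready for the next frame */
        }
    }
}
```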
After spending some effort on debugging, I got one finding.
The macro definition for vSemaphoreCreateBinary() is as below.



If it is changed to the below form, the aforementioned problem will get solved. The handler task blocks if no semaphore is given by the timer ISR, and unblocks if the timer ISR provides a semaphore.
#define vSemaphoreCreateBinary(xSemaphore) (xSemaphore) = xQueueGenericCreate((unsigned portBASE_TYPE)1, semSEMAPHORE_QUEUE_ITEM_LENGTH, queueQUEUE_TYPE_BINARY_SEMAPHORE);

So far, my question is: why give it on creation of a binary semaphore? I've always thought that, as a means of synchronization between a task and an ISR (or another task), only that ISR or task is legitimately allowed to give the binary semaphore. Have I held a wrong concept all this time?