Forums

Can an RTOS guarantee that interrupt latency will never exceed a predefined maximum?

Started by NewToFPGA January 19, 2008
Can an RTOS guarantee that interrupt latency will never exceed a
predefined maximum? if so, where do we define this value in the
programming?

thanks.
On 2008-01-20, NewToFPGA <hetzme@yahoo.com> wrote:

> Can an RTOS guarantee that interrupt latency will never exceed
> a predefined maximum?
No. Not unless the user isn't allowed to disable interrupts or write interrupt service routines. Most RTOSes will quote the maximum latency that the RTOS itself causes due to its ISRs and critical sections where interrupts are disabled, but the RTOS can't prevent the user from adding latency.
> if so, where do we define this value in the programming?
You don't define that value in the programming. You find it by measuring the execution time of ISRs and critical sections.

--
Grant
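Grant's point can be put as a rough budget calculation. The sketch below is a host-side illustration, not any RTOS's API, and every number in it is a made-up stand-in for something you would have to measure on real hardware: the worst case is the CPU's interrupt recognition time, plus the single longest interrupts-disabled window anywhere in the system (kernel *or* user code), plus ISR entry overhead.

```python
# Sketch of a worst-case interrupt latency budget. All numbers are
# hypothetical; on real hardware you would obtain them with a logic
# analyzer or a cycle counter, not guess them.

def worst_case_latency_us(hw_latency_us, disable_windows_us, isr_entry_us):
    """Upper bound on interrupt latency, in microseconds.

    hw_latency_us      -- interrupt recognition time of the CPU core
    disable_windows_us -- measured lengths of every interrupts-disabled
                          region (kernel critical sections *and* user code)
    isr_entry_us       -- context-save / vector dispatch overhead
    """
    # The interrupt can arrive just as the longest disabled window
    # begins, so only the single longest window enters the bound.
    return hw_latency_us + max(disable_windows_us) + isr_entry_us

# Example: the kernel quotes a 4 us critical section, but a user ISR
# disables interrupts for 25 us -- the user, not the RTOS, sets the bound.
budget = worst_case_latency_us(0.5, [4.0, 2.5, 25.0], 1.5)
print(budget)  # 27.0
```

Note that the bound is only as good as the coverage of `disable_windows_us`: one unmeasured `disable_interrupts()` in application code invalidates it, which is exactly why the RTOS alone can't guarantee anything.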
> On 2008-01-20, NewToFPGA <hetzme@yahoo.com> wrote:
>
>> Can an RTOS guarantee that interrupt latency will never exceed a
>> predefined maximum?
>
> No. Not unless the user isn't allowed to disable interrupts or write
> interrupt service routines. Most RTOSes will quote the maximum
> latency that the RTOS causes due to its ISRs and critical sections
> where interrupts are disabled, but the RTOS can't prevent the user
> from adding latency.
Most likely, with a lot of information and some calculation. It depends on the resource management protocol used. For example, many RTOSes use the priority inheritance protocol, for which the worst-case blocking time for each task can be computed from the lengths of the critical sections.
>> if so, where do we define this value in the programming?
>
> You don't find that value in the programming. You find it by
> measuring the execution time of ISRs and critical sections.
Agreed.

---Matthew Hicks
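The calculation Matthew alludes to can be sketched for the basic priority inheritance protocol: a job can be blocked by at most one critical section of each lower-priority task, which gives a simple (conservative) bound. The task set and section lengths below are invented for illustration; this is not any particular RTOS's schedulability analysis.

```python
# Conservative blocking bound under basic priority inheritance:
# a job can be blocked by at most one critical section of each
# lower-priority task. All numbers are hypothetical.

def blocking_bound_us(my_priority, tasks):
    """tasks: list of (priority, longest_critical_section_us) pairs.
    Higher number = higher priority. Returns a worst-case blocking
    time in microseconds for a task at my_priority."""
    return sum(cs for prio, cs in tasks if prio < my_priority)

task_set = [(3, 10.0),   # high-priority task, 10 us critical section
            (2, 40.0),   # medium
            (1, 15.0)]   # low

# The priority-3 task can be blocked by the 40 us and 15 us sections.
print(blocking_bound_us(3, task_set))  # 55.0
```

With a priority-ceiling variant the bound tightens to the single longest lower-priority section, which is one reason the choice of resource management protocol matters so much here.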
On Sat, 19 Jan 2008 19:43:13 -0800, NewToFPGA wrote:

> Can an RTOS guarantee that interrupt latency will never exceed a
> predefined maximum? if so, where do we define this value in the
> programming?
>
> thanks.
No, but a competent programmer with a competently-written RTOS can. As Grant pointed out in his reply, the RTOS can only guarantee that it will respond to OS calls in some maximum time, and that it will only block interrupts for some maximum time. The RTOS writers have no control over what the system programmer does with interrupts or the RTOS, which is where the competent programmer comes in.

--
Tim Wescott
Control systems and communications consulting
http://www.wescottdesign.com

Need to learn how to apply control theory in your embedded system?
"Applied Control Theory for Embedded Systems" by Tim Wescott
Elsevier/Newnes, http://www.wescottdesign.com/actfes/actfes.html
>> Can an RTOS guarantee that interrupt latency will never exceed a
>> predefined maximum? if so, where do we define this value in the
>> programming?
>>
>> thanks.
>No, but a competent programmer with a competently-written RTOS can.
I might point out that this also requires guaranteed instruction timings. They can be long, but they need to be bounded. Caches and pipeline flushes make these hard to determine.

And measuring ISR latency doesn't cut it, unless you measure these effects.

Which is why you see more DSPs and RISCs and fewer pentia in RTOSs.

--
mac the naïf
On Sun, 20 Jan 2008 14:30:37 +0000, Alex Colvin wrote:

>>> Can an RTOS guarantee that interrupt latency will never exceed a
>>> predefined maximum? if so, where do we define this value in the
>>> programming?
>>>
>>> thanks.
>
>> No, but a competent programmer with a competently-written RTOS can.
>
> I might point out that this also requires guaranteed instruction
> timings. They can be long, but they need to be bounded. Caches and
> pipeline flushes make these hard to determine.
>
> And measuring ISR latency doesn't cut it, unless you measure these
> effects.
>
> Which is why you see more DSPs and RISCs and fewer pentia in RTOSs.
Too true -- but nearly all of the RISC chips, and some of the higher-end DSPs, feature pipelines and/or caches, so that's getting harder, too. In theory one could calculate this in a sensible way, but it would take deep knowledge of both the processor and the RTOS at hand, and it would probably be tedious in the extreme.

--
Tim Wescott
Control systems and communications consulting
http://www.wescottdesign.com

Need to learn how to apply control theory in your embedded system?
"Applied Control Theory for Embedded Systems" by Tim Wescott
Elsevier/Newnes, http://www.wescottdesign.com/actfes/actfes.html
Wikipedia says that modern hardware implements interrupt rate
limiting, which reduces the amount of time spent servicing
interrupts and lets the processor spend more time doing useful
work. Doesn't that make the system lose some of the incoming data
before it gets a chance to be processed?
NewToFPGA wrote:
> Wikipedia says that modern hardware implements interrupt rate
> limiting, which reduces the amount of time spent servicing
> interrupts and lets the processor spend more time doing useful
> work. Doesn't that make the system lose some of the incoming data
> before it gets a chance to be processed?
Interrupt rate limiting is best used in combination with a FIFO in the device. Based on interrupt timing, the device can delay interrupts if they occur too close together, and save up the data in the FIFO. When the FIFO is getting full, it no longer delays the interrupt, so the CPU can process all the data at once, reducing overhead.
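Arlet's scheme can be sketched as a toy host-side simulation. Nothing here models real silicon: the FIFO depth, threshold, and event stream are invented, and the holdoff-timeout path real devices also have is omitted for brevity. The point is just that interrupts are held off while data accumulates, so many arrivals share one interrupt, and nothing is dropped as long as the FIFO is drained before it fills.

```python
# Toy model of interrupt coalescing with a device FIFO. An interrupt
# is raised only when the FIFO reaches a threshold; real hardware
# would also raise one after a holdoff timeout (omitted here).

def simulate(arrivals, fifo_depth, threshold):
    """Returns (interrupts_taken, bytes_dropped) for a stream of arrivals."""
    fifo = 0
    interrupts = dropped = 0
    for _ in range(arrivals):
        if fifo == fifo_depth:     # FIFO full: data is lost, as the OP fears
            dropped += 1
            continue
        fifo += 1
        if fifo >= threshold:      # near-full: interrupt fires, CPU drains FIFO
            interrupts += 1
            fifo = 0
    return interrupts, dropped

# 1000 bytes through a 16-deep FIFO, interrupting at 12 entries:
# one interrupt per 12 bytes instead of one per byte, nothing dropped.
print(simulate(1000, 16, 12))  # (83, 0)
```

The headroom between the threshold and the full depth is what absorbs the CPU's response latency; shrink it to zero and the OP's data-loss worry becomes real.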
On Jan 20, 4:38 pm, Arlet Ottens <usene...@c-scape.nl> wrote:
> NewToFPGA wrote:
>> Wikipedia says that modern hardware implements interrupt rate
>> limiting, which reduces the amount of time spent servicing
>> interrupts and lets the processor spend more time doing useful
>> work. Doesn't that make the system lose some of the incoming data
>> before it gets a chance to be processed?
>
> Interrupt rate limiting is best used in combination with a FIFO in
> the device. Based on interrupt timing, the device can delay
> interrupts if they occur too close together, and save up the data in
> the FIFO. When the FIFO is getting full, it no longer delays the
> interrupt, so the CPU can process all the data at once, reducing
> overhead.
Is it safe to assume that if the expected interrupt rate is so high that the FIFO is likely to fill anyway, then there is no advantage to interrupt rate limiting?
On Sun, 20 Jan 2008 17:17:50 -0800, NewToFPGA wrote:

> On Jan 20, 4:38 pm, Arlet Ottens <usene...@c-scape.nl> wrote:
>> NewToFPGA wrote:
>>> Wikipedia says that modern hardware implements interrupt rate
>>> limiting, which reduces the amount of time spent servicing
>>> interrupts and lets the processor spend more time doing useful
>>> work. Doesn't that make the system lose some of the incoming data
>>> before it gets a chance to be processed?
>>
>> Interrupt rate limiting is best used in combination with a FIFO in
>> the device. Based on interrupt timing, the device can delay
>> interrupts if they occur too close together, and save up the data in
>> the FIFO. When the FIFO is getting full, it no longer delays the
>> interrupt, so the CPU can process all the data at once, reducing
>> overhead.
>
> Is it safe to assume that if the expected interrupt rate is so high
> that the FIFO is likely to fill anyway, then there is no advantage to
> interrupt rate limiting?
FIFO'ing data to reduce interrupt rates works in comm systems when each bit of data doesn't have to be responded to in an extremely snappy manner, and when the overhead of entering the interrupt is expensive compared to the cost of processing a byte of data -- the processor then works on the data in small batches instead of dropping everything, doing one byte, picking everything up, dropping everything again, and so on.

--
Tim Wescott
Control systems and communications consulting
http://www.wescottdesign.com

Need to learn how to apply control theory in your embedded system?
"Applied Control Theory for Embedded Systems" by Tim Wescott
Elsevier/Newnes, http://www.wescottdesign.com/actfes/actfes.html
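Tim's trade-off can be put in rough numbers. Both costs below are made up for illustration (a fixed per-interrupt entry/exit cost and a per-byte processing cost); the point is only that batching divides the fixed cost across the batch, while the per-byte work is the same either way.

```python
# Back-of-the-envelope cost of servicing n bytes, with and without
# batching. entry_us (ISR entry/exit overhead) and per_byte_us are
# invented numbers, not measurements of any real system.
import math

def service_cost_us(n_bytes, batch, entry_us=2.0, per_byte_us=0.25):
    interrupts = math.ceil(n_bytes / batch)   # one interrupt per batch
    return interrupts * entry_us + n_bytes * per_byte_us

one_per_byte = service_cost_us(1000, batch=1)    # 1000 interrupts
batched      = service_cost_us(1000, batch=16)   # 63 interrupts
print(one_per_byte, batched)  # 2250.0 376.0
```

When the per-interrupt entry cost is small relative to the per-byte cost, the two numbers converge and the FIFO buys little, which is the "no advantage" case the OP asked about.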