In article <ZnU9i.7141$lz3.1307@newsfe5-win.ntli.net>, Peter Dickerson
says...
> "Gerhard Fiedler" <gelists@gmail.com> wrote in message
> news:1f6z3zi5yco0w.dlg@gelists.gmail.com...
> > On 2007-06-06 23:11:47, Robert Adsett wrote:
> >
> >>>> Not necessarily. Many systems, maybe even most, have varying
> >>>> instruction execution times depending on the instruction, what memory
> >>>> it is executing from and what memory is being accessed. That's
> >>>> without considering instructions that automatically disable interrupts
> >>>> for the next instruction or two.
> >>>
> >>> Just as a note. These latency effects are known as jitter. The
> >>> frequency of the timer interrupt remains constant, but the time
> >>> between interrupts varies.
> >>
> >> Assuming, as another poster reminded me, that you don't reload the timer
> >> in the interrupt. If you do, you get a drift added to the jitter.
> >
> > You may or you may not. There are techniques that make it possible in many
> > cases to reload a timer without causing drift (by adding to or subtracting
> > from the current timer value).
>
> Doesn't this require the timer to be able to perform the add/subtract or the
> CPU to perform read-add-write with predictable timing?
And a timer that continues running after a match. I was thinking of the
other case, but Gerhard raises a good point.
Robert
Reply by Gerhard Fiedler ● June 7, 2007
On 2007-06-07 11:13:13, Peter Dickerson wrote:
>>> Assuming, as another poster reminded me, that you don't reload the timer
>>> in the interrupt. If you do, you get a drift added to the jitter.
>>
>> You may or you may not. There are techniques that make it possible in many
>> cases to reload a timer without causing drift (by adding to or subtracting
>> from the current timer value).
>
> Doesn't this require the timer to be able to perform the add/subtract or the
> CPU to perform read-add-write with predictable timing?
Yes. The read-add/sub-write sequence often has predictable timing, or can
be made to have it (with interrupts disabled -- which will often be the
case in this scenario).
Gerhard
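
A minimal sketch of that reload, assuming a hypothetical 16-bit timer that
counts up, interrupts on overflow, and keeps running afterwards; the TMR
name, its address, and PERIOD are illustrative, not from any particular part:

#include <stdint.h>

#define PERIOD 1000u                         /* desired ticks between interrupts */
#define TMR (*(volatile uint16_t *)0x4000u)  /* hypothetical timer count register */

void timer_isr(void)
{
    /* A naive reload (TMR = 65536 - PERIOD) discards the ticks counted
     * between the overflow and this write, so every period is stretched
     * by the latency and the error accumulates as drift.
     *
     * Subtracting the period from the current count (modulo 2^16 this
     * equals adding the preload 65536 - PERIOD) preserves those ticks,
     * so latency shows up only as jitter. With interrupts disabled, as
     * they normally are inside the ISR, the read-modify-write below has
     * predictable timing on most parts. */
    TMR = (uint16_t)(TMR - PERIOD);
}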
Reply by Peter Dickerson ● June 7, 2007
"Gerhard Fiedler" <gelists@gmail.com> wrote in message
news:1f6z3zi5yco0w.dlg@gelists.gmail.com...
> On 2007-06-06 23:11:47, Robert Adsett wrote:
>
>>>> Not necessarily. Many systems, maybe even most, have varying
>>>> instruction execution times depending on the instruction, what memory
>>>> it is executing from and what memory is being accessed. That's
> >>>> without considering instructions that automatically disable interrupts
>>>> for the next instruction or two.
>>>
>>> Just as a note. These latency effects are known as jitter. The
> >>> frequency of the timer interrupt remains constant, but the time
> >>> between interrupts varies.
>>
>> Assuming, as another poster reminded me, that you don't reload the timer
>> in the interrupt. If you do, you get a drift added to the jitter.
>
> You may or you may not. There are techniques that make it possible in many
> cases to reload a timer without causing drift (by adding to or subtracting
> from the current timer value).
Doesn't this require the timer to be able to perform the add/subtract or the
CPU to perform read-add-write with predictable timing?
--
Peter
Reply by Gerhard Fiedler ● June 7, 2007
On 2007-06-06 23:11:47, Robert Adsett wrote:
>>> Not necessarily. Many systems, maybe even most, have varying
>>> instruction execution times depending on the instruction, what memory
>>> it is executing from and what memory is being accessed. That's
>>> without considering instructions that automatically disable interrupts
>>> for the next instruction or two.
>>
>> Just as a note. These latency effects are known as jitter. The
>> frequency of the timer interrupt remains constant, but the time
>> between interrupts varies.
>
> Assuming, as another poster reminded me, that you don't reload the timer
> in the interrupt. If you do, you get a drift added to the jitter.
You may or you may not. There are techniques that make it possible in many
cases to reload a timer without causing drift (by adding to or subtracting
from the current timer value).
Gerhard
Reply by Robert Adsett ● June 6, 2007
In article <kvz9i.14639$%T3.10443@bignews8.bellsouth.net>, Michael N.
Moran says...
> Robert Adsett wrote:
> > In article <1180936642.272357.147260@q69g2000hsb.googlegroups.com>,
> > says...
> >> If your system does not have to process other (higher level)
> >> interrupts or has areas in the code where interrupts are disabled,
> >> then the time between 2 calls to timestamp() will be constant, as long
> >> as you have a timer interrupt that restarts itself (as indicated by
> >> others as well).
> >
> > Not necessarily. Many systems, maybe even most, have varying
> > instruction execution times depending on the instruction, what memory it
> > is executing from and what memory is being accessed. That's without
> > considering instructions that automatically disable interrupts for the
> > next instruction or two.
>
> Just as a note. These latency effects are known as jitter.
> The frequency of the timer interrupt remains constant,
> but the time between interrupts varies.
Assuming, as another poster reminded me, that you don't reload the timer
in the interrupt. If you do, you get a drift added to the jitter.
Robert
Reply by Michael N. Moran ● June 6, 2007
Robert Adsett wrote:
> In article <1180936642.272357.147260@q69g2000hsb.googlegroups.com>,
> says...
>> If your system does not have to process other (higher level)
>> interrupts or has areas in the code where interrupts are disabled,
>> then the time between 2 calls to timestamp() will be constant, as long
>> as you have a timer interrupt that restarts itself (as indicated by
>> others as well).
>
> Not necessarily. Many systems, maybe even most, have varying
> instruction execution times depending on the instruction, what memory it
> is executing from and what memory is being accessed. That's without
> considering instructions that automatically disable interrupts for the
> next instruction or two.
Just as a note. These latency effects are known as jitter.
The frequency of the timer interrupt remains constant,
but the time between interrupts varies. This assumes that
the total latency is less than the period of the interrupt.
--
Michael N. Moran (h) 770 516 7918
5009 Old Field Ct. (c) 678 521 5460
Kennesaw, GA, USA 30144 http://mnmoran.org
"So often times it happens, that we live our lives in chains
and we never even know we have the key."
The Eagles, "Already Gone"
The Beatles were wrong: 1 & 1 & 1 is 1
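
To make Michael's jitter-versus-drift distinction concrete, one could
record a free-running timestamp at each ISR entry and track the spread of
successive differences. A sketch only; timestamp() is assumed to read a
free-running counter that the ISR never writes:

#include <stdint.h>

extern uint32_t timestamp(void);  /* assumed: reads a free-running counter */

static uint32_t last;
static int primed;
static uint32_t min_delta = UINT32_MAX;
static uint32_t max_delta;

void timer_isr(void)
{
    uint32_t now = timestamp();

    if (primed) {
        uint32_t delta = now - last;  /* unsigned subtraction is wrap-safe */
        if (delta < min_delta) min_delta = delta;
        if (delta > max_delta) max_delta = delta;
    }
    last = now;
    primed = 1;
    /* With a free-running timer, max_delta - min_delta bounds the
     * peak-to-peak jitter, while the average delta stays at the nominal
     * period: jitter, but no drift. */
}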
Reply by Robert Adsett ● June 4, 2007
In article <46639201$0$56949$892e0abb@auth.newsreader.octanews.com>,
Thad Smith says...
> wamba.kuete@gmail.com wrote:
> > Just a generic question regarding interrupt latency. I am using a
> > periodic timer which generates an interrupt at every clock tick. The
> > question I have is whether the interrupt latency will cause the timer
> > interrupt service routine to run at a reduced frequency. What I mean
> > here is as follows. Consider a simple ISR, as shown below:
> >
> > void isr()
> > {
> >     t = timestamp();
> > }
> >
> > Assume the above code runs twice. The question I have is whether the time
> > elapsed between 2 calls of timestamp() in this ISR will include the
> > interrupt latency value, or whether it will correspond to the
> > original timer frequency (i.e. time_elapsed = 1/timer_freq).
>
> That depends on the source of the interrupt. If it comes from a timer
> that continues to run after an interrupt occurs and you do not change
> the timer value, the interrupt rate should be fixed. If instead, you
> restart the timer within the interrupt, then the time between the timer
> interrupt and restarting is lost (although it may be partially
> compensated for).
Good point. I was assuming, without any justification, either an automatic
reload or a free-running timer with an adjustable comparator.
Robert
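
A sketch of the free-running-timer-plus-comparator arrangement, with
illustrative register names (TCNT/OCR, in the style of an output-compare
unit, not any specific part):

#include <stdint.h>

#define PERIOD 1000u                          /* ticks between interrupts */
#define TCNT (*(volatile uint16_t *)0x4000u)  /* free-running counter, never written */
#define OCR  (*(volatile uint16_t *)0x4002u)  /* compare register; a match raises the interrupt */

void compare_isr(void)
{
    /* Advance the compare point by exactly one period. The counter is
     * never touched, so latency in reaching this line delays only this
     * ISR's entry (jitter); the match times, and hence the average
     * interrupt rate, stay exact (no drift). */
    OCR = (uint16_t)(OCR + PERIOD);
}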
Reply by Robert Adsett ● June 4, 2007
In article <1180928828.313834.210960@j4g2000prf.googlegroups.com>,
says...
> On Jun 3, 8:30 pm, Robert Adsett <s...@aeolusdevelopment.com> wrote:
> >
> > Maybe. Over time it should, but there is no a priori reason for the
> > latency to be constant.
> >
> > Robert
> >
> OK, so it's fair to say the following is not correct:
> interrupt latency = (time elapsed between timestamp values in the ISR) -
> timer_period (i.e. 1/timer_freq)
Yes, although it might give you the jitter in the period. I think that
would be simpler to measure on a storage oscilloscope, though.
As has been pointed out, if you use a free-running timer for your
timestamp, and a comparator off of that same timer for the interrupt, you
can measure the latency directly.
Robert
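
A sketch of that direct measurement, reusing the illustrative TCNT/OCR
names from the sketch above:

#include <stdint.h>

#define PERIOD 1000u
#define TCNT (*(volatile uint16_t *)0x4000u)  /* free-running timestamp counter */
#define OCR  (*(volatile uint16_t *)0x4002u)  /* comparator that raised this interrupt */

volatile uint16_t latency_ticks;

void compare_isr(void)
{
    /* The interrupt was requested at the instant TCNT matched OCR, so
     * the counter's advance since that instant is the latency in timer
     * ticks (unsigned subtraction handles wraparound). */
    latency_ticks = (uint16_t)(TCNT - OCR);

    OCR = (uint16_t)(OCR + PERIOD);  /* re-arm for the next period */
}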
Reply by Robert Adsett ● June 4, 2007
In article <1180936642.272357.147260@q69g2000hsb.googlegroups.com>,
says...
> On Jun 4, 5:20 am, wamba.ku...@gmail.com wrote:
> > Just a generic question regarding interrupt latency. I am using a
> > periodic timer which generates an interrupt at every clock tick. The
> > question I have is whether the interrupt latency will cause the timer
> > interrupt service routine to run at a reduced frequency. What I mean
> > here is as follows. Consider a simple ISR, as shown below:
> >
> > void isr()
> > {
> >     t = timestamp();
> > }
> >
> > Assume the above code runs twice. The question I have is whether the time
> > elapsed between 2 calls of timestamp() in this ISR will include the
> > interrupt latency value, or whether it will correspond to the
> > original timer frequency (i.e. time_elapsed = 1/timer_freq). It would
> > make sense that the ISR still runs at the original timer frequency and
> > the interrupt latency only delays the calling of the ISR, not the
> > frequency at which the ISR runs. However, from the measurements it
> > doesn't look like that's the case. Any help is appreciated.
>
> If your system does not have to process other (higher level)
> interrupts or has areas in the code where interrupts are disabled,
> then the time between 2 calls to timestamp() will be constant, as long
> as you have a timer interrupt that restarts itself (as indicated by
> others as well).
Not necessarily. Many systems, maybe even most, have varying
instruction execution times depending on the instruction, what memory it
is executing from and what memory is being accessed. That's without
considering instructions that automatically disable interrupts for the
next instruction or two.
Robert
Reply by Johan Borkhuis ● June 4, 2007
On Jun 4, 5:20 am, wamba.ku...@gmail.com wrote:
> Just a generic question regarding interrupt latency. I am using a
> periodic timer which generates an interrupt at every clock tick. The
> question I have is whether the interrupt latency will cause the timer
> interrupt service routine to run at a reduced frequency. What I mean
> here is as follows. Consider a simple ISR, as shown below:
>
> void isr()
> {
>     t = timestamp();
> }
>
> Assume the above code runs twice. The question I have is whether the time
> elapsed between 2 calls of timestamp() in this ISR will include the
> interrupt latency value, or whether it will correspond to the
> original timer frequency (i.e. time_elapsed = 1/timer_freq). It would
> make sense that the ISR still runs at the original timer frequency and
> the interrupt latency only delays the calling of the ISR, not the
> frequency at which the ISR runs. However, from the measurements it
> doesn't look like that's the case. Any help is appreciated.
If your system does not have to process other (higher level)
interrupts or has areas in the code where interrupts are disabled,
then the time between 2 calls to timestamp() will be constant, as long
as you have a timer interrupt that restarts itself (as indicated by
others as well).
If you have other higher-level interrupts and/or interrupt-disabled
areas, then the moment timestamp() is called can be delayed by a
total of (time needed in interrupt-disabled areas) + (total time of
all higher-level ISRs).
So, depending on the system configuration, you may or may not see
variation in the time between two calls to timestamp().
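For instance (illustrative numbers only): if the longest interrupt-disabled
section runs 10 µs and two higher-priority ISRs can take up to 20 µs and
15 µs, a given call to timestamp() can be delayed by as much as
10 + 20 + 15 = 45 µs, while the next call may not be delayed at all, so
successive intervals can vary by up to 45 µs either way.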
Kind regards,
Johan Borkhuis