
Software real-time clock using timer unit

Started by alex99 November 8, 2009
I am trying to implement a software real-time clock on ARM using
the timer module. 

I am pretty much unable to do it at this point and was hoping that someone
could point me in the right direction -- perhaps even coded examples that
could serve as a way to better understand it. I know the C language but am a
newbie in micro-controllers. I have to display the time using one of the GPIO
ports.

Thanks in advance!

AJ
---------------------------------------		
This message was sent using the comp.arch.embedded web interface on
http://www.EmbeddedRelated.com
On Sun, 08 Nov 2009 09:35:23 -0600, alex99 wrote:

> I am trying to implement a software real-time clock on ARM using
> the timer module.
>
> I am pretty much unable to do it at this point and was hoping that someone
> could point me in the right direction -- perhaps even coded examples that
> could serve as a way to better understand it. I know the C language but am a
> newbie in micro-controllers. I have to display the time using one of the
> GPIO ports.

It would help if you told us which ARM controller you're using.
On Sun, 08 Nov 2009 09:35:23 -0600, "alex99" <alex.xander99@gmail.com>
wrote:

>I am trying to implement a software real-time clock on ARM using
>the timer module.
>
>I am pretty much unable to do it at this point and was hoping that someone
>could point me in the right direction -- perhaps even coded examples that
>could serve as a way to better understand it. I know the C language but am a
>newbie in micro-controllers. I have to display the time using one of the
>GPIO ports.
Decide on a useful interval. For a human-space display, a 1 Hz update is
plenty. Then, given your device's clock, PLL setting, and peripheral clock
divider, select a prescaler and match register pair that results in a 1 Hz
event. Set the timer's control registers appropriately and, depending on the
ARM, do the necessary magic to connect the timer match event to an interrupt.

Keep the running counts of hours/minutes/seconds in the foreground process,
with the timer interrupt service routine just setting a flag that "it
happened." That keeps the interrupt nice and short. The foreground process
resets the flag, does the arithmetic to update the counts, and displays the
result.

--
Rich Webb     Norfolk, VA
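To make that concrete, here is a minimal C sketch of the flag-based split. It
is not tied to any particular ARM: timer_init_1hz(), timer_ack_match(), and
display_time() are placeholder names standing in for the real timer setup,
interrupt acknowledge, and GPIO display code.

#include <stdint.h>

/* Placeholders for part-specific code -- names are hypothetical. */
static void timer_init_1hz(void)  { /* prescaler + match register for 1 Hz    */ }
static void timer_ack_match(void) { /* clear the timer's match interrupt flag */ }
static void display_time(uint8_t h, uint8_t m, uint8_t s) { (void)h; (void)m; (void)s; }

static volatile uint8_t tick_1hz;              /* set in ISR, cleared in main */
static uint8_t hours, minutes, seconds;

void timer_match_isr(void)        /* hook to the timer match interrupt vector */
{
    timer_ack_match();
    tick_1hz = 1;                 /* just note that "it happened" */
}

int main(void)
{
    timer_init_1hz();
    for (;;) {
        if (tick_1hz) {           /* foreground does the arithmetic */
            tick_1hz = 0;
            if (++seconds == 60) {
                seconds = 0;
                if (++minutes == 60) {
                    minutes = 0;
                    if (++hours == 24)
                        hours = 0;
                }
            }
            display_time(hours, minutes, seconds);
        }
    }
}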
Rich Webb wrote:
> Keep the running counts of hours/minutes/seconds in the foreground
> process, with the timer interrupt service routine just setting a flag
> that "it happened." That keeps the interrupt nice and short. The
> foreground process resets the flag, does the arithmetic to update the
> counts, and displays the result.
I would definitely recommend a different approach - do your clock accounting
in the interrupt service routine. If you only set a flag, there is no need for
an interrupt, as you may equally well test the timer event flag in your
software; the sole function of the interrupt routine would then be setting a
software flag when the timer sets its hardware flag - the extra software flag
is not needed at all.
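A rough sketch of that alternative, with the accounting moved into a
(hypothetical) 1 Hz timer ISR so the foreground only has to read the result:

#include <stdint.h>

static volatile uint8_t hours, minutes, seconds;  /* read by the display code */

void timer_match_isr(void)   /* 1 Hz match interrupt; clear the hardware match
                                flag here first, however your timer requires  */
{
    if (++seconds == 60) {
        seconds = 0;
        if (++minutes == 60) {
            minutes = 0;
            if (++hours == 24)
                hours = 0;
        }
    }
}

At 1 Hz the chance of the foreground catching the three bytes mid-update is
tiny, but a simple read/compare/re-read loop removes it entirely.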
In article <hd94tg$iit$1@julia.coi.pw.edu.pl>, easy2find@the.net says...
> Rich Webb wrote:
> > Keep the running counts of hours/minutes/seconds in the foreground
> > process, with the timer interrupt service routine just setting a flag
> > that "it happened." That keeps the interrupt nice and short. The
> > foreground process resets the flag, does the arithmetic to update the
> > counts, and displays the result.
>
> I would definitely recommend a different approach - do your clock
> accounting in the interrupt service routine. If you only set a flag, there
> is no need for an interrupt, as you may equally well test the timer event
> flag in your software; the sole function of the interrupt routine would
> then be setting a software flag when the timer sets its hardware flag - the
> extra software flag is not needed at all.
That's the way I handle the software RTC. In my timer interrupt, I increment a
tick count, which is a short integer. I generally have between 100 and 480
ticks per second. When the tick counter reaches the proper value, I increment
a long integer which is the Unix seconds count and set the tick count to zero.
That is the minimum timer interrupt handler--and generally takes only a few
microseconds. If there are other operations that need to be done at the tick
interval with low jitter, they may also be done in the interrupt handler.

The foreground routine calls a function to retrieve the integer seconds and
does all conversions between that long integer and various time and date
displays using the standard C time conversion routines. The function to return
the seconds has to disable interrupts when fetching the seconds, since that is
not an atomic operation on the 16-bit MSP430.

Mark Borgerson
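A sketch along those lines, with interrupts_off()/interrupts_on() standing in
for whatever intrinsics your compiler provides (e.g. __disable_interrupt() and
__enable_interrupt() on the MSP430 toolchains), and 128 ticks per second
chosen arbitrarily:

#include <stdint.h>

#define TICKS_PER_SECOND 128u            /* whatever rate the timer is set for */

/* Placeholders for the compiler's interrupt-control intrinsics. */
static void interrupts_off(void) { /* e.g. __disable_interrupt() */ }
static void interrupts_on(void)  { /* e.g. __enable_interrupt()  */ }

static volatile uint16_t ticks;
static volatile uint32_t unix_seconds;   /* seeded from an epoch at startup */

void timer_tick_isr(void)                /* minimum tick handler */
{
    if (++ticks >= TICKS_PER_SECOND) {
        ticks = 0;
        unix_seconds++;
    }
}

uint32_t rtc_get_seconds(void)           /* called from the foreground */
{
    uint32_t s;
    interrupts_off();         /* 32-bit read is not atomic on a 16-bit MCU */
    s = unix_seconds;
    interrupts_on();
    return s;
}

The foreground can then convert the returned value to a time_t and feed it
through the standard gmtime()/strftime() routines to build the various date
and time displays.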
On Mon, 09 Nov 2009 14:19:33 +0100, BlueDragon <easy2find@the.net>
wrote:

>Rich Webb wrote:
>> Keep the running counts of hours/minutes/seconds in the foreground
>> process, with the timer interrupt service routine just setting a flag
>> that "it happened." That keeps the interrupt nice and short. The
>> foreground process resets the flag, does the arithmetic to update the
>> counts, and displays the result.
>
>I would definitely recommend a different approach - do your clock
>accounting in the interrupt service routine. If you only set a flag, there
>is no need for an interrupt, as you may equally well test the timer event
>flag in your software; the sole function of the interrupt routine would then
>be setting a software flag when the timer sets its hardware flag - the extra
>software flag is not needed at all.
That's a fair cop, given the OP's minimalist requirements. As with Mark, I'm
typically running a higher rate tick in the background but, yes, a simple 1 Hz
ticker could be tested by examining (and resetting) the match flag.

--
Rich Webb     Norfolk, VA
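For completeness, a sketch of that interrupt-free version: poll and clear the
match flag from the main loop. The two flag helpers are placeholders for the
real register accesses on whichever ARM is in use.

#include <stdbool.h>
#include <stdint.h>

/* Placeholders for the real timer register accesses. */
static bool timer_match_flag(void)       { return false; /* read match flag */ }
static void timer_match_flag_clear(void) { /* write-1-to-clear, or whatever  */ }

static uint32_t seconds_elapsed;

void clock_poll(void)       /* call from the main loop; no interrupt involved */
{
    if (timer_match_flag()) {
        timer_match_flag_clear();
        seconds_elapsed++;  /* roll into minutes/hours as in the earlier sketches */
    }
}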
On Sun, 08 Nov 2009 09:35:23 -0600, "alex99" <alex.xander99@gmail.com>
wrote:

>I am trying to implement a software real-time clock on ARM using
>the timer module.
>
>I am pretty much unable to do it at this point and was hoping that someone
>could point me in the right direction -- perhaps even coded examples that
>could serve as a way to better understand it. I know the C language but am a
>newbie in micro-controllers. I have to display the time using one of the
>GPIO ports.
One way of implementing this is assuming that a timer interrupt is available
say at 1234 Hz, thus the average time between interrupts is
810372,77147487844408427876823339 ns.

Just use an ISR, which adds 810373 to a 32 bit nanosecond counter. Each time
the counter overflows (mod 1E9), add one to the second counter.

If the hardware does not support div/mod instructions usable in ISRs, just use
some base2 operations, which can be implemented with shifts and mask
instructions.

Paul
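A sketch of that accumulator, using the rounded 810373 ns figure from above.
Subtracting 1E9 rather than resetting to zero keeps the fractional remainder,
so the only long-term error is the rounding of the tick period (plus the
crystal itself, of course).

#include <stdint.h>

#define TICK_NS 810373u            /* ~1/1234 s, rounded to whole nanoseconds */

static volatile uint32_t ns_accum; /* nanoseconds into the current second */
static volatile uint32_t seconds;

void timer_tick_isr(void)          /* runs at the 1234 Hz tick rate */
{
    ns_accum += TICK_NS;
    if (ns_accum >= 1000000000u) { /* a full second has accumulated */
        ns_accum -= 1000000000u;   /* keep the remainder (mod 1E9) */
        seconds++;
    }
}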
In article <99igf5pkf6kfbllsu4c5kjglcnj27kb2dd@4ax.com>, keinanen@sci.fi 
says...
> On Sun, 08 Nov 2009 09:35:23 -0600, "alex99" <alex.xander99@gmail.com>
> wrote:
>
> >I am trying to implement a software real-time clock on ARM using
> >the timer module.
> >
> >I am pretty much unable to do it at this point and was hoping that someone
> >could point me in the right direction -- perhaps even coded examples that
> >could serve as a way to better understand it. I know the C language but am
> >a newbie in micro-controllers. I have to display the time using one of the
> >GPIO ports.
>
> One way of implementing this is assuming that a timer interrupt is
> available say at 1234 Hz, thus the average time between interrupts is
> 810372,77147487844408427876823339 ns.
Hmmm, do you think that interval is sufficiently precise? ;-) Given the tolerances and temperature coefficients of standard crystals, 6 significant figures ought to be enough.
>
> Just use an ISR, which adds 810373 to a 32 bit nanosecond counter.
> Each time the counter overflows (mod 1E9), add one to the second
> counter.
Is this supposed to be a joke? Why not just add 1 to a static short integer in the ISR and increment the seconds count each time the variable 'rolls over' at 1234?
>
> If the hardware does not support div/mod instructions usable in ISRs,
> just use some base2 operations, which can be implemented with shifts
> and mask instructions.
That sounds like a special-case software floating point function.

I guess you can make a simple timing chore as complex as you like. If you keep
this up, you'll get to the 18.2 ms tick interval of the IBM PC. ;-)

Mark Borgerson
On Mon, 9 Nov 2009 16:49:25 -0800, Mark Borgerson
<mborgerson@comcast.net> wrote:

>In article <99igf5pkf6kfbllsu4c5kjglcnj27kb2dd@4ax.com>, keinanen@sci.fi
>says...
>> On Sun, 08 Nov 2009 09:35:23 -0600, "alex99" <alex.xander99@gmail.com>
>> wrote:
>>
>> >I am trying to implement a software real-time clock on ARM using
>> >the timer module.
>> >
>> >I am pretty much unable to do it at this point and was hoping that someone
>> >could point me in the right direction -- perhaps even coded examples that
>> >could serve as a way to better understand it. I know the C language but am
>> >a newbie in micro-controllers. I have to display the time using one of the
>> >GPIO ports.
>>
>> One way of implementing this is assuming that a timer interrupt is
>> available say at 1234 Hz, thus the average time between interrupts is
>> 810372,77147487844408427876823339 ns.
>
>Hmmm, do you think that interval is sufficiently precise? ;-)
>Given the tolerances and temperature coefficients of standard
>crystals, 6 significant figures ought to be enough.
>
>> Just use an ISR, which adds 810373 to a 32 bit nanosecond counter.
>> Each time the counter overflows (mod 1E9), add one to the second
>> counter.
>
>Is this supposed to be a joke? Why not just add 1 to a static
>short integer in the ISR and increment the seconds count each time
>the variable 'rolls over' at 1234?
What if the interrupt occurred at a 1234.567890123456 Hz rate? That would
require weeks to get a usable reading.
>>
>> If the hardware does not support div/mod instructions usable in ISRs,
>> just use some base2 operations, which can be implemented with shifts
>> and mask instructions.
>
>That sounds like a special-case software floating point function.
>
>I guess you can make a simple timing chore as complex as you like.
>If you keep this up, you'll get to the 18.2 ms tick interval of
>the IBM PC. ;-)
Some operating systems (Windows NT at least) allowed you to specify how many
time units (e.g. 100 ns) occurred between each timer interrupt.

Paul
On Nov 9, 5:31 pm, Mark Borgerson <mborger...@comcast.net> wrote:
> ....
> The foreground routine calls a function to retrieve the integer seconds
> and does all conversions between that long integer and various time
> and date displays using the standard C time conversion routines.
> The function to return the seconds has to disable interrupts when
> fetching the seconds, since that is not an atomic operation on the
> 16-bit MSP430.
If I understood you correctly, masking the interrupts may be unnecessary. This
is pretty much how time is kept on the PPC (power architecture, as they now
have it); two 32-bit registers accessing a 64-bit free-running counter. The
way to do it is simple: read the MS part, then read the LS part, then the MS
part again; if the two MS reads differ, repeat.

Obviously the "correct" way to get the real time is the one you describe: just
keep the time as simple as possible and calculate whatever is needed whenever
it is needed. In DPS, I use the mentioned PPC timebase registers plus a known
moment derived at boot time and later each hour or so from some platform
dependent RTC part (perhaps none... just using NTP to get the real time over
the net then).

Dimiter

------------------------------------------------------
Dimiter Popoff               Transgalactic Instruments
http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/sets/72157600228621276/

Original message:
http://groups.google.com/group/comp.arch.embedded/msg/261c3d6ac7a3ab75?dmode=source
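A sketch of that double-read, with read_tb_upper()/read_tb_lower() standing in
for the PPC timebase reads (or any other split access to a free-running 64-bit
counter):

#include <stdint.h>

/* Placeholders for the real accesses to the two counter halves. */
static uint32_t read_tb_upper(void) { return 0; }
static uint32_t read_tb_lower(void) { return 0; }

uint64_t read_timebase(void)
{
    uint32_t hi, lo, hi2;
    do {
        hi  = read_tb_upper();   /* MS part */
        lo  = read_tb_lower();   /* LS part */
        hi2 = read_tb_upper();   /* MS part again */
    } while (hi != hi2);         /* LS half wrapped between reads -- retry */
    return ((uint64_t)hi << 32) | lo;
}

No interrupt masking is needed because the retry loop detects the one case
where the low half rolls over between the two reads.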
