
Real-time clocks: Does anybody really know what time it is?

Jason Sachs • May 29, 2011 • 8 comments

We recently started writing software to make use of a real-time clock IC, and found to our chagrin that the chip was missing a rather useful function, namely elapsed time in seconds since the standard epoch (January 1, 1970, midnight UTC).

Let me back up a second.

A real-time clock/calendar (RTC) is a micropower chip with an on-board oscillator that keeps counting time, independent of main system power. Usually this is done with a lithium battery that can power the RTC for years, so that even when the rest of the system is powered down, there is still an accurate time reference.

PCs have had RTCs since the 1980s. One part from that era was the Motorola MC146818, which is no longer manufactured, although you can still look at the datasheet on Freescale's website.

The basic idea is this: You hook up a crystal oscillator that has a frequency such that 1 second is exactly 2^N cycles for some value of N. The MC146818, for example, takes 32.768kHz crystals (N=15) as well as 1.048576MHz (N=20) or 4.194304MHz (N=22). Then the chip has a binary counter that divides down the clock frequency to 1Hz, and the chip also keeps track of a number of registers that your CPU can read and write, which store the current time in seconds, minutes, hours, days, months, and years.
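To see why the power-of-two frequency matters: a plain binary counter clocked by the crystal overflows exactly once per second, with no division logic needed. Here's a rough software model of what the chip's prescaler does (one_second_elapsed() is a stand-in for the hardware that increments the calendar registers):

    #include <stdint.h>

    extern void one_second_elapsed(void);  /* stand-in: bumps the calendar registers */

    static uint16_t prescaler;  /* 15-bit counter for a 32.768kHz crystal (N=15) */

    /* Called once per crystal cycle, i.e. 32768 times per second. */
    void on_crystal_edge(void)
    {
        prescaler = (prescaler + 1) & 0x7FFF;   /* wraps at 2^15 = 32768 */
        if (prescaler == 0)
            one_second_elapsed();
    }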

Several manufacturers (Maxim / Dallas Semiconductor, ST, TI, Epson, Seiko, EM Microelectronic, etc.) make RTC chips today. Most of them use 32.768kHz crystals, because they're smaller and lower-power than higher-frequency crystals. Today's RTC chips are accessible via I2C or SPI buses rather than a parallel bus. But otherwise not much has changed. You read the registers and you know what time it is.

Sounds great, right?

Well, let's look at the things you might want to do in your embedded system that have to do with timekeeping.

For clarity, we'll define some terms. (These aren't standard terms; if anyone knows of better ones, let me know and I'll edit. The word "event" is more frequently used, but usually refers to a specific instant in time and a meaning associated with that instant. Here we're not talking about any meaning associated with a specific time.)

An instant is a reference to a moment in time: as an example, 3:32 AM UTC, on Thursday May 26, 2011.

A measured instant (more frequently known as a timestamp) is a reference to a moment in time at which we check what time it is.

A scheduled instant is a reference to a moment in time which we choose (rather than measure), based either on a particular calendar date/time, or relative to another instant (measured or scheduled).

Got it?

So here are typical timekeeping tasks:

  • Measure the current time, creating a measured instant.
  • Store an instant for later use.
  • Display an instant in human-readable format.
  • Compute a scheduled instant based on a particular calendar date/time.
  • Compare instants (determine which one happens before the other).
  • Compute elapsed time between two instants. (T1 - T2 = 3.625 seconds)
  • Compute a new scheduled instant that is a specified time before or after another instant. (set T2 = T1 + 4 hours 3 minutes)
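As a concrete illustration, here's how the comparison and interval tasks look when instants are stored in offset form, using the standard C library's time_t (a minimal sketch; it assumes time_t counts whole seconds, which POSIX guarantees but ISO C does not):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* Measure the current time, creating a measured instant. */
        time_t t1 = time(NULL);

        /* Compute a scheduled instant 4 hours 3 minutes after t1
           (assumes time_t is in seconds, true on POSIX systems). */
        time_t t2 = t1 + 4 * 3600 + 3 * 60;

        /* Compare instants. */
        if (t1 < t2)
            printf("t1 happens before t2\n");

        /* Compute elapsed time between two instants. */
        printf("elapsed: %.0f seconds\n", difftime(t2, t1));
        return 0;
    }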

That pretty much covers anything you'd want to do with a processor that has access to a real-time clock.

What we haven't defined yet are two very important issues:

  • how an "instant" is stored in memory
  • how computations on calendar time are handled in the face of "local time" and other issues.

Timekeeping encoding format

There are two basic ways to encode an instant. One is as an offset from an epoch, which is a standard reference instant. The most well-known and commonly used epoch is the Unix epoch, or January 1, 1970, at midnight UTC.

The other way to encode an instant is as a data structure, with fields representing years, months, days, hours, minutes, seconds, and subseconds.

With a certain amount of computing energy, it is possible to translate an instant from the offset encoding to the structured time encoding. It's grungy code that can be difficult to get right, and it may take hundreds or even a few thousand CPU cycles to complete, because different months have different numbers of days and because of the way leap years work.
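If you have a C library handy, you usually don't have to write the grungy code yourself: gmtime() converts from offset encoding to structured encoding, and the nonstandard-but-widespread timegm() goes the other way (mktime() does too, but it interprets the structure as local time). A minimal sketch:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* Offset -> structure: break a Unix timestamp into calendar fields (UTC).
           1306380720 is the example instant above: 3:32 AM UTC, May 26, 2011. */
        time_t offset = 1306380720;
        struct tm fields = *gmtime(&offset);

        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", &fields);
        printf("%s\n", buf);

        /* Structure -> offset: timegm() is a glibc/BSD extension, not ISO C;
           mktime() is standard but uses local time, not UTC. */
        time_t back = timegm(&fields);
        printf("round trip %s\n", back == offset ? "ok" : "failed");
        return 0;
    }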

Let's do a thought experiment.

Suppose you have an embedded system which cannot afford to do the arithmetic needed to translate between the two encodings, either because there's not enough free computing time, or because your system doesn't have an operating system (most OSes have functions to translate between structure and offset encodings) and you don't have enough resources to implement the translation properly.

If the RTC encodes time instants as a structure, your system can perform all of the above timekeeping tasks except the last two (compute elapsed time between instants, and compute a new scheduled instant as an offset relative to another instant), because they involve interval arithmetic that requires handling varying month lengths and leap years.

If the RTC encodes time instants as an offset, your system can perform all of the above timekeeping tasks except the third and fourth (displaying an instant in human-readable format, and computing an instant based on a particular calendar date and time), which involve calendars and dealing with human beings. And here we can make one more assumption: almost any task involving interaction with human beings is infrequent (< 10Hz) and can therefore be slow (at least, slow relative to today's processor speeds).

To recap: If we can't translate between formats, then with the structured time format we can't easily do interval arithmetic between instants, whereas with the offset format the operations we can't easily do are the slow, human-facing ones. So if available processor time is the only reason we're not translating between formats, the offset format wins: it needs to be translated only rarely.

My suggestion, therefore, is that the proper encoding for timekeeping is offset encoding, and if your system needs to do those operations that interact with human beings, then and only then should time instants be converted to/from a calendar structure.

But when you look at the available RTC chips out there, almost all of them store time in a calendar structure. Very few store time as a relative offset, and none that I've found allow you to use both.

So we throw up our hands, and buy a real-time clock chip that stores time in a calendar structure, and write the grungy code (or use someone else's) to convert time to/from the relative offset format for dealing with intervals between instants.

Now we still have to read the current time from the RTC chip. To do this, we have to do The Dance.

The Dance

The Dance is the act of a processor communicating with a peripheral chip to get it to operate the way the system needs to operate. Sometimes the chip in question is designed well, and The Dance is simple and quick. Other times it's clumsy and awkward. An engineer writing software has to learn The Dance from the part's datasheet, sometimes with a lot of trial and error.

In the case of an RTC chip with calendar fields, The Dance involves reading out several fields of counter data which are being continuously updated. The best type of RTC chip would allow you to trigger a "snapshot" or "capture" operation, where the values in the counter are atomically copied to registers in the RTC that can then be read out at the processor's leisure. This is sort of like using a camera to take a picture of a clock: the time shown on the clock is changing, but a picture of the clock is constant.

I have not found an RTC which allows you to capture the current time and keep it indefinitely.

Some chips, such as the NXP PCF2123 and the Maxim/Dallas DS3234 (both SPI-based RTCs), capture the current time at the beginning of the SPI transaction (when the chip select is lowered) and maintain this captured snapshot until the transaction ends (when the chip select is raised). This way, if you read all the fields of the current time in one SPI transaction, they're an atomically-consistent snapshot of the time when the chip select was lowered. This is great if you can do it. Sometimes it's difficult to get a processor to do this, because it has to do The Dance with other chips, and it doesn't have time to read 6 or 7 bytes in one shot.
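Here's a sketch of what that single-transaction read might look like, with hypothetical spi_select()/spi_transfer() helpers standing in for your platform's SPI driver (register addresses, field order, and which bits are control bits are all chip-specific; check the datasheet):

    #include <stdint.h>

    /* Hypothetical SPI helpers -- substitute your platform's equivalents. */
    extern void    spi_select(void);           /* lower chip select */
    extern void    spi_deselect(void);         /* raise chip select */
    extern uint8_t spi_transfer(uint8_t out);  /* exchange one byte */

    static unsigned bcd_to_bin(uint8_t b) { return (b >> 4) * 10u + (b & 0x0F); }

    /* Read all clock fields in a single SPI transaction, so an RTC that
       snapshots its counters on chip select (e.g. PCF2123, DS3234) returns
       an atomically-consistent time. 0x00 is a placeholder for the chip's
       "read seconds register, auto-increment" command byte. */
    void rtc_read_time(unsigned fields[7])
    {
        spi_select();                   /* snapshot is taken here */
        spi_transfer(0x00);             /* command/address byte (chip-specific) */
        for (int i = 0; i < 7; i++)     /* seconds ... year, one burst read */
            fields[i] = bcd_to_bin(spi_transfer(0x00));
        spi_deselect();                 /* snapshot is released here */
    }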

Other chips, such as the Microchip MCP7941x series, do not have any automatic "snapshot" feature. This requires more care: if it's 11:59:59pm on Dec. 31, 2011, you might accidentally read 12:59:59am Jan 1 2012, or 11:59:59pm Dec 1 2012, or something else, depending on the order in which the fields are read.

The usual way to handle this is to read the seconds field first, then the other fields, then the seconds field again: if the two seconds readings match, you're OK; otherwise read all the fields again until they do.
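In code, the read-twice loop looks something like this (rtc_read_field() is a hypothetical single-register read for your I2C or SPI driver; field values are left in raw register form):

    #include <stdint.h>

    /* Hypothetical single-register read -- substitute your bus driver. */
    extern uint8_t rtc_read_field(uint8_t reg);

    enum { REG_SEC, REG_MIN, REG_HOUR, REG_DAY, REG_MONTH, REG_YEAR, NFIELDS };

    /* Read all calendar fields consistently from an RTC with no snapshot
       feature: retry until the seconds register is unchanged, which proves
       no rollover occurred mid-read. */
    void rtc_read_consistent(uint8_t fields[NFIELDS])
    {
        uint8_t sec_after;
        do {
            fields[REG_SEC] = rtc_read_field(REG_SEC);
            for (uint8_t i = REG_MIN; i < NFIELDS; i++)
                fields[i] = rtc_read_field(i);
            sec_after = rtc_read_field(REG_SEC);
        } while (sec_after != fields[REG_SEC]);
    }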

What this doesn't give you is resolution: none of the RTCs I found gave read access to the internal counters that increment faster than 1 second. So you don't know whether it's 11:59:59pm exactly, or 753 milliseconds after that. There's basically a 1-second window, and where you are within that window is uncertain.

For some applications, that doesn't matter. If you're displaying the time on a screen, you usually don't care. But if you're trying to measure higher-precision events, you have a couple of options to maintain accuracy:

  • poll to determine the beginning of a 1-second interval: keep reading the RTC clock until the seconds field changes. (this lets you know the time is at the beginning of the next 1-second interval, at least within the time needed to read the RTC clock.)
  • interrupt on the beginning of a 1-second interval: if the RTC has a digital signal that updates at 1Hz (most seem to do this), you can feed this into an interrupt input pin on your processor.

If you want the time with higher resolution, you have to go through a more complex Dance: you need to somehow combine the knowledge of when each 1-second interval begins, with a higher-resolution timebase. A timer/counter in your processor may suffice; otherwise, if the RTC has a signal out that is the raw 32kHz waveform (some do this), you can feed that into a timer/counter pin in your processor.
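A sketch of that combination, assuming the RTC's 1Hz output is wired to an interrupt pin and the processor has a free-running 1MHz timer (all names here are hypothetical placeholders for your platform's facilities):

    #include <stdint.h>

    extern uint32_t timer_count(void);   /* free-running timer, assumed 1MHz */

    static volatile uint32_t g_seconds;         /* seconds since the epoch */
    static volatile uint32_t g_tick_at_second;  /* timer count at last 1Hz edge */

    /* Interrupt handler wired to the RTC's 1Hz output: latch the timer count
       at the instant each new second begins. Assumes g_seconds was synced
       from the RTC's registers once at startup. */
    void rtc_1hz_isr(void)
    {
        g_tick_at_second = timer_count();
        g_seconds++;
    }

    /* Current time in microseconds since the epoch. */
    uint64_t time_now_us(void)
    {
        uint32_t s, frac;
        do {                    /* retry if the 1Hz interrupt fired mid-read */
            s    = g_seconds;
            frac = timer_count() - g_tick_at_second;  /* us, since timer is 1MHz */
        } while (s != g_seconds);
        return (uint64_t)s * 1000000u + frac;
    }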

In any case, it's not always that simple to do The Dance with a real-time clock, and if you're not careful you can get an incorrect time reading on rare occasions that can be difficult to detect and fix.

Time zones: does anyone really care?

Finally there's the issue of local time.

I strongly suggest that the right way to design a timekeeping system is to use Coordinated Universal Time (UTC) as a timebase, and then convert from there to local time when needed.

(And don't forget: not all time zones have integer hour offsets from UTC: Venezuela, India, Afghanistan, and Iran all have half-hour offsets, and Nepal and some islands of New Zealand have 45-minute offsets. So don't constrain your users to integer-hour time zone offsets.)
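In other words, if you store a time-zone offset at all, minutes (or seconds) is the safe unit. A hypothetical sketch:

    #include <stdint.h>
    #include <time.h>

    /* UTC offset in minutes, not hours: handles India (+330), Nepal (+345),
       and the Chatham Islands (+765) as easily as whole-hour zones. */
    typedef int16_t utc_offset_minutes_t;

    /* Apply an offset to a UTC instant to get local wall-clock seconds. */
    static time_t to_local(time_t utc, utc_offset_minutes_t offset)
    {
        return utc + (time_t)offset * 60;
    }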

Otherwise it can get confusing: many areas observe a 1-hour shift in time for part of the year ("Daylight Saving Time" in the US). The exact date on which this occurs is determined by governments and varies between countries, and there is one hour during the year that repeats: when clocks are about to be set back in the fall, 2:58am is followed by 2:59am, then at 3:00am the clock becomes 2:00am, and 58 minutes later it's 2:58am again. Imagine that an important event is going to occur at 2:58am local time. Is that the 2:58am before the clock adjustment, or the one after?
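The C library exposes exactly this ambiguity: when converting a local calendar time with mktime(), the tm_isdst field is what disambiguates the repeated hour. A sketch, using POSIX setenv()/tzset() and the US fall-back of November 6, 2011 (US rules repeat the 1:00-2:00am hour rather than 2:00-3:00am):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        setenv("TZ", "America/New_York", 1);   /* POSIX; assumes a zoneinfo DB */
        tzset();

        /* 1:30am on Nov 6, 2011 happens twice in US Eastern time. */
        struct tm first  = { .tm_year = 111, .tm_mon = 10, .tm_mday = 6,
                             .tm_hour = 1, .tm_min = 30, .tm_isdst = 1 };
        struct tm second = first;
        second.tm_isdst = 0;                   /* the repeat, after fall-back */

        printf("the two 1:30am instants differ by %.0f seconds\n",
               difftime(mktime(&second), mktime(&first)));  /* prints 3600 */
        return 0;
    }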

I use my GPS and camera for documentation purposes; both use local time rather than UTC, and both require me to manually change the time forward or backward for Daylight Saving Time. Usually I forget until a month or two later, at which point I have a bunch of GPS waypoints and pictures that are 1 hour off. In some cases the 1-hour error is important. I really wish these consumer devices just used UTC internally; then they wouldn't cause me all this hassle.

The real time nitpickers also have to worry about leap seconds: the Earth's rotation isn't exactly 86,400 seconds per day, so every few years a leap second is inserted into UTC to keep it aligned with the Earth's rotation. (Leap days are a separate correction: the year isn't exactly 365 days, so we add a leap day every 4 years, except in century years that aren't multiples of 400, like 1800, 1900, and 2100.)

What a headache.


My dream RTC

The perfect RTC for me (listen up, chip manufacturers!) would be one with these features:

  • time read from the RTC is encoded as an offset (# of seconds) since the 1970 epoch
  • there is a subseconds field (the 15-bit counter that updates at 32.768kHz)
  • there is an explicit way to capture the current time so that I can read out the entire counter at my leisure, and have it be consistent.
  • alarms, if any, would be set using offset encoding.


That's it. Simple. I just have to store 32 bits of data for the number of seconds since 1970 (which, if I use an unsigned 32-bit integer, keeps me going until about the year 2106; if it's signed, it overflows in 2038), and maybe another 16 bits if I care about higher precision.
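For the curious, here's what reading and using that dream format might look like, as a sketch (the struct layout is my invention, matching the wish list above):

    #include <stdint.h>

    /* The dream RTC's entire state, read out after a capture command. */
    struct rtc_instant {
        uint32_t seconds;   /* seconds since 1970-01-01 00:00 UTC (unsigned:
                               good until roughly the year 2106) */
        uint16_t subsec;    /* 15-bit counter, 1 LSB = 1/32768 second */
    };

    /* Convert to 64-bit units of 1/32768 s: interval arithmetic becomes
       a plain integer subtraction, no calendar code required. */
    static uint64_t rtc_to_ticks(struct rtc_instant t)
    {
        return ((uint64_t)t.seconds << 15) | (t.subsec & 0x7FFF);
    }

    /* Elapsed time between two instants, in seconds. */
    static double rtc_elapsed(struct rtc_instant a, struct rtc_instant b)
    {
        return (double)(int64_t)(rtc_to_ticks(b) - rtc_to_ticks(a)) / 32768.0;
    }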

I'll do all of my calendar calculation, if you please, in my processor, during those rare occasions when I need it to interact with a human being.



Comment by farnz, September 26, 2011
Your ideal "Real Time Clock" chip sounds to me like an Elapsed Time Counter chip with battery backup. You might want to look at Maxim's DS1374 and DS1672 devices for an idea of what's out there; the 1672 is a fairly dull 1Hz 32-bit elapsed time counter with battery backup support, while the 1374 also has a countdown timer built-in. None of them have sub-second counting, AFAICS, but they all store time as a 32-bit seconds count, referenced against an epoch of your choice. There are bound to be other chips out there you can consider, but they won't be described as RTCs.
Comment by Thomee, December 26, 2011
Just FYI, GMT is not equivalent to UTC; the UK (whose civil time is often loosely called GMT) observes Summer Time, its equivalent of Daylight Saving Time. I wholeheartedly agree with your assertion that it's best to keep things in UTC and convert to local time when necessary; just be aware that GMT doesn't accomplish that goal.
Comment by jms_nh, December 26, 2011
@Thomee: you have a very good point -- such a detail is very important. I have changed all references from GMT to UTC in the article. Thanks for bringing it to my attention!
Comment by Sbmeirow, February 13, 2012
The benefit of this method is that you don't have to worry about unstable register problems, rollover problems, or not knowing which second belongs to a 1Hz interrupt. #1 - Read the RTC time. #2 - Configure an ALARM in the RTC chip to occur a few seconds from now. #3 - Configure the ALARM in the RTC to cause an interrupt. #4 - Wait for the interrupt. #5 - You are now synced to the ALARM time.
Comment by cypherpunks, April 8, 2012
The Dance, simplified: Assuming you can guarantee to read the clock in less than 30 seconds elapsed: Read the seconds. Read everything else. Read the seconds again. If the seconds have not wrapped (i.e. have not decreased to a value less than the first read), then the higher-order registers are all good. If the seconds *have* wrapped, you know you have at least 60 seconds less the previous read time to read the higher-order registers before any more wraps occur, so read the higher-order registers again and you're done. (Either way, use the second seconds reading as the current time.) A variant of that is to read the high-order registers, then the seconds. If the seconds read as 0 (or less than the maximum possible read time), then a carry might just have occurred, and read the high-order registers again. While atomic read is convenient, it's honestly not a big deal to work around the lack.
Comment by jim fuller, November 22, 2014
Here's another headache... GPS time within the system is not adjusted for leap seconds; it is roughly UTC circa 1980, frozen because the timing feedback of the terrestrial and space-borne components can't handle leap-second updates. The question is, who compensates and who doesn't in their end products? It turns out a lot of systems don't compensate for leap seconds: LORAN, GLOBALSTAR, etc.
Comment by Bruno Saraiva, February 7, 2016
I agree with your concerns. The worst thing is that, inside the IC, they probably have the time_t style information BEFORE they waste energy converting to a whole bunch of separate registers...
At least these chip vendors could allow us to read either the separate fields or simply the Unix time! The only extra consideration is that I would not design my chip for a 32-bit output, but rather for a signed 64-bit value, so that it wouldn't be born with the Year 2038 bug...
