
clock synchronization over IP

Started by Unknown August 24, 2018
On Saturday, September 22, 2018 at 9:35:45 AM UTC+12, Chris wrote:
> On 09/21/18 22:15, George Neuner wrote:
> >
> > Moreover, simply broadcasting the time from a master won't work[*] ...
> > in order to synchronize the clocks you need to know for each node (to
> > a good approximation) the time it takes for the network to deliver the
> > time packet.
> >
> > [*] except in a dedicated setup like a DGPS timing network. The
> > problem with such setups is that all the connections from the master
> > to the slaves need to be exactly the same length (bit delay). That is
> > very hard to achieve in most installations.
> >
> > George
>
> There is cable delay, but that's in the order of a few ns per foot at most,
> where the requirement here is in ms, so that can probably be ignored.
> As for the switch or hub delay, that could be modelled or even subject
> to testing to find suitable products.
>
> Ideally, I would use gps to sync, as it has all the outputs required to
> within a few ns over a wide area, but you need a clear view of the sky
> to make that work.
>
> Still think it would be worth looking at broadcast methods, as it could
> be made low cost at client and server ends, could be run on modest h/w
> without use of an OS, even without a full tcp/ip stack and is probably
> good enough for the application...
>
> Chris
Thanks for all the replies. I really appreciate all the suggestions and the level of expertise here.

The software is running on embedded Linux and every node will have a Moxa switch because we need redundant paths. If NTP "favours" a packet with the shortest round trip time then that should work fairly well.

The micro is an ARM9 LPC3250. I'm curious to know something. When an NTP packet arrives somewhere, is the NTP processing and response all done in the ethernet chip, without the need for the operating system to switch threads to process the message?

We also need to synchronize time between the ARM micro and a K64 micro running MQX that is on the same board. The ARM micro will be the master. Any ideas for the best way to get the synch time from the ARM to the K64?
On 09/21/18 23:16, gp.kiwi@gmail.com wrote:

> > Thanks for all the replies. I really appreciate all the suggestions and the level of expertise here.
> >
> > The software is running on embedded Linux and every node will have a Moxa switch because we need redundant paths.
> > If NTP "favours" a packet with the shortest round trip time then that should work fairly well.
> >
> > The micro is an ARM9 LPC3250. I'm curious to know something. When an NTP packet arrives somewhere, is the NTP
> > processing and response all done in the ethernet chip without the need for the operating system to switch
> > threads to process the message.

Don't want to appear rude, but if you have to ask questions like that, perhaps you need some engineers on the
project who know what they are doing ?. But yes, multiple levels of the stack may have to be traversed to get the
ntp stuff processed, including the ntp client daemon. Also, embedded Linux may or may not be suitable, as it was
never designed for hard real time work of that type and may have significant latency, though there may be versions
optimised in some way.

> > We also need to synchronize time between the ARM micro and a K64 micro running MQX that is on the same board.
> > The ARM micro will be the master. Any ideas for the best way to get the synch time from the ARM to the K64?

Sounds really complex, so what is this project trying to do overall ?...

Chris
On Saturday, September 22, 2018 at 11:11:44 AM UTC+12, Chris wrote:
> On 09/21/18 23:16, gp....@gmail.com wrote:
>
> > [snip]
>
> Don't want to appear rude, but if you have to ask questions like that, perhaps you need some engineers on the
> project who know what they are doing ?.
>
> [snip]
>
> Sounds really complex, so what is this project trying to do overall ?...
>
> Chris
It's not rude. I was wondering if someone would say that. We do have a kind of IP expert whose opinion is that NTP will be fine. I find I can learn a lot by just asking questions so I take the risk of making myself look stupid in the hope of learning something. e.g. someone mentioned that NTP favours a packet with the shortest round trip time. I suspect I could read for an hour or more about NTP and not discover that. Also, nothing I've read gives me any clue about how an NTP packet is processed and how it avoids queueing delays at each end.

The project isn't really especially complex. I can't really say anything more specific about it. The extra micro is to take some of the work-load and real-time stuff off the Linux micro.
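On the queueing-delay question: one common mitigation, at least on Linux, is to have the kernel timestamp each packet on arrival instead of reading the clock in user space after the daemon has been scheduled. A minimal sketch using the SO_TIMESTAMP socket option (the helper name and the bare-bones error handling are illustrative only):

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/uio.h>

/* Sketch: receive one datagram and pull the kernel's arrival timestamp
 * out of the ancillary data, instead of calling the clock afterwards. */
ssize_t recv_with_kernel_timestamp(int sock, void *buf, size_t len,
                                   struct timeval *rx_time)
{
    int on = 1;
    setsockopt(sock, SOL_SOCKET, SO_TIMESTAMP, &on, sizeof(on));

    char ctrl[CMSG_SPACE(sizeof(struct timeval))];
    struct iovec iov = { .iov_base = buf, .iov_len = len };
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = ctrl, .msg_controllen = sizeof(ctrl),
    };

    ssize_t n = recvmsg(sock, &msg, 0);
    if (n < 0)
        return n;

    for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c != NULL;
         c = CMSG_NXTHDR(&msg, c)) {
        if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_TIMESTAMP)
            memcpy(rx_time, CMSG_DATA(c), sizeof(*rx_time));
    }
    return n;
}

This only removes the receive-side scheduling delay from the measurement; the transmit side and the network itself still contribute jitter, which is what the round-trip filtering in NTP is for.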
On Fri, 21 Sep 2018 22:35:40 +0100, Chris <xxx.syseng.yyy@gfsys.co.uk>
wrote:

>On 09/21/18 22:15, George Neuner wrote:
>
>> [snip]
>
>There is cable delay, but that's in the order of a few ns per foot at most,
>where the requirement here is in ms, so that can probably be ignored.
>As for the switch or hub delay, that could be modelled or even subject
>to testing to find suitable products.
But the cable length can be up to 100m for twisted pair, 200m for thin cable, and 500m for thick cable. And up to 4 repeaters (5 cables) are permitted between any pair of stations, provided the total CSMA/CD collision sense time does not exceed the maximum allowed (which varies by transmission speed). Remember also that for twisted pair, switches establish new collision domains for each switched port, but repeating hubs and L2 bridges do not. Even for twisted pair, you *still* can have up to 4 repeaters between the central switch and any station. And yes, there still are uses for non-switched repeater hubs. George
On Saturday, September 22, 2018 at 11:11:44 AM UTC+12, Chris wrote:
> [snip - same post quoted above]
Also, if I knew the throughput of IP messages between any two nodes was an average of, say, at least 200 messages per second, I would be arguing to forget NTP, because if the average round trip time is 5 milliseconds then all we have to do is keep sending a home-grown synch packet until it returns in under 5 milliseconds, then notify the far end to use that packet's synch time. Same with the internal ethernet links. We can also manage the failure of a clock master ourselves instead of configuring multiple NTP clock masters.
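A minimal sketch of that home-grown approach in C: keep exchanging a timestamped packet until one round trip completes inside the budget, then commit that packet's time. The sync_send/sync_recv_echo/sync_commit helpers, the 5 ms budget and the transport are assumptions for illustration; the UDP plumbing and error handling are omitted:

#include <stdint.h>
#include <stdbool.h>
#include <time.h>

#define RTT_BUDGET_NS (5 * 1000 * 1000ULL)        /* 5 ms round-trip budget  */

extern bool sync_send(uint64_t master_ns);        /* hypothetical: send timestamp   */
extern bool sync_recv_echo(uint64_t *echoed_ns);  /* hypothetical: wait for echo    */
extern void sync_commit(uint64_t master_ns);      /* hypothetical: tell far end     */

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

void sync_until_fast_round_trip(void)
{
    for (;;) {
        uint64_t t_send = now_ns();
        if (!sync_send(t_send))
            continue;

        uint64_t echoed;
        if (!sync_recv_echo(&echoed) || echoed != t_send)
            continue;                    /* lost or stale echo, try again   */

        if (now_ns() - t_send < RTT_BUDGET_NS) {
            sync_commit(t_send);         /* this exchange was fast enough   */
            return;
        }
    }
}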
On Tue, 18 Sep 2018 04:23:37 -0700 (PDT), graeme.prentice@gmail.com
wrote:

>On Tuesday, September 18, 2018 at 2:11:06 PM UTC+12, George Neuner wrote:
>> On Mon, 17 Sep 2018 15:04:58 -0700 (PDT), gp....@gmail.com wrote:
>>
>> >On Tuesday, September 18, 2018 at 3:30:50 AM UTC+12, George Neuner wrote:
>> >>
>> >> I think you said you needed +-20ms ... you didn't say how large or
>> >> complex the network, but if the cross section is under 10 milliseconds
>> >> (a pretty large wired net), you should be easily able to achieve your
>> >> desired resolution just with software PTP.
>> >
>> >Thanks. It's for synching audio so +-20ms is ok. There can be up to
>> >64 nodes. We can assume a dedicated network but we may have some
>> >VOIP traffic with up to six people on a phone conference call. We
>> >were intending to use NTP in hierarchical mode because it seems
>> >easier to configure. We need the synch to keep working if there's a
>> >break in the network or a node is offline.
>>
>> Are these nodes some kind of networkable speakers?
>>
>> +-20ms is no good for audio. Ears are a lot more sensitive to
>> synchronization artifacts than are eyes. Even "tin" ears will perceive
>> some artifacts at +-20ms. Depending on content [e.g., people tolerate
>> hiccups in voice better than in music] stereo streams need to be
>> sync'd within +-8ms to be acceptable to most people. Under optimal
>> listening conditions, audiophiles with really good ears can hear sync
>> artifacts even down to +-3ms.
At the speed of sound, that is a location variation of +/- 1 m. Should be easy with NTP on a local network. I have often measured less than 1 ms transaction times with half-duplex protocols on a 100baseT switch. At least the full NTP should have no problems maintaining submillisecond accuracy, possibly also SNTP (Simple NTP) implementations.

You have a high-speed (but slowly drifting) local oscillator, such as the Time Stamp Counter (TSC) in x86 or even the 48/96/192 kHz sample clock. For each NTP transaction, compare the received time against the local clock. Due to variable network delays there will be some jitter in the received times compared to the local clock: some NTP samples are early and some late. When there are about equal numbers of early and late samples, the local clock is at the correct time and frequency. If there is an early or late bias between samples, adjust the local clock so that there are equal numbers of early and late samples. Low-pass filter the clock differences and sooner or later you get very good local clock accuracy (a short sketch of this filtering follows after this post).

For a microphone network, the local clocks should be within a sample clock period, i.e. 20, 10 or 5 us.
>> That said, 64 nodes is not all that many - cable lengths permitting
>> they all could be on one switch. Even with a handful of switches, the
>> cross section of the LAN should be < ~3ms. Unless you have a
>> congested network or serious drift problems with your nodes, NTP can
>> be about as accurate as the network cross section.
>>
>> Software PTP can do better than NTP - if you can afford the protocol
>> traffic it can get down to the accuracy of the timekeeper clock. But
>> if you want reliable sub-millisecond synchronization with minimum
>> traffic then you are going to need PTP hardware.
>>
>> YMMV,
>> George
>
>Yes, it's a network of speakers. It sounds like NTP should be ok then. We're likely to use a hierarchical
>arrangement. Do you know if there's any way to use either PTP or NTP without configuring lots of static IP
>addresses at every node?
OK, so it is a speaker network, so there is no need for sample clock synchronization, since the sound data is self-clocked. Thus millisecond accuracy is more than sufficient.
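A minimal sketch of the early/late filtering idea described above: run each measured offset through a simple low-pass filter and slew the local clock by a fraction of the result. ALPHA and TRIM_GAIN are illustrative values only, not taken from any standard:

/* Sketch only: smooth the per-exchange clock offsets and slew the local
 * clock by a fraction of the filtered value rather than stepping it.    */
#define ALPHA      0.1    /* smoothing factor for the offset average     */
#define TRIM_GAIN  0.5    /* apply only part of the estimate per update  */

static double offset_avg;  /* low-pass filtered offset, seconds          */

/* raw_offset_s = (server time - local time) measured by one exchange    */
double filter_offset(double raw_offset_s)
{
    offset_avg = ALPHA * raw_offset_s + (1.0 - ALPHA) * offset_avg;
    /* the caller would feed this into adjtime()/adjtimex() or a rate
     * trim of the local oscillator                                      */
    return TRIM_GAIN * offset_avg;
}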
On Fri, 21 Sep 2018 15:00:51 +0100, Chris <xxx.syseng.yyy@gfsys.co.uk>
wrote:

>On 08/25/18 01:54, gp.kiwi@gmail.com wrote:
>> We need to synchronize clocks (so we can synchronize audio) on physically separated controllers that are
>> connected via ethernet that may or may not be connected to the internet. The controllers have ARM9 LPC3250 SOM
>> and embedded Linux. We need to synch with no more than 50 ms difference but preferably less. Synchronization
>> needs to continue working between any two controllers regardless of the failure of any other controller.
>>
>> Does anyone have an idea what is the easiest way to do this?
>
>Read all the replies about ntp etc, but perhaps
>look at this another way ?. Rather than each client
>*requesting* time, how about a server / node
>broadcasting time to the net, with each client
>locking on to that ?. Not sure if there are any
>standards for it, but may be worth having
>a look.
The traditional method is to use some IRIG protocol variant. Of course this requires a dedicated network, but standard CAT5 cabling and even 10baseT hubs might be usable. Unfortunately IRIG devices can be quite expensive.
>
>That would get rid of any client / server request
>delays altogether, with sync offsets determined
>entirely by the client software, probably down
>to microseconds...
>
>Chris
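For what it's worth, ntpd already has a broadcast mode along the lines Chris suggests. A minimal ntp.conf sketch (the subnet address and delay value are placeholders, and broadcast clients normally still do an initial client/server exchange to calibrate the path delay):

# on the time master
broadcast 192.168.1.255

# on every other node
broadcastclient
broadcastdelay 0.004    # assumed one-way LAN delay in seconds, used if the
                        # initial calibration exchange is not possible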
On 09/22/18 03:17, George Neuner wrote:

> But the cable length can be up to 100m for twisted pair, 200m for thin
> cable, and 500m for thick cable. And up to 4 repeaters (5 cables) are
> permitted between any pair of stations, provided the total CSMA/CD
> collision sense time does not exceed the maximum allowed (which varies
> by transmission speed).
>
> [snip]
>
> George
Those are the spec limits, but good design matches the solution to the requirements, which should be conservative and not stretching limits. Remember, this is engineering, not academia, and you are always working with the three conflicting goals: performance, cost and reliability - pick any two :-)...

Chris
On 09/22/18 00:35, gp.kiwi wrote:

>
> It's not rude. I was wondering if someone would say that. We do have a kind of IP expert whose opinion is that
> NTP will be fine. I find I can learn a lot by just asking questions so I take the risk of making myself look
> stupid in the hope of learning something. e.g. someone mentioned that NTP favours a packet with the shortest
> round trip time. I suspect I could read for an hour or more about NTP and not discover that. Also, nothing I've
> read gives me any clue about how an NTP packet is processed and how it avoids queueing delays at each end.
>
> The project isn't really especially complex. I can't really say anything more specific about it. The extra micro
> is to take some of the work-load and real-time stuff off the Linux micro.
It was a bit off the cuff, but it seems to me such a project needs a more thorough analysis of requirements in terms of latency. Then, identify all the components in the data path and work out the worst case for them all to see if the spec can be met. If not, add / swap stuff around until it can. ntp may be a hammer, but not everything is a nail, right ?.

I prefer lightweight solutions, keeping things as simple as possible to reduce development time, cost and ongoing maintenance. The latter is significant if the design is right on the edge, or if there hasn't been enough analysis up front.

Anyway, suggest having another look at the FTSP page http://tinyos.stanford.edu/tinyos-wiki/index.php/FTSP which looks like it might be ideal for the ms latency requirements you have. Never used it, but such timing specs must be a very common thing across industry and there must be loads of solutions out there, so suggest casting the net a bit wider...

Chris
On Friday, September 21, 2018 at 10:00:55 AM UTC-4, Chris wrote:
> On 08/25/18 01:54, gp.kiwi@gmail.com wrote:
> > [snip]
>
> Read all the replies about ntp etc, but perhaps look at this another way ?. Rather than each client *requesting*
> time, how about a server / node broadcasting time to the net, with each client locking on to that ?. Not sure if
> there are any standards for it, but may be worth having a look.
>
> That would get rid of any client / server request delays altogether, with sync offsets determined entirely by
> the client software, probably down to microseconds...
ntp is a protocol. The message routing times are compared to find the estimated delay times. Without a round trip there is no way of doing this. Otherwise you have to guess. How do you expect the "client software" to do this?

Rick C.
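For reference, the round-trip comparison Rick describes is the standard NTP on-wire calculation. A minimal sketch in C of the offset and delay estimate from the four timestamps of one exchange (the numbers in main() are purely illustrative):

/* t1 = client transmit, t2 = server receive,
 * t3 = server transmit, t4 = client receive (all in seconds).           */
#include <stdio.h>

static void ntp_estimate(double t1, double t2, double t3, double t4,
                         double *offset, double *delay)
{
    *offset = ((t2 - t1) + (t3 - t4)) / 2.0;  /* client error vs server  */
    *delay  = (t4 - t1) - (t3 - t2);          /* network round-trip time */
}

int main(void)
{
    double offset, delay;
    ntp_estimate(100.000, 100.012, 100.013, 100.005, &offset, &delay);
    printf("offset = %.3f s, delay = %.3f s\n", offset, delay);
    return 0;
}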
