EmbeddedRelated.com
Forums

A timer driver for Cortex-M0+... it rarely doesn't work

Started by pozz April 26, 2017
>>> First you need to make ticks_high volatile. It doesn't make any sense
>>> to me why you'd cast it volatile in one execution context but not the
>>> other. If it's shared between two execution contexts, the compiler
>>> needs to know that.
>
> "volatile" does not mean that a variable is shared between two execution
> contexts - it is neither necessary nor sufficient to make such sharing work.
So true, David. I noticed that a variable shared between two asynchronous execution contexts can benefit from a volatile declaration. I would instinctively declare it volatile because it's the right thing to do in this case. It won't solve the OP's problem though.

JJS
On 27/04/17 19:24, John Speth wrote:
>>>> First you need to make ticks_high volatile. It doesn't make any sense
>>>> to me why you'd cast it volatile in one execution context but not the
>>>> other. If it's shared between two execution contexts, the compiler
>>>> needs to know that.
>>
>> "volatile" does not mean that a variable is shared between two execution
>> contexts - it is neither necessary nor sufficient to make such sharing
>> work.
>
> So true, David. I noticed that a variable shared between two
> asynchronous execution contexts can benefit from a volatile declaration. I
> would instinctively declare it volatile because it's the right thing to
> do in this case. It won't solve the OP's problem though.
>
You say you agree with me - then it looks like you completely /disagree/. "Instinctively declaring it volatile" is the /wrong/ thing to do when you share a variable between two contexts. It is wrong, because it is often not needed, but hinders optimisation. It is wrong, because it is often not enough to make it volatile. And it is wrong, because "instinctively" suggests you make it volatile without thought, rather than properly considering the situation. It is certainly the case that making a shared variable volatile can often be part of the solution - but no more than that.
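As a contrived illustration of "volatile is often not enough" (the names here are invented, it is only a sketch):

#include <stdint.h>

volatile uint32_t event_count;      /* shared between an ISR and the main loop */

void SomeIRQ_Handler(void)
{
    event_count++;                  /* read-modify-write in the ISR */
}

void poll(void)
{
    event_count++;                  /* the same read-modify-write in the main loop */
}

On a Cortex-M0+ each "++" is a separate load, add and store, so an increment can still be lost if the interrupt fires between the load and the store in poll() - the volatile changes nothing there. You need atomic access, or an interrupt lock, as well as (or instead of) the volatile.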
Il 27/04/2017 15:39, David Brown ha scritto:
[...]
> I cannot say for sure that this can happen. But unless I can say for
> sure that it /cannot/ happen, I prefer to assume the worst is possible.
Yes, you're right. One of my colleagues would have said: "put on metal underwear, just to be sure" :-)

[...]
>> Yes, of course. My point here is that you have to **remember** that the
>> timers you are using can roll-over at any time in the future, so they
>> can change from "not expired" to "expired".
>
> True. But perhaps that can be baked into your "Set" and "Expired"
> functions, or otherwise made "unforgettable". The aim is to simplify
> the code that /may/ have rare race conditions into something that cannot
> possibly have such problems - even if it means other code is bigger or
> less efficient.
Yes, I know. Indeed I will abandon my first approach of putting together a hw and a sw counter, joined in the ISR code.
It is a technique I learned from this newsgroup... it's a pity the original author isn't reading (I remember Don Y added some personal ideas to this approach, and he has been reading this ng in recent days).

[...]
>> I liked the idea to use a 64-bits counter for ticks that will never
>> roll-over during the entire lifetime of the device.
>
> If you can make your hardware timer function run every millisecond (or
> whatever accuracy you need), then use this:
>
> extern volatile uint64_t tickCounter_;
>
> static inline uint64_t ticks(void) {
>     uint64_t a = tickCounter_;
>     while (true) {
>         uint64_t b = tickCounter_;
>         if (a == b) return a;
>         a = b;
>     }
> }
>
> void TC0_Handler(void) {
>     if (TC0->COUNT32.INTFLAG.reg & TC_INTFLAG_OVF) {
>         tickCounter_++;
>         TC0->COUNT32.INTFLAG.reg = TC_INTFLAG_OVF;
>     }
> }
>
> If ticks never accesses the timer hardware register, there cannot be a
> problem with synchronisation. There is no need to use the timer
> peripheral's complicated synchronisation and locking mechanism, nor any
> concern about interrupt delays. Re-reading the 64-bit value until you
> have two identical reads is sufficient to ensure that you have a
> consistent value even if a timer interrupt occurs in the middle of the
> 64-bit read.
Yes, it is a solution. There's a small drawback: you have a frequent interrupt (1 ms).

Maybe there's another solution to fix the first approach. The problem was that the hw counter can roll over and the "rolled" value can be read while the sw counter (my _ticks_high) is still at the "old" (not yet incremented) value.
The idea is to configure the timer to stop when it reaches the TOP value 0xFFFFFFFF (one-shot timer). It can be restarted in the ISR, together with incrementing _ticks_high (a sketch of such an ISR is at the end of this post).

There's another drawback, a small one. The hw counter is the clock of the machine. When it reaches the TOP value, it stops for a short time, so the system time appears frozen for that short time. However this happens only every 2^32 / Counter_Freq seconds (in my case, every 1h and 21').

From this story, I learned another important thing. Why did I miss this bug? Because it could appear only every 1h and 21'. It /could/ appear, because it is random, so it might only appear after 1000 times 1h21' (i.e. after 2 months!!!!)

In the future I will avoid such long periods. In my case, I don't really need the full 64 bits. If I use a smaller 16-bits hw counter and the full 32-bits sw counter, I will have a 48-bits system tick (in my case, a periodicity of 10 years).
In this case, a potential bug is related to the shorter period of the 16-bits hw counter, only 75 ms. There is a much greater chance of seeing the problem in my lab during testing rather than in the users' hands.
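Just to make the one-shot idea concrete, the ISR would be something like this (only a sketch, not tested; the register and macro names are from the SAM C21 headers as I remember them, so they must be checked against the datasheet):

/* One-shot idea: TC0/TC1 paired as a 32-bit counter, configured in
 * one-shot mode so it stops when it reaches TOP (0xFFFFFFFF).        */
volatile uint32_t _ticks_high;

void TC0_Handler(void)
{
    if (TC0->COUNT32.INTFLAG.reg & TC_INTFLAG_OVF) {
        _ticks_high++;                                  /* software upper word */
        TC0->COUNT32.INTFLAG.reg = TC_INTFLAG_OVF;      /* clear the flag      */

        /* Restart the stopped counter; the command goes through CTRLB,
         * which is write-synchronised, so wait for it.                  */
        TC0->COUNT32.CTRLBSET.reg = TC_CTRLBSET_CMD_RETRIGGER;
        while (TC0->COUNT32.SYNCBUSY.bit.CTRLB) { }
    }
}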
On 28/04/17 09:24, pozz wrote:
> Il 27/04/2017 15:39, David Brown ha scritto:
> [...]
>> I cannot say for sure that this can happen. But unless I can say for
>> sure that it /cannot/ happen, I prefer to assume the worst is possible.
>
> Yes, you're right. One of my colleagues would have said: "put on metal
> underwear, just to be sure" :-)
>
> [...]
>>> Yes, of course. My point here is that you have to **remember** that the
>>> timers you are using can roll-over at any time in the future, so they
>>> can change from "not expired" to "expired".
>>
>> True. But perhaps that can be baked into your "Set" and "Expired"
>> functions, or otherwise made "unforgettable". The aim is to simplify
>> the code that /may/ have rare race conditions into something that cannot
>> possibly have such problems - even if it means other code is bigger or
>> less efficient.
>
> Yes, I know. Indeed I will abandon my first approach of putting together
> a hw and a sw counter, joined in the ISR code.
> It is a technique I learned from this newsgroup... it's a pity the
> original author isn't reading (I remember Don Y added some personal
> ideas to this approach, and he has been reading this ng in recent days).
>
Don Y is around and reading and contributing to this group. I expect he has read this thread too, and will post if he has something to say.
> [...]
>>> I liked the idea to use a 64-bits counter for ticks that will never
>>> roll-over during the entire lifetime of the device.
>>
>> If you can make your hardware timer function run every millisecond (or
>> whatever accuracy you need), then use this:
>>
>> extern volatile uint64_t tickCounter_;
>>
>> static inline uint64_t ticks(void) {
>>     uint64_t a = tickCounter_;
>>     while (true) {
>>         uint64_t b = tickCounter_;
>>         if (a == b) return a;
>>         a = b;
>>     }
>> }
>>
>> void TC0_Handler(void) {
>>     if (TC0->COUNT32.INTFLAG.reg & TC_INTFLAG_OVF) {
>>         tickCounter_++;
>>         TC0->COUNT32.INTFLAG.reg = TC_INTFLAG_OVF;
>>     }
>> }
>>
>> If ticks never accesses the timer hardware register, there cannot be a
>> problem with synchronisation. There is no need to use the timer
>> peripheral's complicated synchronisation and locking mechanism, nor any
>> concern about interrupt delays. Re-reading the 64-bit value until you
>> have two identical reads is sufficient to ensure that you have a
>> consistent value even if a timer interrupt occurs in the middle of the
>> 64-bit read.
>
> Yes, it is a solution. There's a small drawback: you have a frequent
> interrupt (1 ms).
In my experience, that is not much of a drawback unless you are making a very low power system that spends a long time sleeping. I usually have lots of little tasks hanging off a 1 ms software timer.
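Just to illustrate what I mean by "little tasks" (names invented, only a sketch):

#include <stdint.h>
#include <stdbool.h>

extern uint64_t ticks(void);            /* the 64-bit ms counter from above */

/* Has at least "ms" milliseconds passed since "since"? */
static inline bool elapsed_ms(uint64_t since, uint32_t ms)
{
    return (ticks() - since) >= ms;
}

/* Called round and round the main loop; each task keeps its own timestamp. */
void poll_tasks(void)
{
    static uint64_t last_blink;

    if (elapsed_ms(last_blink, 500)) {
        last_blink += 500;
        /* toggle an LED, kick a state machine, whatever the task does */
    }
}

Each task costs almost nothing when its time has not come, so hanging dozens of them off the one tick is cheap.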
>
> Maybe there's another solution to fix the first approach.
Yes, there is - I posted it earlier. You can check the overflow flag in ticks().
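Roughly like this - a sketch only, where tc_read_count32() stands for whatever synchronised COUNT read you already have, ticks_high is your "_ticks_high", and the SAM register names are from memory, so check them:

#include "sam.h"                        /* device header: TC0, CMSIS intrinsics */

extern volatile uint32_t ticks_high;    /* incremented in the overflow ISR */
extern uint32_t tc_read_count32(void);  /* hypothetical synchronised read  */

uint64_t ticks(void)
{
    uint32_t high, low;

    __disable_irq();                    /* assuming interrupts are otherwise enabled here */
    high = ticks_high;
    low  = tc_read_count32();

    if (TC0->COUNT32.INTFLAG.reg & TC_INTFLAG_OVF) {
        /* The counter has wrapped but the ISR has not run yet, so
         * ticks_high is stale by one.  Re-read the (now small) count
         * and account for the missing increment ourselves.            */
        low = tc_read_count32();
        high++;
    }
    __enable_irq();

    return ((uint64_t)high << 32) | low;
}

The flag itself is left for the ISR to clear, so nothing is lost.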
> The problem was that the hw counter can roll over and the "rolled" value
> can be read while the sw counter (my _ticks_high) is still at the "old"
> (not yet incremented) value.
> The idea is to configure the timer to stop when it reaches the TOP value
> 0xFFFFFFFF (one-shot timer). It can be restarted in the ISR, together
> with incrementing _ticks_high.
You are going to get inaccuracies that build up over time if you do that. Maybe that's fine for your application, of course - in which case it is a perfectly workable idea.
>
> There's another drawback, a small one. The hw counter is the clock
> of the machine. When it reaches the TOP value, it stops for a short
> time, so the system time appears frozen for that short time.
> However this happens only every 2^32 / Counter_Freq seconds (in my case,
> every 1h and 21').
>
> From this story, I learned another important thing. Why did I miss
> this bug? Because it could appear only every 1h and 21'. It /could/
> appear, because it is random, so it might only appear after 1000 times
> 1h21' (i.e. after 2 months!!!!)
Yes. You can use testing to show the presence of bugs - but you cannot use testing to show their absence. You have to think these things through very carefully. Or switch to a chip family like the Kinetis that have multiple 32-bit timers that can be chained together in hardware :-)
>
> In the future I will avoid such long periods. In my case, I don't
> really need the full 64 bits. If I use a smaller 16-bits hw counter and
> the full 32-bits sw counter, I will have a 48-bits system tick (in my
> case, a periodicity of 10 years).
> In this case, a potential bug is related to the shorter period of the
> 16-bits hw counter, only 75 ms. There is a much greater chance of seeing
> the problem in my lab during testing rather than in the users' hands.
>
Il 28/04/2017 11:05, David Brown ha scritto:
[...]
>> From this story, I learned another important thing. Why did I miss
>> this bug? Because it could appear only every 1h and 21'. It /could/
>> appear, because it is random, so it might only appear after 1000 times
>> 1h21' (i.e. after 2 months!!!!)
>
> Yes. You can use testing to show the presence of bugs - but you cannot
> use testing to show their absence. You have to think these things
> through very carefully.
>
> Or switch to a chip family like the Kinetis that have multiple 32-bit
> timers that can be chained together in hardware :-)
Yes, you should try all the chips to select the best for your needs, but no one has enough time to test them all.

I am a fan of the 8-bit AVRs from Atmel (mostly when compared with the PICs from Microchip... it's funny to think that now they are the same vendor), so I naturally started with Cortex-M SAM devices.
Apart from the big monster named ASF (Atmel Software Framework), the libraries written by Atmel folks to help beginners start writing software with minimal effort, SAM devices are good for me.
I initially invested some time to understand the datasheet, abandon ASF and write my own low-level drivers. I'm happy with this approach now.

What I don't like about SAM devices is the register synchronization mess. You always have to write sync waiting loops before/after reading/writing some peripheral registers. Even when you simply want to read the value of a hw counter (as my story explained).

They have Cortex-M devices that work at 5V and this is a big plus. Moreover they are mostly pin-to-pin compatible and they have a good pin multiplexing scheme (you have a UART on almost every pin).
Atmel Studio is slow, but it works well. It sometimes crashes, mainly during debug, but you can live with that. I don't like Eclipse too much.

They have a nice Event System peripheral that connects an output event of a peripheral to an input event of another peripheral. For example, you can start *automatically* an ADC conversion when a timer overflows.
You can also connect the overflow event of a 32-bits timer to a count event of another 32-bits timer to have a 64-bits timer/counter. However I don't know if this event mechanism introduces some delays, so I don't want to use it to solve my original problem.
On 28/04/17 12:18, pozz wrote:
> Il 28/04/2017 11:05, David Brown ha scritto:
> [...]
>>> From this story, I learned another important thing. Why did I miss
>>> this bug? Because it could appear only every 1h and 21'. It /could/
>>> appear, because it is random, so it might only appear after 1000 times
>>> 1h21' (i.e. after 2 months!!!!)
>>
>> Yes. You can use testing to show the presence of bugs - but you cannot
>> use testing to show their absence. You have to think these things
>> through very carefully.
>>
>> Or switch to a chip family like the Kinetis that have multiple 32-bit
>> timers that can be chained together in hardware :-)
>
> Yes, you should try all the chips to select the best for your needs, but
> no one has enough time to test them all.
My comment was not particularly serious - there are a great many reasons for picking a particular microcontroller, and there are /always/ things you dislike about them.
>
> I am a fan of the 8-bit AVRs from Atmel (mostly when compared with the
> PICs from Microchip... it's funny to think that now they are the same
> vendor), so I naturally started with Cortex-M SAM devices.
> Apart from the big monster named ASF (Atmel Software Framework), the
> libraries written by Atmel folks to help beginners start writing
> software with minimal effort, SAM devices are good for me.
> I initially invested some time to understand the datasheet, abandon ASF
> and write my own low-level drivers. I'm happy with this approach now.
<off-topic-rant> Why is it that vendors write such poor quality software for these sorts of frameworks or SDKs? I have seen a great many in my years, and /all/ of them are full of poor code. They are typically bloated, lasagne programming (i.e., it takes 6 layers of functions calling other functions to do something that requires a single assembly instruction), break when you change optimisation settings, have dozens of nested conditional compilation sections to handle devices that went out of production decades ago, spit piles of warnings when "-Wall" is enabled, and so on. </off-topic-rant>
>
> What I don't like about SAM devices is the register synchronization mess.
> You always have to write sync waiting loops before/after
> reading/writing some peripheral registers. Even when you simply want
> to read the value of a hw counter (as my story explained).
>
Sounds messy - this sort of thing can be hidden in the hardware even when peripheral clocks are asynchronous.
> They have Cortex-M devices that work at 5V and this is a big plus.
So do the Kinetis family.
> Moreover they are mostly pin-to-pin compatible and they have a good pin
> multiplexing scheme (you have a UART on almost every pin).
> Atmel Studio is slow, but it works well. It sometimes crashes, mainly
> during debug, but you can live with that. I don't like Eclipse too much.
These things are a matter of taste, which is often a matter of what you are used to. I don't like MSVS at all, and therefore dislike Atmel Studio. (I wonder if they will migrate to a Netbeans IDE, which is what Microchip uses?). Of course, I use Linux for most of my development work, which makes me biased against Windows-only tools!
>
> They have a nice Event System peripheral that connects an output event
> of a peripheral to an input event of another peripheral. For example,
> you can start *automatically* an ADC conversion when a timer overflows.
>
Kinetis devices have some of that, but it is not as advanced as the event system in the newer AVRs, if the Atmel ARM devices are similar.
> You can also connect the overflow event of a 32-bits timer to a count
> event of another 32-bits timer to have a 64-bits timer/counter. However
> I don't know if this event mechanism introduces some delays, so I don't
> want to use it to solve my original problem.
In this particular case, the Kinetis has 4 programmable interrupt timers at 32-bits each, with configurable top counts. So on a 120 MHz core I set the first to count to 120 and trigger the second on overflow, with the second counting to 0xffffffff and triggering the third on overflow. This means I have a nice regular microsecond counter at 32-bit or 64-bit as needed, with easy synchronisation.
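For what it is worth, the setup is only a few lines. A sketch with register names as they appear in the Kinetis CMSIS headers (and note the PIT actually runs from the bus clock, so the first divider must be adjusted to whatever that is on your part):

#include "MK64F12.h"    /* or whichever Kinetis device header you use */

void pit_chain_init(void)
{
    SIM->SCGC6 |= SIM_SCGC6_PIT_MASK;       /* gate the clock to the PIT      */
    PIT->MCR = 0;                           /* enable the PIT module          */

    PIT->CHANNEL[0].LDVAL = 120 - 1;        /* prescale down to 1 us per tick */
    PIT->CHANNEL[1].LDVAL = 0xFFFFFFFF;     /* low 32 bits of the us counter  */
    PIT->CHANNEL[2].LDVAL = 0xFFFFFFFF;     /* high 32 bits of the us counter */

    /* Channel 2 ticks when channel 1 expires, channel 1 when channel 0
     * expires - enable the chained channels before the free-running one. */
    PIT->CHANNEL[2].TCTRL = PIT_TCTRL_CHN_MASK | PIT_TCTRL_TEN_MASK;
    PIT->CHANNEL[1].TCTRL = PIT_TCTRL_CHN_MASK | PIT_TCTRL_TEN_MASK;
    PIT->CHANNEL[0].TCTRL = PIT_TCTRL_TEN_MASK;
}

The channels count down, so the elapsed microseconds are ~PIT->CHANNEL[1].CVAL for the low word, and the usual read-high/read-low/read-high dance applies if you combine the two words into one 64-bit value.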
On 2017-04-27 pozz wrote in comp.arch.embedded:
>
> Atmel SAM TCx peripherals are 16-bits counters/timers, but they can be
> chained in a couple to have a 32-bits counter/timer. I already coupled
> TC0 with TC1 to have a 32-bits hw counter. I can't chain TC0/TC1 with
> TC2/TC3 to have a hardware 64-bits counter/timer.
Just out of curiosity, I had a look at the SAM C21 Family datasheet. It's been a long time since I used Atmel ARM controllers (SAM7).

In the description of the TC, I see no fixed 16-bit width and coupling of timers. Only that any TC channel can be configured in 8, 16 or 32 bit mode. Am I looking at the wrong datasheet or section?

If the timers are indeed 8, 16 or 32 bit configurable, that could be a way to speed up your testing. Just set your timer to 8 or 16 bit (and add some code to set the other bits valid) and speed up overflows by a factor of 2^24 or 2^16.

--
Stef    (remove caps, dashes and .invalid from e-mail address to reply by mail)

Beer -- it's not just for breakfast anymore.
Il 01/05/2017 10:09, Stef ha scritto:
> On 2017-04-27 pozz wrote in comp.arch.embedded:
>>
>> Atmel SAM TCx peripherals are 16-bits counters/timers, but they can be
>> chained in a couple to have a 32-bits counter/timer. I already coupled
>> TC0 with TC1 to have a 32-bits hw counter. I can't chain TC0/TC1 with
>> TC2/TC3 to have a hardware 64-bits counter/timer.
>
> Just out of curiosity, I had a look at the SAM C21 Family datasheet. It's
> been a long time since I used Atmel ARM controllers (SAM7).
>
> In the description of the TC, I see no fixed 16-bit width and coupling of
> timers. Only that any TC channel can be configured in 8, 16 or 32 bit mode.
> Am I looking at the wrong datasheet or section?
When you use a TC in 8- or 16-bit mode, you are using a single TC peripheral. When you configure TC0 in 32-bit mode, you are automatically using TC1 too, which works in "slave" mode:

  The counter mode is selected by the Mode bit group in the Control A
  register (CTRLA.MODE). By default, the counter is enabled in the 16-bit
  counter resolution. Three counter resolutions are available:
  [...]
  - COUNT32: This mode is achieved by pairing two 16-bit TC peripherals.
    TC0 is paired with TC1, and TC2 is paired with TC3. TC4 does not
    support 32-bit resolution.
  [...]

IMHO this means the TC is a 16-bits counter.
> If the timers are indeed 8, 16 or 32 bit configurable, that could be a way
> to speed up your testing. Just set your timer to 8 or 16 bit (and add some
> code to set the other bits valid) and speed up overflows by a factor of
> 2^24 or 2^16.
Oh yes, if you read one of my previous posts, I did exactly this to speed up the appearance of the bug. I discovered it was due to the lack of a sync wait loop after writing the read command to the CTRLB register.
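For the record, the read sequence that works for me now looks roughly like this (macro names as in the SAM C21 vendor headers, written from memory, so the exact SYNCBUSY bits to poll should be double-checked against the datasheet):

#include "sam.h"        /* device header providing TC0 and the register macros */

static uint32_t tc_read_count32(void)
{
    /* Ask the TC to synchronise COUNT into the read-back register */
    TC0->COUNT32.CTRLBSET.reg = TC_CTRLBSET_CMD_READSYNC;

    /* This is the wait loop I had forgotten: the command goes through
     * CTRLB, which is itself write-synchronised.                       */
    while (TC0->COUNT32.SYNCBUSY.bit.CTRLB) { }

    /* ...and wait until COUNT has actually been captured */
    while (TC0->COUNT32.SYNCBUSY.bit.COUNT) { }

    return TC0->COUNT32.COUNT.reg;
}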
Il 28/04/2017 13:10, David Brown ha scritto:
 >> [...]
>> Moreover they are mostly pin-to-pin compatible and they have a good pin
>> multiplexing scheme (you have a UART on almost every pin).
>> Atmel Studio is slow, but it works well. It sometimes crashes, mainly
>> during debug, but you can live with that. I don't like Eclipse too much.
>
> These things are a matter of taste, which is often a matter of what you
> are used to. I don't like MSVS at all, and therefore dislike Atmel
> Studio. (I wonder if they will migrate to a Netbeans IDE, which is what
> Microchip uses?). Of course, I use Linux for most of my development
> work, which makes me biased against Windows-only tools!
Don't think I'm a M$ fan. However I think it's much faster to develop under Windows, because you have all the tools already configured and working under M$.

Atmel Studio is not that bad as a self-contained IDE, except that it is very slow. Of course, if you usually create your own Makefile to manage your build process, it's another story.

In the past I tried to create/use an ARM toolchain with a custom Makefile, using whatever text editor to change the source code. Of course it worked... but debugging was a problem.

So I want to ask you a question: how do you debug your projects under Linux? Maybe the Kinetis IDE is Eclipse-based (I don't know), so I think it works well under Linux, from coding to debugging. Are the ARM debuggers/probes (J-Link, manufacturer-specific devices) good under Linux?

In the past I tried to configure the Code::Blocks IDE, which is very nice and fast for me, for Atmel ARM devices. It runs under Windows and Linux, because it is wxWidgets-based. Unfortunately the problem is always the same: debugging.
I can't imagine developing an application without the ability to break the program, watch variable values, run the next instruction and so on.

Moreover, manufacturer IDEs usually give other functionalities. For example, Atmel Studio gives the possibility to see core registers, peripheral registers (well organized), Flash and RAM content and so on. I think you lose all that info with a "neutral" IDE during debugging.
On 02/05/17 09:14, pozz wrote:
> Il 28/04/2017 13:10, David Brown ha scritto:
>>> [...]
>>> Moreover they are mostly pin-to-pin compatible and they have a good pin
>>> multiplexing scheme (you have a UART on almost every pin).
>>> Atmel Studio is slow, but it works well. It sometimes crashes, mainly
>>> during debug, but you can live with that. I don't like Eclipse too much.
>>
>> These things are a matter of taste, which is often a matter of what you
>> are used to. I don't like MSVS at all, and therefore dislike Atmel
>> Studio. (I wonder if they will migrate to a Netbeans IDE, which is what
>> Microchip uses?). Of course, I use Linux for most of my development
>> work, which makes me biased against Windows-only tools!
>
> Don't think I'm a M$ fan. However I think it's much faster to develop
> under Windows, because you have all the tools already configured and
> working under M$.
>
I'd say it is faster to develop under Linux, because you have all the tools ready and working - and many of them work much faster on Linux than Windows. But of course, that depends on the tools you want to use :-)
> Atmel Studio is not that bad as a self-contained IDE, except that it is
> very slow. Of course, if you usually create your own Makefile to manage
> your build process, it's another story.
>
> In the past I tried to create/use an ARM toolchain with a custom
> Makefile, using whatever text editor to change the source code. Of
> course it worked... but debugging was a problem.
>
> So I want to ask you a question: how do you debug your projects under
> Linux? Maybe the Kinetis IDE is Eclipse-based (I don't know), so I think
> it works well under Linux, from coding to debugging. Are the ARM
> debuggers/probes (J-Link, manufacturer-specific devices) good under Linux?
>
For serious projects, I /always/ use my own Makefiles. But I often use the manufacturer's IDE, precisely because in many cases it makes debugging easier.

So for programming on the Kinetis, I use the "Kinetis Design Studio" IDE. It is a perfectly reasonable Eclipse IDE (assuming you are happy with Eclipse), with the plugins and stuff for debugging. I use my own Makefile, but run it from within the IDE. I use a slightly newer version of gcc (from GNU Arm Embedded) than the version that comes with KDS. But I do my debugging directly from within the IDE.

I have the same setup on Windows /and/ Linux, and can use either. That means I need some msys2/mingw-64 stuff installed on Windows to make it look like a real OS with standard utilities (make, sed, cp, mv, etc.), but that's a one-time job when you configure a new Windows system. The build process is significantly faster on Linux than Windows on comparable hardware, but the key point for me is that it all works and is system independent.

For debugging, there are basically two ways to interact with the hardware. You can use OpenOCD, which is open source, or you can use proprietary devices and software. P&E Micro, for example, is usually handled by proprietary software - but tools like KDS support it on Linux as well as Windows. Segger J-Link works fine on Windows and Linux. And OpenOCD works fine on Windows, but even better on Linux, and supports a vast range of hardware devices from high-end debuggers with Ethernet and trace, to home-made devices with an FTDI chip and a couple of passive components.

The only Atmel devices I have used are AVRs, and I haven't had much use for them for a long time. It is even longer since I have used a debugger with them. But I have happily used Eclipse and an Atmel JTAG ICE debugger on Linux - though you need to do a little reading on the net to see how to set it up.
> In the past I tried to configure the Code::Blocks IDE, which is very nice
> and fast for me, for Atmel ARM devices. It runs under Windows and Linux,
> because it is wxWidgets-based. Unfortunately the problem is always the
> same: debugging.
> I can't imagine developing an application without the ability to break
> the program, watch variable values, run the next instruction and so on.
>
I agree - usually debugging is handy, especially early on in a project. Later on it can get impractical except perhaps for post-mortem debugging. You don't really want your motor driver to keep stopping at breakpoints...
>
> Moreover, manufacturer IDEs usually give other functionalities. For
> example, Atmel Studio gives the possibility to see core registers,
> peripheral registers (well organized), Flash and RAM content and so
> on. I think you lose all that info with a "neutral" IDE during debugging.
>
Some manufacturer IDEs give a lot of useful extra features, others are less useful. And sometimes you can get much of the effect from a generic IDE. If the device headers for a chip define an array of structs "TIMER[4]" with good struct definitions for the timers, then you can just add a generic "watch" expression for TIMER[0] and expand it, to view the contents of the TIMER[0] registers. That may screw up registers that have volatile effects on read, but it usually works quite well - often as good as the manufacturers' own add-ons.

For the ARM, however, there is a large project:

<http://gnuarmeclipse.github.io/>

Most ARM microcontroller manufacturers, with Atmel being the one notable exception, make their IDEs from Eclipse with the extensions from this project - possibly with their own small modifications and additional extensions. You can put together a neutral IDE with off-the-shelf Eclipse and these extensions that gives you pretty much everything you get from a manufacturer's IDE, except for their Wizards, Project Generators, Chip Configuration Tools, etc.