Reply by Bernd Linsel February 13, 2020
Rick C wrote:
> On Wednesday, February 12, 2020 at 4:44:13 PM UTC-5, robert...@yahoo.com wrote:
>
> Ok, this is more clear now. Wikipedia explains LL/SC pretty well. This is actually for multiple CPUs as much as multitasking. While you can just disable interrupts (assuming you can live with the interrupt latency issues) to make this work with a single CPU, if you are sharing the data structure with other CPUs the bus requires locking while these multiple transactions are happening. I assume the CPU has a signal to indicate a locked operation is happening to prevent other accesses from getting in and mucking up the works.
>
> Is there a way to emulate this locking using semaphores? Someone I know is a big fan of Propeller CPUs which share memory and I don't know if they have such an instruction. They share memory by interleaving access.
>
> Custom stack processor, related to the Forth VM. When designing FPGAs I want a CPU with deterministic timing, so 1 instruction = 1 clock cycle works well. Interrupt latency is zero or one depending on how you count it. The next cycle after an unmasked interrupt is asserted fetches the first instruction of the IRQ routine.
>
> The CPU is not pipelined but the registers are aligned through the architecture to make it decode-execute/fetch rather than fetch-decode-execute. The fetch only depends on flags and instruction decode so it happens in parallel with the execute as far as timing is concerned. Someone insisted this was a pipelined design because of these parallel parts.
>
> It's nothing special, YAMC (Yet Another MISC CPU). I've never spent the time to optimize the design for speed. Instead I did some work trying to hybridize the stack design with register-like access to the stack to minimize stack juggling. Once that happened, the number of instructions for the test case I was using (an IRQ for DDS calculations) dropped by either a third or half, I forget which. The big stumbling block for me is coming up with software to help write code for it. lol
One should mention that, at least in the ARM and MIPS architectures, LL and SC are not implemented with a global lock signal, but instead using cache snooping (for uni- and multiprocessing systems). LL just performs a simple load and additionally locks the (L1) data cache line of that address (so that it cannot be replaced until SC or another LL). SC checks whether data in that cache line has been modified since the last LL; if so, it fails, otherwise it succeeds and writes the datum (whether write-through or write-back depends on the CPU cache configuration and the virtual address).

An SC instruction targeting an address that hasn't been an LL source before always fails and invalidates all LL atomic flags, so that their corresponding SCs will fail. Thus, an SC to a dummy address can be exploited to implement synchronization barriers (in addition to cache sync instructions). The possible number of concurrent LL/SC pairs depends on the CPU model; most support only 1 pending SC after an LL, some allow up to 8 parallel LL/SC pairs (from different cache lines).

Finally, an example: emulated CAS on a MIPS32 CPU, which works independently of the number of processors in the system:

    // compare_and_swap
    // input:   a0 = unsigned *p, a1 = unsigned old, a2 = unsigned new
    // returns: v0 = 1 (success) | 0 (failure), v1 = old value from *p

            .set    nomips16, nomicromips, noreorder, nomacro
    compare_and_swap:
    1:      ll      v1, 0(a0)       // load linked from a0+0 into v1
            bne     v1, a1, 9f      // if v1 != a1 (old),
                                    //   branch forward to label 9
            move    v0, zero        // branch delay slot: load result 0,
                                    //   executed "while" taking the branch
            move    v0, a2          // load a copy of a2 (new) into v0
            sc      v0, 0(a0)       // store conditionally into a0+0
            beq     v0, zero, 1b    // if unsuccessful (v0 == 0),
                                    //   retry at label 1
            nop                     // branch delay slot: nothing to do
    9:      jr      ra              // else (v0 == 1) return v0
            nop                     // jump delay slot: nothing to do

Note: This example could be further optimized for speed at the cost of program space, by reordering the opcodes so that the preferred case (a successful CAS) executes linearly and branch-free, with forward branches likely not taken and backward branches likely taken (usable branch prediction was only introduced with the MIPS R8). L1 cache latency is usually 1 clock; when executing linear code it is hidden by prefetch and the pipeline. An L1 cache line is typically 64 bytes (16 words) wide, i.e. if the CPU supports parallel LL/SCs, they must be at least 16 words apart, otherwise the SC to the address of the first LL will always fail.

Regards,
Bernd
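For comparison, a minimal C11 sketch of the same operation (not Bernd's code; assumes <stdatomic.h> support): the "weak" compare-exchange is allowed to fail spuriously, just like an SC whose reservation has been lost, so it always sits in a retry loop, and on MIPS32 or ARM the compiler expands it to an LL/SC loop much like the assembly above.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: CAS built on the compiler's LL/SC-style weak compare-exchange. */
    bool compare_and_swap(_Atomic uint32_t *p, uint32_t old, uint32_t new_val)
    {
        uint32_t expected = old;
        while (!atomic_compare_exchange_weak(p, &expected, new_val)) {
            if (expected != old)
                return false;   /* real mismatch: *p no longer holds 'old' */
            expected = old;     /* spurious failure (lost reservation): retry */
        }
        return true;            /* *p held 'old' and now holds 'new_val' */
    }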
Reply by George Neuner February 13, 2020
On Sun, 9 Feb 2020 12:17:42 +0100, David Brown
<david.brown@hesbynett.no> wrote:

>On 09/02/2020 07:35, upsidedown@downunder.com wrote:
>> On Sat, 8 Feb 2020 19:57:48 +0100, David Brown <david.brown@hesbynett.no> wrote:
>>
>>>> Never used NT, but I used W2k and it was great! W2k was widely pirated so MS started a phone home type of licensing with XP which was initially not well received, but over time became accepted. Now people reminisce about the halcyon days of XP.
>>>
>>> Did you not use NT 4.0 ? It was quite solid. W2K was also good, but XP took a few service packs before it became reliable enough for serious use.
>>
>> NT 4.0 solid ??
>>
>> NT4 moved graphical functions to kernel mode to speed up window updates.
>
>Yes. And that meant bugs in the graphics drivers could kill the whole system, unlike in NT 3.x. And bugs in the graphics drivers were certainly not unknown. However, with a little care it could run reliably for long times. I don't remember ever having a software or OS related crash or halt on our little NT 4 server.
Ditto. I spent ~7 years in a small company as acting network admin in addition to my regular development work. I watched over a pair of NT4 servers, a dozen NT4 workstations, and a handful of Win98 machines. The NT servers never gave any problems. They ran 24/7 and were rebooted only to replace a disk or install new software. We didn't install all the service packs, so sometimes the servers would run for more than a year without a reboot. The workstations only rarely had problems despite being exposed to software that was being developed on them. The machines ran 24/7 - backups done after hours and on weekends. I can speak only to my own experience as a developer: my workstation took a fair amount of abuse from crashing and otherwise misbehaving software, but generally it was rock solid and would run for months without something happening that required a reboot to fix.
>> In general, each NT4 service pack introduced new bugs and soon the next SP was released to correct the bugs introduced by the previous SP. Thus every other SP was actually usable.
>>
>> Even NT5 beta was more stable than NT4 with most recent SP. NT5 beta was renamed Windows 2000 before final release.
>
>I certainly liked W2K, and found it quite reliable. But I still remember NT 4.0 as good too.
In my experience, W2K was a bit flaky until SP2. After that, it generally was stable.

Poster "upsidedown" (sorry, don't know your name) was right though about the NT4 service packs. In my own experience:

- the initial OS release was a bit flaky
- SP1 was stable (at least for English speakers)
- SP2 was really flaky
- SP3 was stable
- SP4 was stable
- SP5 was a bit flaky
- SP6 was stable

I have been using Windows since 3.0 (which still ran DOS underneath). I was quite happy with the reliability of NT4. I have had far more problems with "more modern" versions: XP, Win7, and now Win10.

YMMV,
George
Reply by David Brown February 13, 2020
On 13/02/2020 01:44, Rick C wrote:

> Ok, this is more clear now. Wikipedia explains LL/SC pretty well. > This is actually for multiple CPUs as much as multitasking. While > you can just disable interrupts (assuming you can live with the > interrupt latency issues) to make this work with a single CPU, if you > are sharing the data structure with other CPUs the bus requires > locking while these multiple transactions are happening. I assume > the CPU has a signal to indicate a locked operation is happening to > prevent other accesses from getting in and mucking up the works. >
Yes, cpus with CAS and other locked instructions (like atomic read-modify-write sequences) need bus lock signals. These are quite easy to work with from the software viewpoint, and a real PITA to implement efficiently in hardware in a multi-core system with caches. Thus you get them in architectures like x86 that are designed to be easy to program, but not in RISC systems that are designed for fast and efficient implementations. CAS can be useful even on a single cpu, if you have multiple masters (DMA, for example). And CAS or LL/SC can be useful on a single cpu if you have pre-emptive multi-tasking and don't want to (or can't) disable interrupts. On a small processor like yours, disabling interrupts around critical regions is almost certainly the easiest and most efficient solution. (If I were making a cpu, I'd like to have a "temporary interrupt disable" counter as well as a global interrupt disable flag. I'd have an instruction to set this counter to perhaps 3 to 7 counts. That's enough time to make a CAS, or an atomic read-modify-write.)
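For a single-core MCU, the interrupt-disable approach might look like the rough sketch below. It assumes CMSIS-style __get_PRIMASK()/__disable_irq()/__set_PRIMASK() intrinsics (the usual Cortex-M names, supplied by the device header; other parts have equivalents), and it only protects against ISRs and the scheduler, not against DMA or another bus master.

    #include <stdint.h>

    extern volatile uint64_t ticks64;   /* 64-bit software tick, as discussed in the thread */

    /* Sketch: read a multi-word variable atomically on a single core by
     * briefly masking interrupts. The previous mask state is saved and
     * restored rather than blindly re-enabling interrupts. */
    static inline uint64_t get_tick64(void)
    {
        uint32_t primask = __get_PRIMASK();   /* remember current mask state */
        __disable_irq();                      /* enter critical section */
        uint64_t t = ticks64;                 /* two 32-bit reads, now coherent */
        __set_PRIMASK(primask);               /* restore previous state */
        return t;
    }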
> Is there a way to emulate this locking using semaphores? Someone I > know is a big fan of Propeller CPUs which share memory and I don't > know if they have such an instruction. They share memory by > interleaving access. >
There is a whole field of possibilities with locking, synchronisation mechanisms, and lock-free algorithms. Generally speaking, once you have one synchronisation primitive, you can emulate any others using it - but the efficiency can vary enormously.
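To make that concrete, here is a rough sketch (not from the thread) of one direction of the equivalence: a busy-waiting lock built from nothing but a cas() with the shape David posts later in the thread. The reverse direction, faking CAS when all you have is a lock or hardware semaphore (the Propeller case), is what Robert Wessel suggests in the next reply below.

    #include <stdbool.h>
    #include <stdint.h>

    bool cas(uint32_t *p, uint32_t old, uint32_t new_val);  /* CAS primitive, defined elsewhere */

    typedef volatile uint32_t spinlock_t;   /* 0 = free, 1 = held */

    /* Sketch: a minimal spinlock built purely from CAS. A real RTOS lock
     * would block or yield instead of spinning. */
    static inline void spin_lock(spinlock_t *l)
    {
        while (!cas((uint32_t *)l, 0u, 1u)) {
            /* busy-wait until we succeed in changing the lock word 0 -> 1 */
        }
    }

    static inline void spin_unlock(spinlock_t *l)
    {
        *l = 0u;   /* on a multi-core part this store needs a release barrier */
    }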
Reply by Robert Wessel February 12, 2020
On Wed, 12 Feb 2020 16:44:12 -0800 (PST), Rick C
<gnuarm.deletethisbit@gmail.com> wrote:

>On Wednesday, February 12, 2020 at 4:44:13 PM UTC-5, robert...@yahoo.com wrote:
>> No, the idea is to not update the word in memory unless it hasn't been >> changed. The classic example is using CAS to add an item to a linked >> list. You read the head pointer (that has to happen atomically, but >> on most CPUs that just requires that it be aligned), construct the new >> first element (most crucially the next pointer), and then if the head >> pointer is unchanged, you can replace it with a pointer to the new >> first item. >> >> If the values are not equal, you don't want to update the head pointer >> or you'll trash the linked list. In that case you retry the insertion >> operation using the new head pointer. >> >> CAS is intended to be safe to use to make that update, as it's atomic >> - the read of the value in memory, the compare to the old value, and >> the conditional update form an atomic block, and can't be interrupted >> or messed with by other CPUs in the system. >> >> CAS is pretty easy to simulate with LL/SC. In some cases you'd be >> better off adjusting the algorithm to better use LL/SC. In this case >> it depends on how you're accessing the low word of the timer. If you >> have only a single threaded of execution, you can fake CAS by >> disabling interrupts. > >Ok, this is more clear now. Wikipedia explains LL/SC pretty well. This is actually for multiple CPUs as much as multitasking. While you can just disable interrupts (assuming you can live with the interrupt latency issues) to make this work with a single CPU, if you are sharing the data structure with other CPUs the bus requires locking while these multiple transactions are happening. I assume the CPU has a signal to indicate a locked operation is happening to prevent other accesses from getting in and mucking up the works. > >Is there a way to emulate this locking using semaphores? Someone I know is a big fan of Propeller CPUs which share memory and I don't know if they have such an instruction. They share memory by interleaving access.
In the algorithm I suggested, you could just put a mutex around the sequence that emulates the CAS. That's safe, since the extension word is never updated from inside an interrupt handler (unless you actually intend for that to be possible, such as if you were reading the extended time value from inside an ISR). Even if that's slow, it's on a leg of the code that will happen only rarely.

You still need the atomic read of the extension word (although that's typically a non-issue, especially on a single hardware thread system).
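A minimal sketch of that mutex-wrapped emulation (mutex_take()/mutex_give() are placeholder RTOS calls, matching the names in pozz's code quoted further down the thread); as stated above, it is only safe as long as the word is never also written from an ISR, because an ISR cannot take the mutex.

    #include <stdbool.h>
    #include <stdint.h>

    void mutex_take(void);   /* placeholder RTOS primitives */
    void mutex_give(void);

    /* Emulated compare-and-swap: correct between tasks, but NOT against an
     * interrupt handler that writes *p. */
    bool cas_emulated(uint32_t *p, uint32_t old, uint32_t new_val)
    {
        bool ok = false;
        mutex_take();
        if (*p == old) {
            *p = new_val;
            ok = true;
        }
        mutex_give();
        return ok;
    }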
Reply by Rick C February 12, 2020
On Wednesday, February 12, 2020 at 4:44:13 PM UTC-5, robert...@yahoo.com wrote:
> On Wed, 12 Feb 2020 13:14:58 -0800 (PST), Rick C > <gnuarm.deletethisbit@gmail.com> wrote: > > >On Wednesday, February 12, 2020 at 12:49:20 PM UTC-5, David Brown wrote: > >> On 12/02/2020 16:26, Rick C wrote: > >> > On Monday, February 10, 2020 at 7:40:10 PM UTC-5, robert...@yahoo.com wrote: > >> >> > >> >> As I mentioned elsewhere in the thread, if you have an atomic 32-bit > >> >> read, and a 32-bit CAS, you can do this without locks pretty simply. > >> > > >> > I did a search but it didn't turn up. What's a CAS??? > >> > > >> > >> Compare-and-swap. It is a common instruction for use in multi-threading > >> systems as a building block for atomic accesses and lock-free algorithms > >> (and for implementing locks): > >> > >> <https://en.wikipedia.org/wiki/Compare-and-swap> > >> > >> > >> It corresponds roughly to the C code, executed atomically : > >> > >> bool cas(uint32_t * p, uint32_t old, uint32_t new) { > >> if (*p == old) { > >> *p = new; > >> return true; > >> } else { > >> return false; > >> } > >> } > >> > >> It is useful, but has its limits (the wikipedia page describes some, if > >> you are interested). In cases like this, it could be useful. > >> > >> However, the OP is using an ARM - and like most (but not all) RISC cpus, > >> ARM does not have a CAS instruction. Instead, it has a load-link and > >> store-conditional pair, which is more powerful and flexible than CAS but > >> a little harder to use. > > > >Someone was on my case about a self designed CPU not having some instruction that is essential for multitasking. Would this be the instruction? I'm not sure I understand. When you say *p == old, where is old kept? Is there really a stored value of old or is this a way of saying *p /= new??? In that case the code could be... > > > >bool cas(uint32_t * p, uint32_t old, uint32_t new) { > > if (*p == old) { > > *p = new; > > return true; > > } else { > > *p = new; > > return false; > > } > >} > > > >I write this because in my basic architecture memory is read/written on opposite phases of the CPU clock and all instructions are one clock cycle. The write is predetermined in the first phase of the clock, so the CPU can't have a RMW cycle. It can have a W/R cycle where the read data is the old data before the write. As long at the write is always done it can do the above in a single, non interruptible cycle... not that I'm contemplating performing multitasking. The code is more complex than warranted for a 600 LUT CPU. Just add another CPU. lol > > > >Giving what you wrote more thought it seems pretty clear it has to be implemented the way you have it written. > > > >I should it look up and learn something, lol. > > > No, the idea is to not update the word in memory unless it hasn't been > changed. The classic example is using CAS to add an item to a linked > list. You read the head pointer (that has to happen atomically, but > on most CPUs that just requires that it be aligned), construct the new > first element (most crucially the next pointer), and then if the head > pointer is unchanged, you can replace it with a pointer to the new > first item. > > If the values are not equal, you don't want to update the head pointer > or you'll trash the linked list. In that case you retry the insertion > operation using the new head pointer. 
> > CAS is intended to be safe to use to make that update, as it's atomic > - the read of the value in memory, the compare to the old value, and > the conditional update form an atomic block, and can't be interrupted > or messed with by other CPUs in the system. > > CAS is pretty easy to simulate with LL/SC. In some cases you'd be > better off adjusting the algorithm to better use LL/SC. In this case > it depends on how you're accessing the low word of the timer. If you > have only a single threaded of execution, you can fake CAS by > disabling interrupts.
Ok, this is more clear now. Wikipedia explains LL/SC pretty well. This is actually for multiple CPUs as much as multitasking. While you can just disable interrupts (assuming you can live with the interrupt latency issues) to make this work with a single CPU, if you are sharing the data structure with other CPUs the bus requires locking while these multiple transactions are happening. I assume the CPU has a signal to indicate a locked operation is happening to prevent other accesses from getting in and mucking up the works.

Is there a way to emulate this locking using semaphores? Someone I know is a big fan of Propeller CPUs which share memory and I don't know if they have such an instruction. They share memory by interleaving access.
> What ISA is this for?
Custom stack processor, related to the Forth VM. When designing FPGAs I want a CPU with deterministic timing, so 1 instruction = 1 clock cycle works well. Interrupt latency is zero or one depending on how you count it. The next cycle after an unmasked interrupt is asserted fetches the first instruction of the IRQ routine.

The CPU is not pipelined but the registers are aligned through the architecture to make it decode-execute/fetch rather than fetch-decode-execute. The fetch only depends on flags and instruction decode so it happens in parallel with the execute as far as timing is concerned. Someone insisted this was a pipelined design because of these parallel parts.

It's nothing special, YAMC (Yet Another MISC CPU). I've never spent the time to optimize the design for speed. Instead I did some work trying to hybridize the stack design with register-like access to the stack to minimize stack juggling. Once that happened, the number of instructions for the test case I was using (an IRQ for DDS calculations) dropped by either a third or half, I forget which. The big stumbling block for me is coming up with software to help write code for it. lol

--
Rick C.
-++ Get 1,000 miles of free Supercharging
-++ Tesla referral code - https://ts.la/richard11209
Reply by Robert Wessel February 12, 2020
On Wed, 12 Feb 2020 13:14:58 -0800 (PST), Rick C
<gnuarm.deletethisbit@gmail.com> wrote:

>On Wednesday, February 12, 2020 at 12:49:20 PM UTC-5, David Brown wrote: >> On 12/02/2020 16:26, Rick C wrote: >> > On Monday, February 10, 2020 at 7:40:10 PM UTC-5, robert...@yahoo.com wrote: >> >> >> >> As I mentioned elsewhere in the thread, if you have an atomic 32-bit >> >> read, and a 32-bit CAS, you can do this without locks pretty simply. >> > >> > I did a search but it didn't turn up. What's a CAS??? >> > >> >> Compare-and-swap. It is a common instruction for use in multi-threading >> systems as a building block for atomic accesses and lock-free algorithms >> (and for implementing locks): >> >> <https://en.wikipedia.org/wiki/Compare-and-swap> >> >> >> It corresponds roughly to the C code, executed atomically : >> >> bool cas(uint32_t * p, uint32_t old, uint32_t new) { >> if (*p == old) { >> *p = new; >> return true; >> } else { >> return false; >> } >> } >> >> It is useful, but has its limits (the wikipedia page describes some, if >> you are interested). In cases like this, it could be useful. >> >> However, the OP is using an ARM - and like most (but not all) RISC cpus, >> ARM does not have a CAS instruction. Instead, it has a load-link and >> store-conditional pair, which is more powerful and flexible than CAS but >> a little harder to use. > >Someone was on my case about a self designed CPU not having some instruction that is essential for multitasking. Would this be the instruction? I'm not sure I understand. When you say *p == old, where is old kept? Is there really a stored value of old or is this a way of saying *p /= new??? In that case the code could be... > >bool cas(uint32_t * p, uint32_t old, uint32_t new) { > if (*p == old) { > *p = new; > return true; > } else { > *p = new; > return false; > } >} > >I write this because in my basic architecture memory is read/written on opposite phases of the CPU clock and all instructions are one clock cycle. The write is predetermined in the first phase of the clock, so the CPU can't have a RMW cycle. It can have a W/R cycle where the read data is the old data before the write. As long at the write is always done it can do the above in a single, non interruptible cycle... not that I'm contemplating performing multitasking. The code is more complex than warranted for a 600 LUT CPU. Just add another CPU. lol > >Giving what you wrote more thought it seems pretty clear it has to be implemented the way you have it written. > >I should it look up and learn something, lol.
No, the idea is to update the word in memory only if it hasn't been changed. The classic example is using CAS to add an item to a linked list. You read the head pointer (that has to happen atomically, but on most CPUs that just requires that it be aligned), construct the new first element (most crucially the next pointer), and then if the head pointer is unchanged, you can replace it with a pointer to the new first item.

If the values are not equal, you don't want to update the head pointer or you'll trash the linked list. In that case you retry the insertion operation using the new head pointer.

CAS is intended to be safe to use to make that update, as it's atomic - the read of the value in memory, the compare to the old value, and the conditional update form an atomic block, and can't be interrupted or messed with by other CPUs in the system.

CAS is pretty easy to simulate with LL/SC. In some cases you'd be better off adjusting the algorithm to better use LL/SC. In this case it depends on how you're accessing the low word of the timer. If you have only a single thread of execution, you can fake CAS by disabling interrupts.

What ISA is this for?
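For illustration, the linked-list insertion described at the top of this post might look roughly like this in C (a sketch, not from the thread; cas_ptr() stands in for a pointer-sized CAS, which on a 32-bit machine is just the 32-bit CAS being discussed, and real code would qualify head as volatile or atomic).

    #include <stdbool.h>

    struct node {
        struct node *next;
        int payload;
    };

    bool cas_ptr(struct node **p, struct node *old, struct node *new_val);  /* hypothetical */

    static struct node *head;   /* list head, shared between contexts */

    /* Push a node on the front of the list without locks: snapshot the head,
     * link the new node to it, then publish it only if the head is unchanged. */
    void push(struct node *n)
    {
        struct node *old;
        do {
            old = head;       /* read the current head (must be an atomic read) */
            n->next = old;    /* new element points at the old first item */
        } while (!cas_ptr(&head, old, n));   /* retry if another CPU got in first */
    }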
Reply by Rick C February 12, 2020
On Wednesday, February 12, 2020 at 12:49:20 PM UTC-5, David Brown wrote:
> On 12/02/2020 16:26, Rick C wrote: > > On Monday, February 10, 2020 at 7:40:10 PM UTC-5, robert...@yahoo.com wrote: > >> > >> As I mentioned elsewhere in the thread, if you have an atomic 32-bit > >> read, and a 32-bit CAS, you can do this without locks pretty simply. > > > > I did a search but it didn't turn up. What's a CAS??? > > > > Compare-and-swap. It is a common instruction for use in multi-threading > systems as a building block for atomic accesses and lock-free algorithms > (and for implementing locks): > > <https://en.wikipedia.org/wiki/Compare-and-swap> > > > It corresponds roughly to the C code, executed atomically : > > bool cas(uint32_t * p, uint32_t old, uint32_t new) { > if (*p == old) { > *p = new; > return true; > } else { > return false; > } > } > > It is useful, but has its limits (the wikipedia page describes some, if > you are interested). In cases like this, it could be useful. > > However, the OP is using an ARM - and like most (but not all) RISC cpus, > ARM does not have a CAS instruction. Instead, it has a load-link and > store-conditional pair, which is more powerful and flexible than CAS but > a little harder to use.
Someone was on my case about a self-designed CPU not having some instruction that is essential for multitasking. Would this be the instruction? I'm not sure I understand. When you say *p == old, where is old kept? Is there really a stored value of old, or is this a way of saying *p /= new??? In that case the code could be...

    bool cas(uint32_t * p, uint32_t old, uint32_t new) {
        if (*p == old) {
            *p = new;
            return true;
        } else {
            *p = new;
            return false;
        }
    }

I write this because in my basic architecture memory is read/written on opposite phases of the CPU clock and all instructions are one clock cycle. The write is predetermined in the first phase of the clock, so the CPU can't have a RMW cycle. It can have a W/R cycle where the read data is the old data before the write. As long as the write is always done it can do the above in a single, non-interruptible cycle... not that I'm contemplating performing multitasking. The code is more complex than warranted for a 600 LUT CPU. Just add another CPU. lol

Giving what you wrote more thought, it seems pretty clear it has to be implemented the way you have it written.

I should look it up and learn something, lol.

--
Rick C.
-+- Get 1,000 miles of free Supercharging
-+- Tesla referral code - https://ts.la/richard11209
Reply by David Brown February 12, 2020
On 12/02/2020 16:26, Rick C wrote:
> On Monday, February 10, 2020 at 7:40:10 PM UTC-5, robert...@yahoo.com wrote: >> >> As I mentioned elsewhere in the thread, if you have an atomic 32-bit >> read, and a 32-bit CAS, you can do this without locks pretty simply. > > I did a search but it didn't turn up. What's a CAS??? >
Compare-and-swap. It is a common instruction for use in multi-threading systems as a building block for atomic accesses and lock-free algorithms (and for implementing locks):

<https://en.wikipedia.org/wiki/Compare-and-swap>

It corresponds roughly to the C code, executed atomically:

    bool cas(uint32_t * p, uint32_t old, uint32_t new) {
        if (*p == old) {
            *p = new;
            return true;
        } else {
            return false;
        }
    }

It is useful, but has its limits (the wikipedia page describes some, if you are interested). In cases like this, it could be useful.

However, the OP is using an ARM - and like most (but not all) RISC cpus, ARM does not have a CAS instruction. Instead, it has a load-link and store-conditional pair, which is more powerful and flexible than CAS but a little harder to use.
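As a usage sketch (not part of David's post): the typical pattern around a cas() like the one above is a read/compute/retry loop, which turns any plain read-modify-write into an atomic one. For example, an atomic add:

    #include <stdbool.h>
    #include <stdint.h>

    bool cas(uint32_t *p, uint32_t old, uint32_t new_val);  /* as defined above */

    /* Atomic add built from a CAS retry loop: take a snapshot, compute the
     * new value, and only publish it if nobody changed *p in the meantime. */
    uint32_t atomic_add(uint32_t *p, uint32_t delta)
    {
        uint32_t old, new_val;
        do {
            old = *p;                 /* snapshot (assumed to be an atomic read) */
            new_val = old + delta;
        } while (!cas(p, old, new_val));
        return new_val;
    }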
Reply by Rick C February 12, 2020
On Monday, February 10, 2020 at 7:40:10 PM UTC-5, robert...@yahoo.com wrote:
> > As I mentioned elsewhere in the thread, if you have an atomic 32-bit > read, and a 32-bit CAS, you can do this without locks pretty simply.
I did a search but it didn't turn up. What's a CAS??? -- Rick C. --+ Get 1,000 miles of free Supercharging --+ Tesla referral code - https://ts.la/richard11209
Reply by Dimiter_Popoff February 11, 2020
On 2/11/2020 2:40, Robert Wessel wrote:
> On Mon, 10 Feb 2020 00:55:13 +0100, pozz <pozzugno@gmail.com> wrote:
>
>> On 08/02/2020 18:03, Kent Dickey wrote:
>>> [...]
>>> Unfortunately, with this design, I believe it is not possible to implement a GetTick() function which does not sometimes fail to return a correct time. There is a fundamental race between the interrupt and the timer value rolling to 0 which software cannot account for.
>>
>> Good point, Kent. Thank you for your post that helps to fix some critical bugs.
>>
>> You're right, ISRs aren't executed immediately after the relative event occurred. We should think ISR code runs after many cycles the interrupt event.
>>
>>> 1) Have a single GetTick() routine, which is single-tasking (by disabling interrupts, or a mutex if there are multiple processors). This requires something to call GetTick() at least once every 49 days (worst case). This is basically the Rich C./David Brown solution, but they don't mention that you need to remove the interrupt on 32-bit overflow.
>>
>> I think you mentioned to disable interrupts to avoid any preemption from RTOS scheduler, effectively blocking scheduler at all. However I know it's a bad idea to enable/disable interrupts "manually" with an RTOS. Maybe the mutex for GetTick() is a better idea, something similar to this:
>>
>>     uint64_t
>>     GetTick(void)
>>     {
>>         mutex_take();
>>
>>         static uint32_t ticks_high;
>>         uint32_t ticks_hw = hwcnt_get();
>>         static uint32_t ticks_last;
>>
>>         if (ticks_last > ticks_hw) ticks_high++;
>>         ticks_last = ticks_hw;
>>         mutex_give();
>>
>>         return ((uint64_t)ticks_high << 32) | ticks_hw;
>>     }
>>
>>> 2) Use a higher interrupt rate. For instance, if we can take the interrupt when read_low32() has carry from bit 28 to bit 29, then we can piece together code which can work as long as GetTick() isn't delayed by more than 3-4 days. This require GetTick() to change using code given under #4 below.
>>>
>>> 3) Forget the hardware counter: just take an interrupt every 1ms, and increment a global variable uint64_t ticks64 on each interrupt, and then GetTick just returns ticks64. This only works if the CPU hardware supports atomic 64-bit accesses. It's not generally possible to write C code for a 32-bit processor which can guarantee 64-bit atomic ops, so it's best to have the interrupt handler deal with two 32-bit variables ticks_low and ticks_high, and then you still need the GetTicks() to have a while loop to read the two variables.
>>
>> What about?
>>
>>     static volatile uint64_t ticks64;
>>     void timer_isr(void) {
>>         ticks64++;
>>     }
>>     uint64_t GetTick(void) {
>>         uint64_t t1 = ticks64;
>>         uint64_t t2;
>>         while((t2 = ticks64) - t1 > 100) {
>>             t1 = t2;
>>         }
>>         return t2;
>>     }
>>
>> If dangerous things happen (ISR executes during GetTick), t2-t1 is a very big number. 100ms represent the worst case max duration of ISRs/tasks that could preempt/interrupt GetTick. We could increase 100 even more.
>
> As I mentioned elsewhere in the thread, if you have an atomic 32-bit read, and a 32-bit CAS, you can do this without locks pretty simply.
And I replied without having understood what you meant :-). Sorry about that. Dimiter