EmbeddedRelated.com
Forums

Atmel releasing FLASH AVR32 ?

Started by -jg March 19, 2007
Jim Granville wrote:
> Wilco Dijkstra wrote:
>> "Jim Granville" <no.spam@designtools.maps.co.nz> wrote
>>> Wilco Dijkstra wrote:
>>>
>>>> It is impossible to run code at a predictable speed, so you're
>>>> screwed no matter whether you use a cache or not.
>>>
>>> ?! - what ? Or are you talking only within the ARM subset of
>>> the CPU universe here ?
>>
>> I guess you haven't heard about interrupts, wait states, cycle
>> stealing DMA and other niceties then. Some of us live in the
>> real world...
>
> Which has nothing to do with the false, sweeping claim you made
> above.
>
> Not only is it possible to run code at predictable speeds,
> a large number of designs out there are doing this on a daily basis.
I think you are talking about a minimum service time, while Wilco is
talking about entirely predictable times. You can't have the latter
when essentially random asynchronous events steal processing time. We
can control the net effect by adding timer interrupts.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>

--
Posted via a free Usenet account from http://www.teranews.com
On Wed, 21 Mar 2007 21:29:17 -0500, CBFalconer <cbfalconer@yahoo.com>
wrote:

>Jim Granville wrote:
>> Wilco Dijkstra wrote:
... snip ...
>
>I think you are talking about a minimum service time, while Wilco
>is talking about entirely predictable times. You can't have the
>latter when essentially random asynchronous events steal processing
>time. We can control the net effect by adding timer interrupts.
I wrote an application where cycle-by-cycle exact counts are precisely
required, with repeatability of less than 8ns variation of the signal
observed externally with a high speed scope. It's crucial because the
timing is the divisor in some vital calculations where error cannot be
well tolerated, and I have no external feedback about its actual value,
so I need to know it a priori from the crafted design.

The only interrupts present in the system are those from a timer, which
is set to interrupt only when I happen to _know_ that there is free
time to tolerate the interruption. Even the serial ports' operation is
synchronized to an available window of time. The operation of external
hardware by the software must be extremely precisely controlled, and
there are multiple lines to control in certain sequences, driven by
zero-overhead loops that the DSP supports.

Entirely predictable times, known to the cycle. Like clockwork. But
then, I carefully crafted the entire chain of timing sequences and the
asynchronous events to occur exactly when I could afford them to occur.

Jon
CBFalconer wrote:

> Jim Granville wrote:
>> Wilco Dijkstra wrote:
... snip ...
>
> I think you are talking about a minimum service time, while Wilco
> is talking about entirely predictable times. You can't have the
> latter when essentially random asynchronous events steal processing
> time. We can control the net effect by adding timer interrupts.
Wilco inhabits the world Planet-ARM, whilst I live on
Planet-Microcontroller, but his statement did not say 'entirely
predictable times'; it said: "It is impossible to run code at a
predictable speed,..", which is simply nonsense, but has some merit as
a good example of a "rash generalisation" :)

Jon has shipping examples, so do we, and many others.

Some uC have jitter-free interrupts; others have interrupts that can be
made jitter-free, with the right design skills. There was an earlier
thread about the merits of designing a core with a fixed INT latency,
even if that meant inserting delays on the faster paths. The silicon
cost of this is quite low. What you gain is a drop in jitter from
multiple cycles to clock-edge levels - that can be a 100:1 improvement.

We have also routinely branch-delay mapped code to get phase-error-free
output. One design was a PAL signal generator, where you certainly DO
notice any jitter - and if "It is impossible to run code at a
predictable speed,.." were true, we could not have built this in SW.

-jg
Jonathan Kirwan wrote:
>
... snip ...
> Entirely predictable times, known to the cycle. Like clockwork.
> But then, I carefully crafted the entire chain of timing sequences
> and the asynchronous events to occur exactly when I could afford
> them to occur.
I haven't done that for about 25 years, when I built a cheap timer for
swimming meets. I didn't have any good touchpads though. The timer was
built around an 8080, and needed careful construction to make all
paths through routines take constant time.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
>> A multithreaded CPU running at 400 MHz can do the task of 40 CPUs
>> running at 10 MHz.
>
> And a non-multithreaded CPU running at 400MHz can do the task of 40
> CPUs running at 10MHz. Multithreading doesn't enter the picture at all...
Since you insist on not understanding, try this example:

    void spi_task(unsigned char *mbox)
    {
        unsigned char data;
        int i;

        while (1) {
            data = 0;
            waitfor(!CS);
            for (i = 0; i < 8; i++) {
                waitfor(SCK);
                data = (data << 1) | MOSI;
                waitfor(!SCK);
            }
            send(mbox, data);
            waitfor(CS);
        }
    }

You have 40 tasks, each running a S/W slave SPI. All SPIs have to run
at the *same* frequency but are otherwise independent. I.e. it may be
the case that all clocks toggle at exactly the same time, or no clock
toggles at the same time as another clock.

All SPIs must be handled concurrently.

What is the maximum fixed frequency you can accept, with or without
multithreading?

Maybe now you get it?
> Wilco
--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may,
or may not be shared by my employer Atmel Nordic AB
"Jim Granville" <no.spam@designtools.maps.co.nz> wrote in message 
news:46021e47$1@clear.net.nz...
> CBFalconer wrote:
>> I think you are talking about a minimum service time, while Wilco
>> is talking about entirely predictable times. You can't have the
>> latter when essentially random asynchronous events steal processing
>> time. We can control the net effect by adding timer interrupts.
Indeed.
> Wilco inhabits the world Planet-ARM, whilst I live on Planet-Microcontroller,
I was talking about current microcontrollers, and that includes ARM and many others.
> but his statement did not say 'entirely predictable times', it said:
> "It is impossible to run code at a predictable speed,..", !? which is
> simply nonsense, but has some merit as a good example of a "rash
> generalisation" :)
I did indeed mean entirely predictable; this was clear from the
context, which you left out. We were talking about caches, and Ulf
mentioned that on a cache hit execution becomes unpredictable. I.e.
code runs too fast rather than failing to meet a realtime deadline!

I stand by my claim that it is impossible to make code run with a
fixed timing on current microcontrollers (just to make it 100% clear,
I mean non-trivial code, and dealing with realtime events).
Microcontrollers typically have different memory timings for the
different memories, and there are data-dependent instruction timings
to worry about, so you need to write everything in assembler and
carefully balance the timings of if/then statements. If you pass
pointers then you'd need to take the memory timing into account
wherever the pointers are used.

Then there is the interrupt problem. If you do service (asynchronous)
interrupts, then only the highest priority interrupt could run with a
fixed execution time - assuming the controller has a fixed interrupt
latency, which is rarely true. If you use polling to avoid this then
you have a different interrupt latency problem: as you can only poll
once in a while, asynchronous events cannot be handled in a fixed time.
> Jon has shipping examples, so do we, and many others.
Of trivial programs, yes. In the original post mobile phones, WLAN,
GPS, Bluetooth were mentioned - could you do any of that? Your example
of a PAL generator proves my point: it can't react to anything else
while you're emitting a frame.

Wilco
> Then there is the interrupt problem. If you do service (asynchronous)
> interrupts, then only the highest priority interrupt could run with a
> fixed execution time - assuming the controller has a fixed interrupt
> latency, which is rarely true. If you use polling to avoid this then
> you have a different interrupt latency problem as you can only poll
> once in a while, so asynchronous events cannot be handled in a fixed
> time.
In a multithreaded core, if you have a thread allocated to that event,
you can guarantee a response time. See the previous SPI slave example.

You need to guarantee that the thread reads the input pin before the
SPI master toggles the clock. You need one instruction to read that
input pin as fast as possible; the rest of the thread can execute at
any time.

In a single threaded core, you would have problems due to overhead in
interrupt entry/exit.

--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may,
or may not be shared by my employer Atmel Nordic AB
"Ulf Samuelsson" <ulf@a-t-m-e-l.com> wrote in message news:ettfd5$836$1@aioe.org...

> void spi_task(unsigned char *mbox)
> {
>     unsigned char data;
>     int i;
>
>     while (1) {
>         data = 0;
>         waitfor(!CS);
>         for (i = 0; i < 8; i++) {
>             waitfor(SCK);
>             data = (data << 1) | MOSI;
>             waitfor(!SCK);
>         }
>         send(mbox, data);
>         waitfor(CS);
>     }
> }
>
> You have 40 tasks, each running a S/W slave SPI.
> All SPIs have to run at the *same* frequency but are otherwise
> independent. I.e. it may be the case that all clocks toggle at
> exactly the same time, or no clock toggles at the same time as
> another clock.
>
> All SPIs must be handled concurrently.
>
> What is the maximum fixed frequency you can accept,
> with or without multithreading?
Using polling in both cases would result in about the same max
frequency. Assuming all ports run at the same frequency and are
active, the amount of code that needs to execute to receive 40 8-bit
values is the same, whether multithreaded or not. If not all ports are
active then multithreading has much lower CPU utilization (as only a
few threads are running).

Using interrupts in both cases would result in about the same max
frequency. The maximum frequency is lower compared to polling (due to
the interrupt latency overhead - twice as slow is possible in a worst
case scenario). Multithreading will have a similar interrupt latency,
as taking an interrupt is virtually identical to starting a new thread
(some CPUs even switch to a different set of registers).

The advantage of using interrupts is that CPU utilization is much
lower if only a few SPI ports are active. Peripherals typically have
some buffering to reduce the interrupt rate, so the overhead is
minimal (this is a little extra hardware, far less than hardware
multithreading needs). Therefore the advantage of polling when all
devices are active is pretty small.

So there is little difference between multithreaded polling and
non-multithreaded interrupts. If you're claiming that polling has
lower CPU utilization in a multithreaded environment, then I agree.
If you're claiming that interrupts have a large overhead if you do
very little work per interrupt (i.e. no buffering), then I agree.
But I still don't see any advantage inherent to multithreading.

Wilco
"Ulf Samuelsson" <ulf@a-t-m-e-l.com> wrote in message news:ettrol$35v$1@aioe.org...

> In a multithreaded core, if you have a thread allocated
> to that event you can guarantee a response time.
>
> See previous SPI slave example.
Please see my reply...
> You need one instruction to read that input pin as fast as possible,
> the rest of the thread can execute at any time.
>
> In a single threaded core, you would have problems
> due to overhead in interrupt entry/exit.
Starting a thread on an event is just as complex as handling an interrupt. Wilco
"Wilco Dijkstra" <Wilco_dot_Dijkstra@ntlworld.com> skrev i meddelandet 
news:C2wMh.3352$5c2.86@newsfe3-win.ntli.net...
> > "Ulf Samuelsson" <ulf@a-t-m-e-l.com> wrote in message > news:ettrol$35v$1@aioe.org... > >> In a multithreaded core, if you have a thread allocated >> to that event you can guarantee a response time. >> >> See previous SPI slave example. > > Please see my reply... > >> You need one instruction to read that input pin as fast as possible, >> the rest of the thread can execute at any time. >> >> In a single threaded core, you would have problems >> due to overhead in interrupt entry/exit. > > Starting a thread on an event is just as complex as handling an > interrupt. > > Wilco >
No, you start a thread containing a loop, and at the beginning of the
loop you wait for an event. Once that event occurs, the thread becomes
runnable and you can read data on the next CPU cycle.

--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may,
or may not be shared by my employer Atmel Nordic AB