
Atmel releasing FLASH AVR32 ?

Started by -jg March 19, 2007
"Jim Granville" <no.spam@designtools.maps.co.nz> wrote in message 
news:461042b5@clear.net.nz...
> Data sheets and info on Eval PCB, etc, are now up at > > http://www.atmel.com/dyn/general/updates.asp?cboDocType=0&cboFamily=0&btnSubmit=Submit > > -jg >
...... and of coarse the FreeRTOS.org port to go along with it :o) http://www.freertos.org/portAVR32.html [direct link - without menu frame (horror)] -- Regards, Richard. + http://www.FreeRTOS.org A free real time kernel for 8, 16 and 32bit systems. + http://www.SafeRTOS.com An IEC 61508 compliant real time kernel for safety related systems.
FreeRTOS.org wrote:
> "Jim Granville" <no.spam@designtools.maps.co.nz> wrote in message
> news:461042b5@clear.net.nz...
>
>> Data sheets and info on Eval PCB, etc, are now up at
>>
>> http://www.atmel.com/dyn/general/updates.asp?cboDocType=0&cboFamily=0&btnSubmit=Submit
>>
>> -jg
>
> ...... and of course the FreeRTOS.org port to go along with it :o)
>
> http://www.freertos.org/portAVR32.html

:)

Did you try the AVR32 Studio? - any comments?

-jg
"Jim Granville" <no.spam@designtools.maps.co.nz> wrote in message 
news:4610c10a$1@clear.net.nz...
> FreeRTOS.org wrote: >> "Jim Granville" <no.spam@designtools.maps.co.nz> wrote in message >> news:461042b5@clear.net.nz... >> >>>Data sheets and info on Eval PCB, etc, are now up at >>> >>>http://www.atmel.com/dyn/general/updates.asp?cboDocType=0&cboFamily=0&btnSubmit=Submit >>> >>>-jg >>> >> >> >> ...... and of coarse the FreeRTOS.org port to go along with it :o) >> >> http://www.freertos.org/portAVR32.html > > :) > > Did you try the AVR32 Studio ? - any comments ? > > -jg
All I've done with it is start it up and note that it was Eclipse. I did not use it in anger. I suppose I'm going to have to get into Eclipse (old dog, new tricks), but so far have not found a way of creating a project in Eclipse that permits files to be included using a relative path (below the project directory). If its as good as the 8bit AVRStudio version then it will be a very useful tool. I don't know if the 8bit version will be getting migrated over to Eclipse too? -- Regards, Richard. + http://www.FreeRTOS.org A free real time kernel for 8, 16 and 32bit systems. + http://www.SafeRTOS.com An IEC 61508 compliant real time kernel for safety related systems.
"Jim Granville" <no.spam@designtools.maps.co.nz> wrote in message 
news:460af1e0$1@clear.net.nz...

> On the subject of Multiple cores, and multiple threads, news today > shows this is advancing quite quickly. Intel does not seem to > think it is a 'waste of die area'.....
If you read what I wrote then you'd know that on a high end CPU it takes far less area than on a low end CPU. However Intel must still think it is a waste of die area, otherwise all their CPUs would have it... It is required now as 8 cores on a single chip use so much bandwidth that most cores are waiting for external memory most of the time (despite the huge L2 and L3 caches). Switching to a different thread on a cache miss makes sense in this case.
> Eight cores and 16 threads (probably they mean per-core?) is impressive > for what sound like fairly mainstream cores.
It clearly says 2 threads per core. Any more would be a waste. Wilco
>> On the subject of Multiple cores, and multiple threads, news today
>> shows this is advancing quite quickly. Intel does not seem to
>> think it is a 'waste of die area'.....
>
> If you read what I wrote then you'd know that on a high end CPU it
> takes far less area than on a low end CPU. However Intel must still
> think it is a waste of die area, otherwise all their CPUs would have it...

Multithreading on a high end general purpose CPU brings problems of its
own, especially cache thrashing. With an embedded core where you use
tightly coupled high bandwidth memory for most of the threads you do not
have that problem.

Note I am not advocating symmetric multiprocessing. I think it is
eminently useful for asymmetric multiprocessing, where you have some
dedicated tasks to do which are best implemented in a separate CPU to
avoid real time response conflicts and can be implemented in a low end
32 bitter.

I think you need to stop trying to explain why a single CPU is better
than a multithreaded CPU, because no one is using a single CPU for
implementing two simultaneously operating software MACs. If you
continue, that just proves that you are either ignorant or not
listening.

The issue is replacing multiple CPUs/memory subsystems with a single
multithreaded CPU addressing a memory subsystem consisting of internal
TCM memory, internal loosely coupled memory (flash?) and external
memory.

> It is required now as 8 cores on a single chip use so much
> bandwidth that most cores are waiting for external memory most
> of the time (despite the huge L2 and L3 caches). Switching to a
> different thread on a cache miss makes sense in this case.
>
>> Eight cores and 16 threads (probably they mean per-core?) is impressive
>> for what sound like fairly mainstream cores.
>
> It clearly says 2 threads per core. Any more would be a waste.

Look at Sun and the UltraSPARC T1; they certainly do not see the
boundaries that you see. I do not think that they are limited by Intel's
vision... Also I pointed you at the new MIPS multithreading core. They
certainly do not agree with you!

> Wilco

--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may,
or may not be shared by my employer Atmel Nordic AB
"Ulf Samuelsson" <ulf@a-t-m-e-l.com> wrote in message news:ev2i1h$qhh$1@aioe.org...

> Multithreading on a high end general purpose CPU gives problem on their own. > Especially with cache trashing.
Absolutely. The "solution" is to add more cache...
> With an embedded core where you use tightly coupled high bandwidth memory > for most of the threads you do not have that problem
Same solution: more fast on-chip memory.
> I think it is eminently useful for assymetric multiprocessing where > you have some dedicated tasks to do which are best implemented > in a separate CPU to avoid real time response conflicts and can > be implemented in a low end 32 bitter.
I'm not quite sure what you're saying here. Are you advocating asymmetric multiprocessing or asymmetric multithreading?
> I think you need to stop trying to explain why a single CPU > is better than a multiththreaded CPU, because noone is > using a single CPU for implementing two simulaneously > operating software MACs.
First of all, you're the one that claims one CPU is better than 2... I believe 2 CPUs is better in many cases - multicore is the future. However if you do move to a single (faster) CPU then it doesn't make much difference in terms of realtime response whether that CPU is multithreaded or not. You seem to believe that threads are somehow much better than interrupts - but as I've shown they are equivalent concepts.
> If you continue, that just proves that you are either ignorant or not listening
That kind of response is not helping your case. If you believe I'm wrong, then why not prove me wrong with some hard facts and data?
> The issues is replacing multiple CPUs/Memory Subsystems > with a single multithreaded CPU addressing a memory subsystem&#4294967295; > consisting of internal TCM memory, internal loosely coupled > memory (flash?) and external memory.
Most realtime CPUs have some form of fast internal memory, this is not relevant to multithreading.
>>> Eight cores and 16 threads (probably they mean per-core?) is impressive >>> for what sound like fairly mainstream cores. >> >> It clearly says 2 threads per core. Any more would be a waste. >> > > Look at Sun and UltraSparc T1, they certainly do not see the boundaries that you see.
The T1 has tiny caches and stalls on a cachemiss unlike any other high-end out-of-order CPU, so they require more threads to keep going if one thread stalls. It is also designed for highly multithreaded workloads, so having more thread contexts means fewer context switches in software, which can be a big win on workloads running on UNIX/Windows (realtime OSes are far better at these things).
> I do not think that they are limited by Intels vision... > Also I pointed you at the new MIPS Multithreading core. > They certainly do not agree with You!
If you do not understand the differences between cores like Itanium-2, Pentium-4, Nehalem, Power5, Power6 (all 2-way multithreaded), and cores like the T1, MIPS34K and Ubicom (8+ -way threaded), then you're not the expert on multithreading you claim to be. Wilco
"Wilco Dijkstra" <Wilco_dot_Dijkstra@ntlworld.com> skrev i meddelandet 
news:e8eRh.2250$gr2.1244@newsfe4-gui.ntli.net...
> > "Ulf Samuelsson" <ulf@a-t-m-e-l.com> wrote in message > news:ev2i1h$qhh$1@aioe.org... > >> Multithreading on a high end general purpose CPU gives problem on their >> own. >> Especially with cache trashing. > > Absolutely. The "solution" is to add more cache...
No, the solution is to have more associativity in the cache. Having 4GB of direct mapped cache will not help you when two threads start using the same cache line.
>> With an embedded core where you use tightly coupled high bandwidth memory >> for most of the threads you do not have that problem > > Same solution: more fast on-chip memory.
If you want to solve the problem, general purpose for symmetric multiprocessing by putting the application memory on the chip, you are going to run into significant problems. You are beginning to get out of touch with reality, my dear friend.
>> I think it is eminently useful for assymetric multiprocessing where >> you have some dedicated tasks to do which are best implemented >> in a separate CPU to avoid real time response conflicts and can >> be implemented in a low end 32 bitter. > > I'm not quite sure what you're saying here. Are you advocating > asymmetric multiprocessing or asymmetric multithreading? >
I am saysing that it is cheaper to use asymmetric multithreading than asymmetric multiprocessing..
>> I think you need to stop trying to explain why a single CPU >> is better than a multiththreaded CPU, because noone is >> using a single CPU for implementing two simulaneously >> operating software MACs. > > First of all, you're the one that claims one CPU is better than 2... > I believe 2 CPUs is better in many cases - multicore is the future. > However if you do move to a single (faster) CPU then it doesn't > make much difference in terms of realtime response whether that > CPU is multithreaded or not. You seem to believe that threads are > somehow much better than interrupts - but as I've shown they are > equivalent concepts.
In order for interrupts to be equivalent to multithreading, where you can select a new executing an instruction from an interrupt every new clock cycle, you have to add additional constraint to your "interrupt" system. You have to have multiple register files and multiple program counters in the system. You have to add additional hardware to dynamically raise/lower priorities in order to distribute instructions among the different interrupts. Your "interrupt" driven system is likely to be mistaken for a multithreading system. Your way of discussion is way off , you ignore ALL arguments and requests to prove your point, in favour of continued rambling... You need to show that the given example (Multiple SPI slaves) can be handled equally well by an *existing* interrupt driven system as well as how it can be handled by an *existing* multithreaded system like the zero context switch cost MIPS processor, I now put the flip on the shoulder, can you concentrate to that instead of rambling?
> >> If you continue, that just proves that you are either ignorant or not >> listening > > That kind of response is not helping your case. If you believe I'm wrong, > then why not prove me wrong with some hard facts and data? >
I already did. I showed that there exist zero context switch cost MIPS processor. You have not shown that there exist zero cost interrupts. If go back to the example. You have a fixed clock. This is used by a number of SPI masters to provide data to your chip. Your chip implements SPI slaves and each SPI slave should run in a separate task/thread or whatever. The communication on each SPI slave channels is totally different and should be developed by two teams which do not communicate between each other and they are not aware of each other. once per byte, the SPI data is written to memory and an event flag register private to the thread/interrupt is written. They are aware of the execution environment, which in the interrupt case is the RTOS and how interrupts are handled Using one multithreaded and one interrupt processor, with frequency scaled so the top level of MIPS is equivalent, show that you can implement the SPI slave.
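[A toy model of the setup described above. All names and structure here
are illustrative only - this is not real SPI or MIPS 34K code. Each
channel owns a private event flag, and one hardware-thread step per
cycle consumes at most one pending byte, independently of the other
channels.]

```python
# Sketch of the multiple-SPI-slave example: one hardware thread per
# slave channel, each polling its own private event flag. A round-robin
# pass over the channels stands in for a fine-grained multithreaded
# core that gives every thread one instruction slot per cycle.

class SpiChannel:
    def __init__(self):
        self.event = False      # private event flag, set by "hardware"
        self.data = None        # byte latched by the SPI shift register
        self.received = []      # bytes this thread has consumed

    def hw_byte(self, b):
        """SPI hardware completes a byte: latch it and raise the flag."""
        self.data = b
        self.event = True

    def thread_step(self):
        """One instruction slot for this thread: poll, maybe consume."""
        if self.event:
            self.event = False
            self.received.append(self.data)

channels = [SpiChannel() for _ in range(4)]

# Masters deliver a byte to every slave at the same time; the core then
# interleaves one step per thread (zero-cost thread switching assumed).
for b in (0x10, 0x20, 0x30):
    for ch in channels:
        ch.hw_byte(b)
    for ch in channels:
        ch.thread_step()

print([ch.received for ch in channels])
# every channel has consumed every byte, with no shared state
```

The point of the sketch is only that each per-channel loop is written
with no knowledge of the other channels, matching the two-independent-
teams constraint in the example.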
>> The issue is replacing multiple CPUs/memory subsystems
>> with a single multithreaded CPU addressing a memory subsystem
>> consisting of internal TCM memory, internal loosely coupled
>> memory (flash?) and external memory.
>
> Most realtime CPUs have some form of fast internal memory;
> this is not relevant to multithreading.
>
>>>> Eight cores and 16 threads (probably they mean per-core?) is impressive
>>>> for what sound like fairly mainstream cores.
>>>
>>> It clearly says 2 threads per core. Any more would be a waste.
>>>
>> Look at Sun and the UltraSPARC T1, they certainly do not see the
>> boundaries that you see.
>
> The T1 has tiny caches and, unlike high-end out-of-order CPUs, stalls
> on a cache miss, so it requires more threads to keep going if one
> thread stalls. It is also designed for highly multithreaded workloads,
> so having more thread contexts means fewer context switches in software,
> which can be a big win on workloads running on UNIX/Windows (realtime
> OSes are far better at these things).

It is the other way around. *Because* you have many threads you CAN
stall a thread on a cache miss without affecting the total throughput
of the CPU. It is very likely that the T1 shoves more instructions per
clock cycle than a "high end, branch prediction, out of order" single
or dual thread CPU.

>> I do not think that they are limited by Intel's vision...
>> Also I pointed you at the new MIPS multithreading core.
>> They certainly do not agree with you!
>
> If you do not understand the differences between cores like Itanium-2,
> Pentium-4, Nehalem, Power5, Power6 (all 2-way multithreaded),
> and cores like the T1, MIPS 34K and Ubicom (8+ -way threaded),
> then you're not the expert on multithreading you claim to be.

You seem to want to slip into a discussion of which type of CPU will
exhibit the highest MIPS rate for a single thread. That is trying to
force open an already open door.

> Wilco

--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may,
or may not be shared by my employer Atmel Nordic AB
On Wed, 04 Apr 2007 23:49:48 GMT, "Wilco Dijkstra"
<Wilco_dot_Dijkstra@ntlworld.com> wrote:

> "Jim Granville" <no.spam@designtools.maps.co.nz> wrote in message
> news:460af1e0$1@clear.net.nz...
>
>> On the subject of Multiple cores, and multiple threads, news today
>> shows this is advancing quite quickly. Intel does not seem to
>> think it is a 'waste of die area'.....
>
> If you read what I wrote then you'd know that on a high end CPU it
> takes far less area than on a low end CPU. However Intel must still
> think it is a waste of die area, otherwise all their CPUs would have it...
>
> It is required now as 8 cores on a single chip use so much
> bandwidth that most cores are waiting for external memory most
> of the time (despite the huge L2 and L3 caches). Switching to a
> different thread on a cache miss makes sense in this case.
>
>> Eight cores and 16 threads (probably they mean per-core?) is impressive
>> for what sound like fairly mainstream cores.
>
> It clearly says 2 threads per core. Any more would be a waste.

The IP3000 from Ubicom supports 8 threads in hardware. Their solution
seems to me to be a very good one for multithreading in hardware, where
one needs deterministic response from all threads. It looks like they
essentially switch between instruction streams in hardware such that,
from a software point of view, each thread runs as if it is the only
thread, but on a CPU with only a percentage of the total speed.

Regards
Anton Erasmus
"Ulf Samuelsson" <ulf@a-t-m-e-l.com> wrote in message news:ev4t4i$d4c$1@aioe.org...
> "Wilco Dijkstra" <Wilco_dot_Dijkstra@ntlworld.com> skrev i meddelandet > news:e8eRh.2250$gr2.1244@newsfe4-gui.ntli.net... >> >> "Ulf Samuelsson" <ulf@a-t-m-e-l.com> wrote in message news:ev2i1h$qhh$1@aioe.org... >> >>> Multithreading on a high end general purpose CPU gives problem on their own. >>> Especially with cache trashing. >> >> Absolutely. The "solution" is to add more cache... > > No, the solution is to have more associativity in the cache. > Having 4GB of direct mapped cache will not help you when > two threads start using the same cache line.
No. If you switch between threads in a finegrained way you need to ensure that the working set of each thread stays in the cache. This means the cache needs to be large enough to hold the code and data from several threads. The problem is that L1 caches are often too small even for a single thread... Associativity is not an issue at all, most caches are already 4 or 8-way set associative. If it were feasible, a 4GB direct mapped cache would not thrash at all as no threads would ever use the same line.
>>> With an embedded core where you use tightly coupled high bandwidth memory >>> for most of the threads you do not have that problem >> >> Same solution: more fast on-chip memory. > > If you want to solve the problem, general purpose for symmetric multiprocessing > by putting the application memory on the chip, you are going to run into significant > problems. > You are beginning to get out of touch with reality, my dear friend.
The current trend is clear: more on-chip memory either as caches or tightly coupled memory. And FYI there are no problems with symmetric multiprocessing, people have been doing it for many years. Cache coherency is a well understood problem, even high-end ARMs have it.
> In order for interrupts to be equivalent to multithreading, > where you can select a new executing an instruction from > an interrupt every new clock cycle, you have to add > additional constraint to your "interrupt" system. > > You have to have multiple register files and multiple program counters in the system. > You have to add additional hardware to dynamically raise/lower priorities > in order to distribute instructions among the different interrupts. > Your "interrupt" driven system is likely to be mistaken for a multithreading system.
Is it really that difficult to understand? Let me explain it in a different way. Start with the MIPS 34k core, and assign 1 thread to the main task and the others to one interrupt each. Set the thread priority of the interrupt threads to infinite. At this point the CPU behaves exactly like an interrupt driven core that uses special registers on an interrupt (many do so, including ARM). If you can only ever run one thread, you can't mistake this for a multithreaded core. From the other perspective, in an interrupt drive core you typically associate a function with each interrupt. There is *nothing* that prevents a CPU from prefetching the first few instructions of some or all interrupt routines. In combination with the use of special registers to avoid save/restore overhead, this can significantly reduce interrupt latency. Now tell me what the difference is between the above 2 cases. Do you still believe interrupts and threads are not closely related?
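[The thought experiment above - interrupt-assigned threads at
"infinite" priority degenerating into plain interrupt handling - can be
sketched with a toy scheduler. The cycle model and names are
hypothetical; nothing here is MIPS 34K specific.]

```python
# Toy scheduler: one main thread plus "interrupt threads" that become
# runnable at a given cycle and have infinite priority. Because a ready
# interrupt thread always wins, the resulting trace is exactly what a
# conventional interrupt-driven core would produce.

def run(main_work, pending):
    """main_work: list of main-thread instruction labels.
    pending: {cycle_ready: handler_label} for interrupt threads."""
    trace = []
    cycle = 0
    while main_work or pending:
        fired = [c for c in pending if c <= cycle]
        if fired:                          # infinite-priority thread ready:
            trace.append(pending.pop(min(fired)))  # it preempts main
        elif main_work:
            trace.append(main_work.pop(0))         # otherwise main runs
        cycle += 1
    return trace

# main executes m0..m3; handler "irq" becomes runnable at cycle 1
print(run(["m0", "m1", "m2", "m3"], {1: "irq"}))
# -> ['m0', 'irq', 'm1', 'm2', 'm3']
```

With the priorities pinned this way the "multithreaded" machine and the
interrupt machine produce identical traces, which is the equivalence
being claimed.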
> Your way of discussion is way off; you ignore ALL arguments
> and requests to prove your point, in favour of continued rambling...
>
> You need to show that the given example (multiple SPI slaves)
> can be handled equally well by an *existing* interrupt driven system
> as well as how it can be handled by an *existing* multithreaded
> system like the zero context switch cost MIPS processor.

Done that, please reread my old posts. I have also shown that any
zero-cost context switch multithreaded CPU (if it exists) can behave
like a zero-cost interrupt based CPU.

However you haven't shown a 40-thread CPU capable of running your
example. Without one thread for each interrupt you need to use
traditional interrupt handling rather than polling for events. Most
embedded systems need more than the 8 interrupts/threads MIPS could
handle, especially when combining 2 or more existing cores into 1 as
you suggest.

>>> If you continue, that just proves that you are either ignorant or not
>>> listening
>>
>> That kind of response is not helping your case. If you believe I'm wrong,
>> then why not prove me wrong with some hard facts and data?
>>
> I already did.
> I showed that there exists a zero context switch cost MIPS processor.

No you didn't. The MIPS core can switch between threads on every
cycle, but that doesn't imply a zero cost context switch on an
interrupt.

> You have not shown that there exist zero cost interrupts.

There is no such thing as a zero-cost interrupt. There are a few CPUs
that can respond extremely quickly (eg. the Transputer, Forth chips).
However there is a tradeoff between the need for fast execution of
normal code and fast interrupt response time.

> To go back to the example:
>
> You have a fixed clock.
> This is used by a number of SPI masters to provide data to your chip.
> Your chip implements SPI slaves, and each SPI slave should run
> in a separate task/thread or whatever.
> The communication on each SPI slave channel is totally different
> and should be developed by two teams which do not communicate
> with each other and are not aware of each other.
> Once per byte, the SPI data is written to memory and
> an event flag register private to the thread/interrupt is written.
>
> The teams are aware of the execution environment, which in the
> interrupt case is the RTOS and how interrupts are handled.
>
> Using one multithreaded and one interrupt driven processor, with
> frequency scaled so the top level of MIPS is equivalent, show that
> you can implement the SPI slaves.

I've already described 2 ways of doing it, reread my old posts.
If you think it is not possible, please explain why exactly you think
that, then I'll explain the fallacy in your argument.

>> The T1 has tiny caches and, unlike high-end out-of-order CPUs, stalls
>> on a cache miss, so it requires more threads to keep going if one
>> thread stalls. It is also designed for highly multithreaded workloads,
>> so having more thread contexts means fewer context switches in software,
>> which can be a big win on workloads running on UNIX/Windows (realtime
>> OSes are far better at these things).
>
> It is the other way around. *Because* you have many threads you CAN
> stall a thread on a cache miss without affecting the total throughput
> of the CPU.

For the same amount of hardware, more threads means less space for
caches, so more cache misses. More cache misses mean you need more
threads. A typical chicken and egg situation...

> It is very likely that the T1 shoves more instructions
> per clock cycle than a "high end, branch prediction, out of order"
> single or dual thread CPU.

Actually the T1 benchmarks are very disappointing: with twice the
number of cores and 8 times the number of threads, the T1 does not even
get close to Opteron or Woodcrest on heavily multithreaded
benchmarks...

It doesn't mean the whole idea is bad; I think the next generation will
do much better (and so will AMD/Intel). However claiming that an
in-order multithreaded CPU will easily outperform an out-of-order CPU
on total work done is total rubbish.

>>> I do not think that they are limited by Intel's vision...
>>> Also I pointed you at the new MIPS multithreading core.
>>> They certainly do not agree with you!
>>
>> If you do not understand the differences between cores like Itanium-2,
>> Pentium-4, Nehalem, Power5, Power6 (all 2-way multithreaded),
>> and cores like the T1, MIPS 34K and Ubicom (8+ -way threaded),
>> then you're not the expert on multithreading you claim to be.
>>
> You seem to want to slip into a discussion of which type
> of CPU will exhibit the highest MIPS rate for a single thread.
> That is trying to force open an already open door.

No, I wasn't talking about fast single thread performance. My point is
that it is a fallacy to think that adding more and more threads is
always better. Like so many other things, returns diminish while costs
increase. I claim it would be a waste to add more threads to an
out-of-order core (max frequency would go down, more cache would be
needed to reclaim the performance loss, so not cost effective).

Wilco
>>>> Multithreading on a high end general purpose CPU brings problems of its own.
>>>> Especially cache thrashing.
>>>
>>> Absolutely. The "solution" is to add more cache...
>>
>> No, the solution is to have more associativity in the cache.
>> Having 4GB of direct mapped cache will not help you when
>> two threads start using the same cache line.
>
> No. If you switch between threads in a fine-grained way you need to
> ensure that the working set of each thread stays in the cache. This
> means the cache needs to be large enough to hold the code and data of
> several threads. The problem is that L1 caches are often too small
> even for a single thread...

Again you do not read, or you may not be aware of the difference
between a direct mapped cache and a set-associative cache. And your
memory is failing as well, as I am proposing tightly coupled memory
without any cache for all threads except the "application" thread.

> Associativity is not an issue at all; most caches are already 4 or
> 8-way set associative. If it were feasible, a 4GB direct mapped cache
> would not thrash at all, as no threads would ever use the same line.

Direct mapped means that for each memory location there is exactly one
location in the cache which can hold that word. Since your cache is not
the same size as the primary memory, for each location in the cache
there is a large number of memory locations which will only fit into
that one cache location. If all threads happen to access memory
locations which map into the same cache location, you have terrible
cache thrashing. Read a book on caches...
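[The conflict-miss scenario described above is easy to demonstrate with
a toy model. The parameters - line size, line count, the two
addresses - are illustrative, not taken from any CPU in the thread.]

```python
# Direct-mapped conflict misses: two threads whose hot data maps to the
# same line index evict each other on every access, regardless of total
# cache size.

LINE = 32                  # bytes per line (assumed)
LINES = 1024               # number of lines -> a 32 KB direct-mapped cache

def index(addr):
    """Direct mapped: each address has exactly one possible slot."""
    return (addr // LINE) % LINES

cache = {}                 # slot index -> tag currently resident
misses = 0

# Thread A's buffer at 0x10000 and thread B's buffer exactly one cache
# size (32 KB) higher: both map to the same slot. Alternate accesses,
# as a fine-grained thread switch would.
for addr in [0x10000, 0x10000 + LINE * LINES] * 100:
    i, tag = index(addr), addr // (LINE * LINES)
    if cache.get(i) != tag:
        misses += 1        # the other thread's line is in the slot
        cache[i] = tag

print(misses)              # 200 misses out of 200 accesses: pure thrashing
```

A 2-way set-associative cache of the same size would hold both lines in
the same set and miss only twice, which is the associativity point
being argued.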
>>>> With an embedded core where you use tightly coupled high bandwidth
>>>> memory for most of the threads you do not have that problem
>>>
>>> Same solution: more fast on-chip memory.
>>
>> If you try to solve the general purpose symmetric multiprocessing
>> problem by putting the application memory on the chip, you are going
>> to run into significant problems.
>> You are beginning to get out of touch with reality, my dear friend.
>
> The current trend is clear: more on-chip memory, either as caches or
> tightly coupled memory. And FYI there are no problems with symmetric
> multiprocessing; people have been doing it for many years. Cache
> coherency is a well understood problem - even high-end ARMs have it.

And way too expensive, if you can solve it with a multithreaded core
connected to TCM.

>> In order for interrupts to be equivalent to multithreading,
>> where an instruction from a different interrupt can be selected
>> for execution every clock cycle, you have to add
>> additional constraints to your "interrupt" system.
>>
>> You have to have multiple register files and multiple program counters
>> in the system.
>> You have to add additional hardware to dynamically raise/lower priorities
>> in order to distribute instructions among the different interrupts.
>> Your "interrupt" driven system is likely to be mistaken for a
>> multithreading system.
>
> Is it really that difficult to understand? Let me explain it in a
> different way.
>
> Start with the MIPS 34K core, and assign 1 thread to the main task and
> the others to one interrupt each. Set the thread priority of the
> interrupt threads to infinite. At this point the CPU behaves exactly
> like an interrupt driven core that uses special registers on an
> interrupt (many do so, including ARM). If you can only ever run one
> thread, you can't mistake this for a multithreaded core.
>
> From the other perspective, in an interrupt driven core you typically
> associate a function with each interrupt. There is *nothing* that
> prevents a CPU from prefetching the first few instructions of some or
> all interrupt routines. In combination with the use of special
> registers to avoid save/restore overhead, this can significantly
> reduce interrupt latency.
>
> Now tell me what the difference is between the above 2 cases.
> Do you still believe interrupts and threads are not closely related?

Tell me how your interrupt system will make the pipeline execute
instructions for two interrupts A and B occurring at the same time as

A1:B1:A2:B2:A3:B3:A4:B4:A5:B5:A6:B6:A7:B7

instead of

B1:B2:B3:B4:B5:B6:B7:A1:A2:A3:A4:A5:A6:A7

which I believe is the normal way for interrupts to behave...
You may want to note the time until both threads/interrupts have
completed.
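[The two schedules above can be compared in a few lines. The
interesting number is when each handler's last instruction retires;
this is purely illustrative.]

```python
# Fine-grained interleave of two 7-instruction handlers vs. running
# them back to back, as in the A/B schedules above.

A = [f"A{i}" for i in range(1, 8)]
B = [f"B{i}" for i in range(1, 8)]

def interleaved(a, b):
    """A1:B1:A2:B2:... one instruction per thread per cycle."""
    out = []
    for x, y in zip(a, b):
        out += [x, y]
    return out

ilv = interleaved(A, B)
sequential = B + A         # B runs to completion, then A starts

print(ilv.index("A7") + 1, ilv.index("B7") + 1)         # 13 14
print(sequential.index("B7") + 1, sequential.index("A7") + 1)  # 7 14
```

Both ways, everything is done by cycle 14; interleaving finishes both
handlers at nearly the same time (13 and 14), while the sequential
schedule finishes B at cycle 7 but makes A wait until 14 before it even
completes, which is the worst-case latency Ulf is pointing at.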
>> Your way of discussion is way off; you ignore ALL arguments
>> and requests to prove your point, in favour of continued rambling...
>>
>> You need to show that the given example (multiple SPI slaves)
>> can be handled equally well by an *existing* interrupt driven system
>> as well as how it can be handled by an *existing* multithreaded
>> system like the zero context switch cost MIPS processor.
>
> Done that, please reread my old posts. I have also shown that any
> zero-cost context switch multithreaded CPU (if it exists) can behave
> like a zero-cost interrupt based CPU.

No, you have not shown that an interrupt based CPU can interleave
instructions the way a multithreaded core can. Your "zero" interrupt
latency core does not and will not exist.

> However you haven't shown a 40-thread CPU capable of running your
> example. Without one thread for each interrupt you need to use
> traditional interrupt handling rather than polling for events. Most
> embedded systems need more than the 8 interrupts/threads MIPS could
> handle, especially when combining 2 or more existing cores into 1 as
> you suggest.

Again you refrain from answering. I have shown the MIPS threaded core,
and running 40 threads on such a core is a simple extension of the
basic concept. If it makes you happier, then try to do it with the 8
threads you can fit into the MIPS core.

>>>> If you continue, that just proves that you are either ignorant or
>>>> not listening
>>>
>>> That kind of response is not helping your case. If you believe I'm
>>> wrong, then why not prove me wrong with some hard facts and data?
>>>
>> I already did.
>> I showed that there exists a zero context switch cost MIPS processor.
>
> No you didn't. The MIPS core can switch between threads on every
> cycle, but that doesn't imply a zero cost context switch on an
> interrupt.

I have never tried to prove that there are zero cost interrupts.
That is your idea, which will never fly.

>> You have not shown that there exist zero cost interrupts.
>
> There is no such thing as a zero-cost interrupt. There are a few CPUs
> that can respond extremely quickly (eg. the Transputer, Forth chips).
> However there is a tradeoff between the need for fast execution of
> normal code and fast interrupt response time.
>
>> To go back to the example:
>>
>> You have a fixed clock.
>> This is used by a number of SPI masters to provide data to your chip.
>> Your chip implements SPI slaves, and each SPI slave should run
>> in a separate task/thread or whatever.
>> The communication on each SPI slave channel is totally different
>> and should be developed by two teams which do not communicate
>> with each other and are not aware of each other.
>> Once per byte, the SPI data is written to memory and
>> an event flag register private to the thread/interrupt is written.
>>
>> The teams are aware of the execution environment, which in the
>> interrupt case is the RTOS and how interrupts are handled.
>>
>> Using one multithreaded and one interrupt driven processor, with
>> frequency scaled so the top level of MIPS is equivalent, show that
>> you can implement the SPI slaves.
>
> I've already described 2 ways of doing it, reread my old posts.
> If you think it is not possible, please explain why exactly you think
> that, then I'll explain the fallacy in your argument.

Done earlier in this post. You cannot interleave instructions at a
predetermined rate.

>>> The T1 has tiny caches and, unlike high-end out-of-order CPUs,
>>> stalls on a cache miss, so it requires more threads to keep going if
>>> one thread stalls. It is also designed for highly multithreaded
>>> workloads, so having more thread contexts means fewer context
>>> switches in software, which can be a big win on workloads running
>>> on UNIX/Windows (realtime OSes are far better at these things).
>>
>> It is the other way around. *Because* you have many threads you CAN
>> stall a thread on a cache miss without affecting the total throughput
>> of the CPU.
>
> For the same amount of hardware, more threads means less space for
> caches, so more cache misses. More cache misses mean you need
> more threads. A typical chicken and egg situation...

If you don't have a cache, you don't get any cache misses.

>> It is very likely that the T1 shoves more instructions
>> per clock cycle than a "high end, branch prediction, out of order"
>> single or dual thread CPU.
>
> Actually the T1 benchmarks are very disappointing: with twice the
> number of cores and 8 times the number of threads, the T1 does not
> even get close to Opteron or Woodcrest on heavily multithreaded
> benchmarks...
>
> It doesn't mean the whole idea is bad; I think the next generation
> will do much better (and so will AMD/Intel). However claiming that an
> in-order multithreaded CPU will easily outperform an out-of-order CPU
> on total work done is total rubbish.
>
>>>> I do not think that they are limited by Intel's vision...
>>>> Also I pointed you at the new MIPS multithreading core.
>>>> They certainly do not agree with you!
>>>
>>> If you do not understand the differences between cores like
>>> Itanium-2, Pentium-4, Nehalem, Power5, Power6 (all 2-way
>>> multithreaded), and cores like the T1, MIPS 34K and Ubicom
>>> (8+ -way threaded), then you're not the expert on multithreading
>>> you claim to be.
>>>
>> You seem to want to slip into a discussion of which type
>> of CPU will exhibit the highest MIPS rate for a single thread.
>> That is trying to force open an already open door.
>
> No, I wasn't talking about fast single thread performance. My point
> is that it is a fallacy to think that adding more and more threads is
> always better. Like so many other things, returns diminish while
> costs increase. I claim it would be a waste to add more threads to an
> out-of-order core (max frequency would go down, more cache would be
> needed to reclaim the performance loss, so not cost effective).

If you can replace a full core with a thread, you always win.

Obviously you are not going to take the time to go through the SPI
slave example which proves you wrong. I suspect the reason is that you
know you are wrong but are too stiff-headed to admit it, so I consider
any further discussion on this subject with you a total waste of time.

> Wilco

--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may,
or may not be shared by my employer Atmel Nordic AB
