>>>> Multithreading on a high end general purpose CPU gives problems of its
>>>> own.
>>>> Especially with cache thrashing.
>>>
>>> Absolutely. The "solution" is to add more cache...
>>
>> No, the solution is to have more associativity in the cache.
>> Having 4GB of direct mapped cache will not help you when
>> two threads start using the same cache line.
>
> No. If you switch between threads in a fine-grained way you need to ensure
> that the working set of each thread stays in the cache. This means the
> cache
> needs to be large enough to hold the code and data from several threads.
> The problem is that L1 caches are often too small even for a single
> thread...
Again you do not read, or you may not be aware of the difference
between a direct-mapped cache and a set-associative cache.
And your memory is failing as well: I am proposing tightly coupled
memory without any cache for all threads except the "application" thread.
> Associativity is not an issue at all, most caches are already 4 or 8-way
> set
> associative. If it were feasible, a 4GB direct mapped cache would not
> thrash
> at all as no threads would ever use the same line.
>
Direct mapped means that for each memory location there is exactly
one location in the cache which can hold that word.
Since your cache is not the same size as the primary memory,
each location in the cache serves a large number of memory
locations which can only go into that cache location.
If all threads happen to access memory locations which all
map into the same cache location, you get terrible cache
thrashing.
Read a book on caches...
>>>> With an embedded core where you use tightly coupled high bandwidth
>>>> memory
>>>> for most of the threads you do not have that problem
>>>
>>> Same solution: more fast on-chip memory.
>>
>> If you want to solve the general purpose symmetric multiprocessing
>> problem by putting the application memory on the chip, you are going
>> to run into significant problems.
>> You are beginning to get out of touch with reality, my dear friend.
>
> The current trend is clear: more on-chip memory either as caches or
> tightly coupled memory. And FYI there are no problems with symmetric
> multiprocessing, people have been doing it for many years. Cache
> coherency is a well understood problem, even high-end ARMs have it.
And way too expensive, when you can instead solve it with a multithreaded
core connected to TCM.
>> In order for interrupts to be equivalent to multithreading,
>> where a new instruction from a different interrupt can be selected
>> for execution every clock cycle, you have to add additional
>> constraints to your "interrupt" system.
>>
>> You have to have multiple register files and multiple program counters in
>> the system.
>> You have to add additional hardware to dynamically raise/lower priorities
>> in order to distribute instructions among the different interrupts.
>> Your "interrupt" driven system is likely to be mistaken for a
>> multithreading system.
>
> Is it really that difficult to understand? Let me explain it in a
> different way.
>
> Start with the MIPS 34k core, and assign 1 thread to the main task and
> the others to one interrupt each. Set the thread priority of the interrupt
> threads to infinite. At this point the CPU behaves exactly like an
> interrupt
> driven core that uses special registers on an interrupt (many do so,
> including ARM). If you can only ever run one thread, you can't mistake
> this
> for a multithreaded core.
>
> From the other perspective, in an interrupt-driven core you typically
> associate
> a function with each interrupt. There is *nothing* that prevents a CPU
> from
> prefetching the first few instructions of some or all interrupt routines.
> In
> combination with the use of special registers to avoid save/restore
> overhead, this can significantly reduce interrupt latency.
>
> Now tell me what the difference is between the above 2 cases.
> Do you still believe interrupts and threads are not closely related?
>
Tell me how your interrupt system will make the pipeline execute
instructions for two interrupts A and B occurring at the same time as
A1:B1:A2:B2:A3:B3:A4:B4:A5:B5:A6:B6:A7:B7
instead of
B1:B2:B3:B4:B5:B6:B7:A1:A2:A3:A4:A5:A6:A7
which I believe is the normal way for interrupts to behave...
You may want to note the time until both threads/interrupts have
completed in each case.
>> Your way of discussion is way off; you ignore ALL arguments
>> and requests to prove your point, in favour of continued rambling...
>>
>> You need to show that the given example (Multiple SPI slaves)
>> can be handled equally well by an *existing* interrupt driven system
>> as well as how it can be handled by an *existing* multithreaded
>> system like the zero context switch cost MIPS processor,
>
> Done that, please reread my old posts. I have also shown that any
> zero-cost context switch multithreaded CPU (if it exists) can behave
> like a zero-cost interrupt based CPU.
>
No, you have not shown that an interrupt based CPU can interleave
instructions in the way a multithreaded core can do it.
Your "zero" interrupt latency core does not and will not exist.
> However you haven't shown a 40-thread CPU capable of running your
> example. Without one thread for each interrupt you need to use traditional
> interrupt handling rather than polling for events. Most embedded systems
> need more than the 8 interrupts/threads MIPS could handle, especially
> when combining 2 or more existing cores into 1 as you suggest.
Again you refrain from answering.
I have shown the MIPS threaded core, and running 40 threads
on such a core is a simple extension of the basic concept.
If it makes you happier, then try to do it with the 8 threads you
can fit into the MIPS core.
>>>> If you continue, that just proves that you are either ignorant or not
>>>> listening
>>>
>>> That kind of response is not helping your case. If you believe I'm
>>> wrong,
>>> then why not prove me wrong with some hard facts and data?
>>>
>> I already did.
>> I showed that there exist zero context switch cost MIPS processor.
>
> No you didn't. The MIPS core can switch between threads on every
> cycle, but that doesn't imply zero cost context switch on an interrupt.
>
I have never tried to prove that there are zero-cost interrupts.
That is your idea, which will never fly.
>> You have not shown that there exist zero cost interrupts.
>
> There is no such thing as zero-cost interrupt. There are a few CPUs that
> can respond extremely quickly (eg. Transputer, Forth chips). However there
> is a tradeoff between the need for fast execution of normal code and fast
> interrupt response time.
>
>> If we go back to the example:
>>
>> You have a fixed clock.
>> This is used by a number of SPI masters to provide data to your chip.
>> Your chip implements SPI slaves and each SPI slave should run
>> in a separate task/thread or whatever.
>> The communication on each SPI slave channel is totally different
>> and should be developed by two teams which do not communicate
>> with each other and are not aware of each other.
>> Once per byte, the SPI data is written to memory and
>> an event flag register private to the thread/interrupt is written.
>>
>> They are aware of the execution environment, which in the interrupt case
>> is the RTOS and how interrupts are handled
>>
>> Using one multithreaded and one interrupt-driven processor, with the
>> frequency scaled so the peak MIPS rate is equivalent, show that you
>> can implement the SPI slave.
>
> I've already described 2 ways of doing it, reread my old posts.
> If you think it is not possible, please explain why exactly you think
> that,
> then I'll explain the fallacy in your argument.
Done earlier in this post. You cannot interleave instructions at a
predetermined rate.
>
>>> The T1 has tiny caches and stalls on a cache miss, unlike any other
>>> high-end out-of-order CPU, so they require more threads to keep going
>>> if one thread stalls. It is also designed for highly multithreaded
>>> workloads,
>>> so having more thread contexts means fewer context switches in software,
>>> which can be a big win on workloads running on UNIX/Windows (realtime
>>> OSes are far better at these things).
>>
>> It is the other way around. *Because* you have many threads you CAN
>> stall a thread on a cache miss, without affecting the total throughput
>> of the CPU.
>
> For the same amount of hardware, more threads means less space for
> caches, so more cache misses. More cache misses means you need
> more threads. Typical chicken and egg situation...
>
If you don't have a cache, you don't get any cache misses.
>> It is very likely that the T1 shoves more instructions
>> per clock cycle than a "high end, branch prediction, out of order" single
>> or dual thread CPU.
>
> Actually T1 benchmarks are very disappointing: with twice the number
> of cores and 8 times the number of threads the T1 does not even get
> close to Opteron or Woodcrest on heavily multithreaded benchmarks...
>
> It doesn't mean the whole idea is bad, I think the next generation will do
> much better (and so will AMD/Intel). However claiming that an in-order
> multithreaded CPU will easily outperform an out-of-order CPU on total
> work done is total rubbish.
>
>>>> I do not think that they are limited by Intel's vision...
>>>> Also I pointed you at the new MIPS Multithreading core.
>>>> They certainly do not agree with You!
>>>
>>> If you do not understand the differences between cores like Itanium-2,
>>> Pentium-4, Nehalem, Power5, Power6 (all 2-way multithreaded),
>>> and cores like the T1, MIPS34K and Ubicom (8+ -way threaded),
>>> then you're not the expert on multithreading you claim to be.
>>>
>> You seem to want to slip into a discussion of which type
>> of CPU will exhibit the highest MIPS rate for a single thread.
>> That is trying to force open an already open door.
>
> No, I wasn't talking about fast single thread performance. My point is
> that
> it is a fallacy to think that adding more and more threads is always
> better.
> Like so many other things, returns diminish while costs increase. I claim
> it would be a waste to add more threads on an out-of-order core (max
> frequency would go down, more cache needed to reclaim performance
> loss, so not cost effective).
>
If you can replace a full core with a thread you always win.
Obviously you are not going to take the time to go through the
SPI slave example which proves you wrong.
I suspect the reason is that you know you are wrong
but are too stiff-headed to admit it,
so I consider any future discussion on this subject with You a total waste
of time.
> Wilco
>
--
Best Regards,
Ulf Samuelsson
This is intended to be my personal opinion which may,
or may not be shared by my employer Atmel Nordic AB