
The 2005 ISSCC

Started by Xenon January 20, 2005
"Xenon" <xenonxbox2@xboxnext.com> writes:

> Cell Architecture Explained: Introduction
[...]
> 250 GFLOPS (Billion Floating Point Operations per Second)
[...]
> 6.4 Gigabit / second off-chip communication
A little bit memory starved, I guess -- or do you have an application
that performs in the neighborhood of fifty FLOPS per *bit*?

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants
Ketil Malde <ketil+news@ii.uib.no> writes:

>"Xenon" <xenonxbox2@xboxnext.com> writes:
>> Cell Architecture Explained: Introduction
> [...]
>> 250 GFLOPS (Billion Floating Point Operations per Second)
> [...]
>> 6.4 Gigabit / second off-chip communication
>
>A little bit memory starved, I guess -- or do you have an application
>that performs in the neighborhood of fifty FLOPS per *bit*?
The claim made in the Cell paper is that there are 8 Rambus XDR channels
at 3.2 GB/s each, for a total of 25.6 GB/s (I could have sworn XDR was
supposed to be 6.4 GB/s, but maybe that was down the road).  That
6.4 GB/s off-chip communication is the HyperTransport equivalent (and
also supposed to be per pin; can't remember how wide that was supposed
to be).

Not that this gets it near 250 GFLOPS usable for problems larger than a
few megabytes.

--
Douglas Siebert                      dsiebert@excisethis.khamsin.net

"They that can give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety"  -- Thomas Jefferson
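For concreteness, here is the arithmetic behind both posts, using only
the figures quoted above (a quick back-of-the-envelope sketch in C, not
a benchmark; the quoted peak and bandwidth numbers are the assumptions):

#include <stdio.h>

int main(void)
{
    double gflops    = 250.0;    /* claimed peak throughput, GFLOP/s       */
    double link_gbit = 6.4;      /* claimed off-chip link, Gbit/s          */
    double xdr_gbyte = 8 * 3.2;  /* 8 Rambus XDR channels at 3.2 GB/s each */

    /* FLOPs the chip must perform per unit of data fetched to stay busy */
    printf("FLOPs per off-chip bit : %.1f\n", gflops / link_gbit);
    printf("FLOPs per XDR byte     : %.1f\n", gflops / xdr_gbyte);
    return 0;
}

Either way you slice it, the chip has to reuse each fetched operand many
times over before it can approach its claimed peak.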
In alt.games.video.xbox CEO Gargantua <gamers@r.lamers> wrote:

> Moore's Law is dead, and it's taking Wintel down with it.
But is it even needed anymore?  Think about it -- how many people *NEED*
that 3+ GHz processor?  And the people who actually do need that sort of
power are going with SMP or parallel processing systems already.  Even
Doom 3 is more concerned with the processor and memory of the graphics
card, not with your main CPU.

If anything, the days of the single-CPU system are numbered.  If you
can't make the individual chip go faster, why not throw more chips at
the problem?  This won't lead to a speed-up across the board, but
imagine being able to dedicate a processor to each application you're
running on your system.  BeOS used to be able to do this, and even let
you set how many processors you wanted dedicated to each process if you
wanted.  Just don't set it to 0 CPUs for the OS... bad things would
happen ;)  (A rough modern equivalent is sketched below.)

As for margins on its chips, I wouldn't worry about Intel just yet.  If
they start doing multiple-core processors (one chip, 2 or more CPUs),
that will push their prices along nicely.  After all, there's only so
much room on a standard desktop ATX motherboard.
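A minimal sketch of that per-process CPU dedication on Linux, assuming
the GNU sched_setaffinity() extension is available (the choice of CPU 1
is arbitrary; this only illustrates the idea, not the BeOS interface):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(1, &set);                /* dedicate this process to CPU 1 */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("now restricted to CPU 1\n");
    return 0;
}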
> In alt.games.video.xbox CEO Gargantua <gamers@r.lamers> wrote:
This high authority on the issue spake:
> Moore's Law is dead, and it's taking Wintel down with it.
Pantagruel sez:

Then they'll have to start making the software faster.  Won't that be a
hoot!

http://www.centaurgalleries.com/Art/00077/I04274-02-500h.jpg
Xenon <xenonxbox2@xboxnext.com> wrote:
>Cell Architecture Explained: Introduction
A discussion at Joystiq points out that Mr. Blachford, from whom you
stole the article, also explained how to make an antigravity device, and
how light reduces in frequency the further it travels...

http://www.blachford.info/quantum/gravity.html
http://www.blachford.info/quantum/dimeng.html

Follow-ups set to rgv.sony; apologies for this idiot crossposter to the
rest of you all.

--
"It's only now, with "Blinded by the Right," that conservatives have
grown a sense of journalistic skepticism when it comes to [David]
Brock."  - "Fight or Flight", David Talbot, Salon, Apr 17 2002
"Ronald H. Nicholson Jr." <rhn@mauve.rahul.net> wrote in message
news:cssfcg$fva$1@blue.rahul.net...
> In article <name99-42B264.17530821012005@localhost>,
>  Maynard Handley <name99@name99.org> wrote:
> >Bottom line is that this thing doesn't resemble any traditional CPU and
> >is therefore a godawful match to existing languages, compilers and
> >algorithms.
>
> GPU shader algorithms and languages?  Common DSP library/toolbox calls?
>
> --
> Ron Nicholson    rhn AT nicholson DOT com   http://www.nicholson.com/rhn/
> #include <canonical.disclaimer>      // only my own opinions, etc.
???  You do realize that this will still have a GPU, and as a matter of
fact Sony gave up the idea of doing it themselves and gave the contract
to nVidia (my guess was cost vs. bowing to requests from the SW
community).  Essentially that means all your basic T&L (including your
shader algorithms) are still done on the GPU.

J
"Ketil Malde" <ketil+news@ii.uib.no> wrote in message
news:egu0p7f6aq.fsf@ii.uib.no...
> "Xenon" <xenonxbox2@xboxnext.com> writes: > > > Cell Architecture Explained: Introduction > [...] > > 250 GFLOPS (Billion Floating Point Operations per Second) > [...] > > 6.4 Gigabit / second off-chip communication > > A little bit memory starved, I guess -- or do you have an application > that performs in the neighborhood of fifty FLOPS per *bit*? > > -kzm > -- > If I haven't seen further, it is by standing in the footprints of giants
GPUs do.  Stages and stages of pure logic circuitry.  On this beast,
every CPU/APU would have to be executing out of cache, of course.
Memory starved is par for the course -- one of those issues that
increases as we move forward.  (A sketch of the resulting programming
style follows.)

J
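What "executing out of cache" looks like in practice, as a rough sketch:
stream the data through a buffer small enough to stay resident, and do
many FLOPs per byte fetched.  TILE, fetch_tile, and process_tile are
made-up names standing in for a local-store buffer, an off-chip fetch
(a DMA on Cell), and the per-tile compute:

#include <stddef.h>
#include <string.h>

#define TILE 4096   /* elements per tile; sized to fit in local memory */

/* stand-in for an off-chip fetch (on Cell, an SPE DMA transfer) */
static void fetch_tile(float *dst, const float *src, size_t n)
{
    memcpy(dst, src, n * sizeof *dst);
}

/* heavy compute on a cache-resident tile: many FLOPs per byte fetched */
static void process_tile(float *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        for (int k = 0; k < 100; k++)      /* reuse each element 100x */
            buf[i] = buf[i] * 1.0001f + 0.5f;
}

void process_stream(const float *src, size_t total)
{
    float local[TILE];                 /* stand-in for cache/local store */
    for (size_t i = 0; i < total; i += TILE) {
        size_t n = (total - i < TILE) ? total - i : TILE;
        fetch_tile(local, src + i, n); /* pay the bandwidth cost once    */
        process_tile(local, n);        /* then compute out of cache      */
    }
}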
In article <ct3tks$8gj$1@news01.intel.com>,
Jeremy Williamson <jeremiah.d.williamson@NOSPAMintel.com> wrote:
> >"Ronald H. Nicholson Jr." <rhn@mauve.rahul.net> wrote in message >news:cssfcg$fva$1@blue.rahul.net... >> In article <name99-42B264.17530821012005@localhost>, >> Maynard Handley <name99@name99.org> wrote: >> >Bottom line is that this thing doesn't resemble any traditional CPU and >> >is therefore a godawful match to existing languages, compilers and >> >algorithms. >> >> GPU shader algorithms and languages? Common DSP library/toolbox calls?
...
>You do realize that this will still have a GPU, and as a matter of fact
>Sony gave up the idea of doing it themselves and gave the contract to
>nVidia (my guess was cost vs. bowing to requests from the SW community).
>Essentially that means all your basic T&L (including your shader
>algorithms) are still done on the GPU.
Yes, but aren't people experimenting with using shader and DSP languages
and tools for stuff that has nothing to do with the workstation display
or audio output?

The question is whether this software is commercially interesting, and
whether this Cell device is more suited for this stuff than the GPUs on
which these algorithms were developed.

IMHO. YMMV.
--
Ron Nicholson    rhn AT nicholson DOT com   http://www.nicholson.com/rhn/
#include <canonical.disclaimer>      // only my own opinions, etc.
"Ronald H. Nicholson Jr." <rhn@mauve.rahul.net> wrote in message
news:ctf9ic$n4c$1@blue.rahul.net...
> In article <ct3tks$8gj$1@news01.intel.com>,
> Jeremy Williamson <jeremiah.d.williamson@NOSPAMintel.com> wrote:
> >
> >"Ronald H. Nicholson Jr." <rhn@mauve.rahul.net> wrote in message
> >news:cssfcg$fva$1@blue.rahul.net...
> >> In article <name99-42B264.17530821012005@localhost>,
> >>  Maynard Handley <name99@name99.org> wrote:
> >> >Bottom line is that this thing doesn't resemble any traditional CPU and
> >> >is therefore a godawful match to existing languages, compilers and
> >> >algorithms.
> >>
> >> GPU shader algorithms and languages?  Common DSP library/toolbox calls?
> ...
> >You do realize that this will still have a GPU, and as a matter of fact Sony
> >gave up the idea of doing it themselves and gave the contract to nVidia (my
> >guess was cost vs. bowing to requests from the SW community).  Essentially
> >that means all your basic T&L (including your shader algorithms) are still
> >done on the GPU.
>
> Yes, but aren't people experimenting with using shader and DSP languages
> and tools for stuff that has nothing to do with the workstation display
> or audio output?
>
> The question is whether this software is commercially interesting, and
> whether this Cell device is more suited for this stuff than the GPUs on
> which these algorithms were developed.
>
> IMHO. YMMV.
> --
> Ron Nicholson    rhn AT nicholson DOT com   http://www.nicholson.com/rhn/
> #include <canonical.disclaimer>      // only my own opinions, etc.
Yes, especially since GPUs are slowly becoming more generically
programmable (due to the pixel and vertex shaders).  AIUI, the next gen
is likely to have primitive branching -- today's shaders have to fake
conditionals, as sketched below.  There was a full-day workshop on
porting apps to the GPU at SIGGRAPH last year; there was even a
published paper on someone porting a database to the GPU.

But what I was trying to say is that the Cell is not a GPU, nor is it
likely to take away many of the tasks currently farmed out to today's
GPUs (T&L).

Jeremy
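For reference, the workaround branch-free shader hardware relies on:
evaluate both sides of a conditional and blend with a 0/1 mask instead
of jumping.  Shown here in plain C just to illustrate the idea
(select_no_branch is a made-up name; real shaders express this as a
multiply-add on the mask):

#include <stdio.h>

static float select_no_branch(float cond, float a, float b)
{
    float mask = (float)(cond > 0.0f);    /* step(0, cond): 1.0 or 0.0 */
    return mask * a + (1.0f - mask) * b;  /* both sides get evaluated  */
}

int main(void)
{
    printf("%g %g\n", select_no_branch( 1.0f, 2.0f, 3.0f),   /* 2 */
                      select_no_branch(-1.0f, 2.0f, 3.0f));  /* 3 */
    return 0;
}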
