Reply by Boudewijn Dijkstra●June 7, 2006
Op Mon, 05 Jun 2006 18:57:16 +0200 schreef David Hopwood
<david.nospam.hopwood@blueyonder.co.uk>:
> Boudewijn Dijkstra wrote:
>> Op Mon, 05 Jun 2006 16:04:33 +0200 schreef v4vijayakumar:
>>> v4vijayakumar wrote:
>>>
>>>> Just wanted to know. Is it possible for a microprocessor (so the
>>>> software) to improve their performance over period of time? Can
>>>> microprocessor improve its performance as they progress/continue to
>>>> work? If not, what we need to do to achieve this? could this be
>>>> implemented in silicon? Is it worth doing? Is there any research in
>>>> progress?
>>
>> Some virtual machines already improve program execution over time. I
>> don't see why a programmable microprocessor can't do the same.
>
> Because it would make the microprocessor excessively complex (if it isn't
> already), and bugs in hardware are considerably more inconvenient and
> costly to fix than bugs in software.
That doesn't mean that it can't be done reasonably.
> v4vijayakumar wrote:
> > Just wanted to know. Is it possible for a microprocessor (so the
> > software) to improve their performance over period of time?
>
> yes it's possible but the genius hasn't appeared that has implemented
> it practically yet. we're still waiting on our Einstein.
>
> what most people do NOT know about are the strides being made with
> fpga's, etc. here, the algorithms leave the realm of software only
> optimization and enter the world of hardware. the chip itself can
> change. with an fpga you can actually change the arrangement of logic
> gates on the chip to create new processors, or modify existing designs,
> etc, immediately, through software. so, in theory, a self-modifying
> program could discover that it has a number of tasks that could run in
> parallel so it modifies its own hardware to create exactly how many
> miniature processors it needs to do the job most efficiently.
>
> like i said, the capability is there, but the genius hasn't arrived yet
> who brings it all together. at the moment it is an untapped resource.
I do not think that it is just in search of a genius. It is also in
search of proper economics.
I have some familiarity with FPGA implementations of general purpose
microprocessors.
Overall, it looks like you give up at least 10X, and more like 100X,
in performance, 10X in power, and 10X in area, for an FPGA
implementation.
So your FPGA optimization of the existing microprocessor has to be
10-100X better - more performance, or better power - than an existing
full custom design of a general purpose microprocessor, to be
worthwhile. That's a big hurdle.
While you, I, or a recent college graduate can probably create an
FPGA design that is 2-4X more efficient in gate delays - even 100X for
special problems - doing so across the board is hard, and it is harder
still to overcome that initial hurdle.
And then... say that you have such an FPGA logic design that is 100X
better than the best full-custom general purpose microprocessor. Will
it still be better than a full custom implementation of the RTL you put
into the FPGA?
FPGAs only make sense if
a) you can gain performance by putting it into hardware,
b) enough to overcome the performance lost by putting the hardware in an FPGA,
c) the market is big enough to justify developing the FPGA,
d) but too small to justify full custom.
There are a lot of such markets for FPGAs, especially if development time is also an issue.
But not everything fits this bill.
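The arithmetic behind conditions (a)-(d) can be sketched as a back-of-envelope model. The overhead figure below is just the rough 10-100X penalty quoted in this post, not a measurement:

```python
# Rough break-even model for moving an algorithm from a full-custom CPU
# to an FPGA.  The fabric penalty is the ballpark 10-100X figure quoted
# above (an assumption for illustration, not a measured value).

def fpga_worthwhile(algorithmic_speedup, fpga_overhead):
    """True if the algorithm-level gain from custom FPGA logic
    survives the technology penalty of the FPGA fabric."""
    return algorithmic_speedup / fpga_overhead > 1.0

# A 4X smarter circuit does not overcome a 10X fabric penalty...
print(fpga_worthwhile(4, 10))     # False
# ...but a 100X special-purpose win over a 10X penalty does.
print(fpga_worthwhile(100, 10))   # True
```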
Reply by Roland●June 6, 2006
v4vijayakumar wrote:
> Anyhow, it will take microprocessor one machine cycle to execute "move
> r1 r2". My question is, is it possible for a microprocessor to make
> more than one moves in a machine cycle or to make one move in less than
> one machine cycle.
Of course it's possible (as already answered). If you aren't talking
about AI and processors improving themselves automatically (no
learning), maybe reconfigurable computing is what you're looking
for. You put a RAM-based FPGA next to your processor and configure it
to do special tasks on demand (e.g. if you need DES, you configure
it to act like dedicated encryption hardware). If the task is
recurring (you encrypt a huge file), you can ignore the impact of
reconfiguring the chip on overall performance, since the gain is far greater.
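The amortization argument above can be made concrete with a tiny model. The timing numbers below are made up purely for illustration:

```python
# Amortizing FPGA reconfiguration cost over a recurring task, with
# invented illustrative numbers (not measurements): reconfiguring costs
# a fixed one-off time, after which each block is processed faster than
# in software.

def total_time(n_blocks, per_block, reconfig=0.0):
    """Total time to process n_blocks, plus any one-off reconfiguration."""
    return reconfig + n_blocks * per_block

def breakeven_blocks(reconfig, sw_per_block, hw_per_block):
    """Blocks needed before reconfigure-then-accelerate beats software."""
    return reconfig / (sw_per_block - hw_per_block)

# Say reconfiguration costs 100 ms, software DES takes 1 ms per block,
# and the FPGA pipeline 0.1 ms per block.
print(breakeven_blocks(100.0, 1.0, 0.1))   # ~111 blocks to break even
print(total_time(1000, 1.0))               # software path: 1000.0 ms
print(total_time(1000, 0.1, 100.0))        # FPGA path: 200.0 ms
```

So for a "huge file" (many blocks) the one-off reconfiguration cost vanishes into the noise, exactly as the post argues.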
There are also ideas for processors that are reconfigurable in a
slightly different sense, so that their architecture is optimised for one
particular C program (e.g. MP3 decoding), and these could be dynamically
reconfigured at runtime (in an FPGA). But it is ultimately left to the
system designer to perform SW/HW matching optimisations. This could be
taken even further, to multiprocessor designs, C-program-to-system
synthesis, etc.
Check out NISC (No Instruction Set Computer) and other D. Gajski's ideas
on the future of digital design.
- R.
Reply by purple_stars●June 6, 2006
v4vijayakumar wrote:
> Just wanted to know. Is it possible for a microprocessor (so the
> software) to improve their performance over period of time? Can
> microprocessor improve its performance as they progress/continue to
> work? If not, what we need to do to achieve this? could this be
> implemented in silicon? Is it worth doing? Is there any research in
> progress?
yes it's possible but the genius hasn't appeared that has implemented
it practically yet. we're still waiting on our Einstein.
most people know that software can be optimized and made to run in
parallel, so enough about that. it's not been done well, but
eventually programs will self-modify and approach some sort of
theoretical limit. with increasing chip speeds the pressure has not
ever been there to make it happen, so nobody has bothered to do the
math, etc.
what most people do NOT know about are the strides being made with
fpga's, etc. here, the algorithms leave the realm of software only
optimization and enter the world of hardware. the chip itself can
change. with an fpga you can actually change the arrangement of logic
gates on the chip to create new processors, or modify existing designs,
etc, immediately, through software. so, in theory, a self-modifying
program could discover that it has a number of tasks that could run in
parallel so it modifies its own hardware to create exactly how many
miniature processors it needs to do the job most efficiently.
like i said, the capability is there, but the genius hasn't arrived yet
who brings it all together. at the moment it is an untapped resource.
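The software half of the idea above - discovering that a batch of tasks is independent and fanning it out across however many workers are available - can be sketched quite simply. (This is only a software analogue; actual FPGA self-reconfiguration is far more involved.)

```python
# A software analogue of the self-parallelizing idea above: detect that
# a batch of tasks is independent, then spread it across as many workers
# as the machine offers.
import os
from concurrent.futures import ThreadPoolExecutor

def independent_task(x):
    return x * x  # stand-in for any side-effect-free unit of work

tasks = range(8)
workers = os.cpu_count() or 1
with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(independent_task, tasks))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The hypothetical FPGA version would go one step further: instead of asking the OS for workers, it would synthesize exactly as many miniature processors as the task list needs.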
Reply by Tom Lucas●June 6, 2006
"Boudewijn Dijkstra" <boudewijn@indes.com> wrote in message
news:op.taohk3mcy6p7a2@ragnarok.lan...
>> or to make one move in less than
>> one machine cycle.
>
> Generally speaking, simple instructions can complete long before the cycle
> ends, but then have to wait for the next clock transition. Unless you
> have a clockless architecture, of course.
Has anyone worked with one of these clockless devices? I believe ARM Amulet
is one such device. I think one would be interesting from a tinkering point
of view but I'd hate to have to make one work in a system - particularly one
with time constraints.
Reply by Jasen Betts●June 6, 2006
On 2006-06-05, v4vijayakumar <v4vijayakumar@yahoo.com> wrote:
> Just wanted to know. Is it possible for a microprocessor (so the
> software) to improve their performance over period of time? Can
> microprocessor improve its performance as they progress/continue to
> work? If not, what we need to do to achieve this? could this be
> implemented in silicon? Is it worth doing? Is there any research in
> progress?
unless you're talking about overclocking, there's no gross after-market
speed boost available.
what is possible is to improve performance by changing the software (and
in some circumstances the microcode), but this generally requires outside
help - it's not an autonomous process.
another possibility is some sort of learning algorithm, but if efficiency is
important it's usually best to get it right beforehand...
--
Bye.
Jasen
Reply by ●June 6, 2006
Andrew Reilly <andrew-newspost@areilly.bpc-users.org> writes:
> On Mon, 05 Jun 2006 06:17:57 -0700, v4vijayakumar wrote:
>
> > Just wanted to know. Is it possible for a microprocessor (so the
> > software) to improve their performance over period of time? Can
> > microprocessor improve its performance as they progress/continue to
> > work?
>
> That is the premise of the field known as "dynamic recompilation" in
> general. I believe that IBM and HP both have results in that area, but
> the only two uses that have got much notice at a public level are
> Transmeta's Crusoe and Efficeon processors and (perhaps) Sun's HotSpot
> JVM.
I'm not sure I would include HotSpot and other software-based JITs
among technologies that allow a processor to improve at run-time.
Transmeta's chips do qualify, though, as they improve the performance
of the only architecturally visible instruction set (x86) at runtime.
> The degree to which performance actually improves, as a function of
> time, is probably less than what could be achieved with a profile-directed
> native compiler, but various deployment issues can prevent that from being
> an option.
Indeed. Transmeta decided not to make the native ISA visible, so
you cannot do offline compilation from x86 to it, nor can you
replace the architecturally visible ISA. IIRC, Transmeta made a
prototype that had the JVM as the visible ISA (translating at runtime to
the same native ISA as the x86 chip), but that chip was never sold.
As others have mentioned, caches, branch prediction and similar
technologies found in almost all modern CPUs will also make the
processor improve performance over time, though not as dramatically as
the Transmeta chips.
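The mechanism behind such runtime improvement can be sketched as a translation cache: the first time a code block is seen it is translated (slowly), and every repeat reuses the cached translation, so hot code gets cheaper over time. All names below are invented for the sketch, and Python's `compile`/`eval` stand in for real binary translation:

```python
# Toy illustration of the runtime-translation idea behind chips like
# Transmeta's: translate a "guest" code block once, cache the result,
# and answer repeats from the cache.
translation_cache = {}

def execute(block_source):
    """Translate a source block once, then run the cached translation."""
    fn = translation_cache.get(block_source)
    if fn is None:                       # cold: translate (the slow path)
        code = compile(block_source, "<block>", "eval")
        fn = lambda env: eval(code, {}, env)
        translation_cache[block_source] = fn
    return fn({"x": 21})                 # hot: just run the translation

print(execute("x * 2"))        # first run: translate + execute -> 42
print(execute("x * 2"))        # second run: cache hit -> 42
print(len(translation_cache))  # 1
```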
Torben
Reply by Ken Hagan●June 6, 2006
v4vijayakumar wrote:
> Yesterday, it took my code 1 second to sort 10k records. It is
> continuously running today, and it still requires 1 second to sort 10k
> records. My question is, why my code couldn't complete the task in less
> than 1 second, say 0.5 seconds?
That's an easier question to answer.
Unless there is some way of the processor knowing that it has been
given the same problem, it has to do all the work from scratch.
Unless the hardware has changed in the last 24 hours, this is going
to take the same amount of time.
You can cache results to undermine the first assumption. That generally
gives nearly all its improvements in the short term and actually loses
ground in the medium to long term as results fall out of the cache. It
is not a recipe for sustained long term improvement. In hardware, it is
very unlikely that any cache will have data from 24 hours ago.
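That result-caching behaviour - a fast second answer for a repeated input, followed by the answer eventually falling out of a bounded cache - can be sketched in a few lines:

```python
# Caching results, as described above: the first call does the work,
# a repeat of the same input is answered from the cache, and a bounded
# cache eventually evicts old answers -- the "falling out" noted above.
from functools import lru_cache

@lru_cache(maxsize=2)          # deliberately tiny, to show eviction
def sort_records(records):
    return tuple(sorted(records))

sort_records((3, 1, 2))        # miss: real work
sort_records((3, 1, 2))        # hit: answered from the cache
sort_records((5, 4))           # miss
sort_records((9, 8))           # miss: evicts the (3, 1, 2) entry
info = sort_records.cache_info()
print(info.hits, info.misses)  # 1 3
```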
In principle, I don't see why one couldn't build a machine that was able
to rewire itself and thereby undermine the second assumption. It would,
however, probably be out-performed by a "fixed" architecture on the vast
majority of problems, for any given cost of development.
However, if the machine was able to rewire itself any number of times,
then one might argue (philosophically) that the current state of the
wiring was merely a form of cached data. It would probably have a
longer lifetime than a conventional cache and take longer to adapt to
the problem in hand, but (timescales apart) it would give no new
benefits and in particular would still eventually lose whatever boost
it had once gained on a particular calculation.
A bit like people really.
Reply by Andrew Reilly●June 5, 2006
On Mon, 05 Jun 2006 06:17:57 -0700, v4vijayakumar wrote:
> Just wanted to know. Is it possible for a microprocessor (so the
> software) to improve their performance over period of time? Can
> microprocessor improve its performance as they progress/continue to
> work?
That is the premise of the field known as "dynamic recompilation" in
general. I believe that IBM and HP both have results in that area, but
the only two uses that have got much notice at a public level are
Transmeta's Crusoe and Efficeon processors and (perhaps) Sun's HotSpot
JVM. The degree to which performance actually improves, as a function of
time, is probably less than what could be achieved with a profile-directed
native compiler, but various deployment issues can prevent that from being
an option.
Cheers,
--
Andrew
Reply by ●June 5, 2006
v4vijayakumar wrote:
> Just wanted to know. Is it possible for a microprocessor (so the
> software) to improve their performance over period of time? Can
> microprocessor improve its performance as they progress/continue to
> work? If not, what we need to do to achieve this? could this be
> implemented in silicon? Is it worth doing? Is there any research in
> progress?
There are people who are using genetic algorithms to invent
improved circuit designs for FPGAs. So in theory a processor might
be able to optimize its own design, using this technique, when it is
not doing anything else. It could optimize both the software it
runs and the configuration that defines the hardware it runs on -
the hardware that, in turn, runs the software doing the optimizing.
But it probably would not be able to keep up with Moore's law and
optimize itself very far or very fast compared to the rest of the
industry. Although some circuit designers have reported more
optimization than they expected this way.
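A toy version of that genetic-algorithm circuit search can be sketched as follows: evolve a small netlist of NAND gates toward a target truth table (XOR here). This is purely illustrative - real evolvable-hardware work searches far richer structures - and every parameter below is an arbitrary choice:

```python
# Evolve a 4-gate NAND netlist toward the XOR truth table.
# Genome: for each gate, the indices of its two inputs, drawn from the
# primary inputs plus the outputs of earlier gates; the last gate is
# the circuit output.
import random

N_GATES = 4
TARGET = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}   # XOR

def evaluate(genome, a, b):
    signals = [a, b]
    for i1, i2 in genome:
        signals.append(1 - (signals[i1] & signals[i2]))  # NAND
    return signals[-1]

def fitness(genome):
    """Number of truth-table rows the circuit gets right (0..4)."""
    return sum(evaluate(genome, a, b) == out
               for (a, b), out in TARGET.items())

def random_genome():
    return [(random.randrange(2 + i), random.randrange(2 + i))
            for i in range(N_GATES)]

def mutate(genome):
    g = list(genome)
    i = random.randrange(N_GATES)
    g[i] = (random.randrange(2 + i), random.randrange(2 + i))
    return g

random.seed(1)
pop = [random_genome() for _ in range(30)]
for _ in range(60):                      # elitist selection + mutation
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]
best = max(pop, key=fitness)
print(fitness(best), "of 4 truth-table rows matched")
```

The same loop, pointed at a real bitstream and run during idle cycles, is essentially the self-optimizing processor imagined above - with all of that idea's economic caveats.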