
microprocessor that improves as they progress

Started by v4vijayakumar June 5, 2006
v4vijayakumar wrote:
> Just wanted to know. Is it possible for a microprocessor (so the
> software) to improve their performance over period of time? Can
> microprocessor improve its performance as they progress/continue to
> work? If not, what we need to do to achieve this? could this be
> implemented in silicon? Is it worth doing? Is there any research in
> progress?
MPUs have generally done this. Each new PC microprocessor, for instance,
usually performs much better than the prior model. However, software
performance is also a variable, not a constant. At present, software
performance continually decreases, so the increasing processor
performance seems to result in us simply treading water, but having to
pay money for it nonetheless. Of course, there is also software which is
inherently flawed, which requires that an entire industry exist to
produce software to compensate for the inherent flaws of the first
software. What if all that capital were used to improve the first
software instead?

But what you were probably actually asking is whether a piece of
completed hardware can itself improve over time. This is possible:
reconfigurable systems. FPGA-implemented CPUs could do this. However, it
seems likely that a more beneficial situation would be to have highly
optimized and fixed MPU hardware at the current state of the art,
coupled with continually improving software efficiency and compiler
quality for that particular generation of MPU. That would afford some
measure of improvement for those not ready to update to a new CPU. But
the broken software market doesn't work that way. After over 13 years, I
still have yet to see a better word processor than Word Perfect 5.1 for
DOS. That is really sad.

--
Good day!

________________________________________
Christopher R. Carlen
Principal Laser & Electronics Technologist
Sandia National Laboratories CA USA
crcarleRemoveThis@BOGUSsandia.gov
NOTE, delete texts: "RemoveThis" and "BOGUS" from email address to reply.
v4vijayakumar wrote:
> Just wanted to know. Is it possible for a microprocessor (so the
> software) to improve their performance over period of time? Can
> microprocessor improve its performance as they progress/continue to
> work? If not, what we need to do to achieve this? could this be
> implemented in silicon? Is it worth doing? Is there any research in
> progress?
Luhan wrote:
> What you are referring to is known as either AI or heuristics - software
> that learns over time by interacting with its environment.
>
> Luhan
Boudewijn Dijkstra wrote:
> Some virtual machines already improve program execution over time. I
> don't see why a programmable microprocessor can't do the same.
v4vijayakumar wrote:
> no. It is not about AI/heuristics/machine learning.
jon@beniston.com wrote:
> Sure, by adding a cache.
>
> Jon
v4vijayakumar wrote:
> neither cache memory.
>
> Anyhow, it will take the microprocessor one machine cycle to execute
> "move r1 r2". My question is, is it possible for a microprocessor to
> make more than one move in a machine cycle, or to make one move in less
> than one machine cycle?
Boudewijn Dijkstra wrote:
> Yes. Many architectures provide a 'swap' instruction that does just
> that. Generally speaking, simple instructions can complete long before
> the cycle ends, but then have to wait for the next clock transition.
> Unless you have a clockless architecture, of course.
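For illustration, a minimal C sketch of the swap idea, assuming an x86
target and GCC-style inline assembly (the asm is purely illustrative; a
compiler may well emit something equivalent on its own):

    #include <stdio.h>

    int main(void)
    {
        int a = 1, b = 2;

    #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
        /* A single exchange instruction: the hardware swaps both values
         * in one operation instead of three moves through a temporary. */
        __asm__("xchg %0, %1" : "+r"(a), "+r"(b));
    #else
        /* Portable fallback: three moves through a temporary. */
        int tmp = a; a = b; b = tmp;
    #endif

        printf("a=%d b=%d\n", a, b);  /* prints: a=2 b=1 */
        return 0;
    }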
I don't know what a 'programmable microprocessor' or a 'clockless
architecture' can do in this scenario; will certainly look into this.
Thanks.

Tom Lucas wrote:
> Perhaps a more relevant question is "Why has my code suddenly developed
> timing problems?"
No. Why?

Yesterday, it took my code 1 second to sort 10k records. It is
continuously running today, and it still requires 1 second to sort 10k
records. My question is, why couldn't my code complete the task in less
than 1 second, say 0.5 seconds?

Julian Kain wrote:
> <snip> ... just a matter of architecture and parallelism. ... </snip>
Agree, but parallelism can only be used if the processor is idle (in a
single-processor environment), or we require more processors/processing
units (in a multi-processor environment).

Chris Carlen wrote:
> <snip> ... software which is inherently flawed ... </snip>
It is not necessary to think of software because it is a layer above the
microprocessor.
Boudewijn Dijkstra wrote:
> On Mon, 05 Jun 2006 16:04:33 +0200, v4vijayakumar wrote:
>> v4vijayakumar wrote:
>>
>>> Just wanted to know. Is it possible for a microprocessor (so the
>>> software) to improve their performance over period of time? Can
>>> microprocessor improve its performance as they progress/continue to
>>> work? If not, what we need to do to achieve this? could this be
>>> implemented in silicon? Is it worth doing? Is there any research in
>>> progress?
>
> Some virtual machines already improve program execution over time. I
> don't see why a programmable microprocessor can't do the same.
Because it would make the microprocessor excessively complex (if it
isn't already), and bugs in hardware are considerably more inconvenient
and costly to fix than bugs in software. There are already enough bugs
in current microprocessors (e.g. see
<http://www.intel.com/design/mobile/specupdt/309222.htm>) without
making it worse.

--
David Hopwood <david.nospam.hopwood@blueyonder.co.uk>
In article <1149526078.344384.26160@h76g2000cwa.googlegroups.com>, 
v4vijayakumar@yahoo.com says...
> <snip> ... (full thread quoted above) ... </snip>
>
> Agree, but parallelism can only be used if the processor is idle (in a
> single-processor environment), or we require more processors/processing
> units (in a multi-processor environment).
No, super-scalar processors find parallelism in your code and take advantage of it by executing more than one instruction at a time.
> Chris Carlen wrote:
> > <snip> ... software which is inherently flawed ... </snip>
>
> It is not necessary to think of software because it is a layer above
> the microprocessor.
Huh? I'll have to remember that one next time I want to throw a dig at
the programmer types.

--
Keith
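To make the superscalar point above concrete, a small illustrative C
sketch (the functions are hypothetical, not from the thread): two
independent accumulators give the core instruction-level parallelism to
exploit, while a single accumulator forms a serial dependency chain.

    /* Two independent accumulators: a superscalar core can issue both
     * adds in the same cycle because neither depends on the other. */
    int sum_two_accumulators(const int *x, int n)
    {
        int s0 = 0, s1 = 0;
        int i;
        for (i = 0; i + 1 < n; i += 2) {
            s0 += x[i];         /* no data dependency between these, */
            s1 += x[i + 1];     /* so they can execute in parallel   */
        }
        if (i < n)
            s0 += x[i];         /* leftover element when n is odd */
        return s0 + s1;
    }

    /* One accumulator: every add needs the previous result, so even a
     * wide superscalar core must run the chain one add at a time. */
    int sum_one_accumulator(const int *x, int n)
    {
        int s = 0;
        for (int i = 0; i < n; i++)
            s += x[i];
        return s;
    }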
jon@beniston.com wrote:
> v4vijayakumar wrote:
> > Just wanted to know. Is it possible for a microprocessor (so the
> > software) to improve their performance over period of time?
>
> Sure, by adding a cache.
Or maybe even branch prediction...

Jon
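A small illustrative sketch, in C, of why branch prediction makes the
same processor faster on predictable data; the function and its names
are hypothetical:

    #include <stddef.h>

    /* Counts values above a threshold. On sorted input the branch is
     * highly predictable (a long run of "not taken" followed by a long
     * run of "taken"), so the predictor is almost always right. On
     * random input it mispredicts about half the time, and the very
     * same code runs measurably slower on the same processor. */
    long count_above(const int *x, size_t n, int threshold)
    {
        long count = 0;
        for (size_t i = 0; i < n; i++) {
            if (x[i] > threshold)   /* the branch the predictor learns */
                count++;
        }
        return count;
    }

Running count_above over the same array sorted versus shuffled is the
classic way to see the effect.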
v4vijayakumar wrote:
> Just wanted to know. Is it possible for a microprocessor (so the
> software) to improve their performance over period of time? Can
> microprocessor improve its performance as they progress/continue to
> work? If not, what we need to do to achieve this? could this be
> implemented in silicon? Is it worth doing? Is there any research in
> progress?
There are people who are using genetic algorithms to invent improved
circuit designs for FPGAs. So in theory a processor might be able to use
this technique to optimize its own design when it is not doing anything
else. It could optimize both the software it was running and the
software that defines the hardware that defines it, which in turn runs
the software that optimizes itself. But it probably would not be able to
keep up with Moore's law and optimize itself very far or very fast
compared to the rest of the industry, although some circuit designers
have reported more optimization than they expected this way.
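For a feel of the technique, a toy genetic-algorithm loop in C. The
fitness function simply counts one-bits as a stand-in for evaluating a
real circuit; everything here is illustrative and not from any actual
evolvable-hardware tool:

    #include <stdlib.h>
    #include <string.h>

    /* Toy genetic algorithm over 32-bit "designs". In evolvable-hardware
     * work the genome would configure an FPGA and fitness would be a
     * measured property of the resulting circuit. */
    #define POP  32
    #define GENS 200

    static int fitness(unsigned g)      /* toy fitness: count of 1 bits */
    {
        int n = 0;
        for (; g; g &= g - 1)
            n++;
        return n;
    }

    static unsigned mutate(unsigned g)  /* flip one random bit */
    {
        return g ^ (1u << (rand() % 32));
    }

    static unsigned crossover(unsigned a, unsigned b)
    {
        unsigned mask = (1u << (rand() % 32)) - 1;  /* one-point crossover */
        return (a & mask) | (b & ~mask);
    }

    static unsigned pick_parent(const unsigned *pop)
    {
        /* Tournament selection: the fitter of two random individuals. */
        unsigned a = pop[rand() % POP], b = pop[rand() % POP];
        return fitness(a) > fitness(b) ? a : b;
    }

    unsigned evolve(void)
    {
        unsigned pop[POP], next[POP];
        for (int i = 0; i < POP; i++)
            pop[i] = (unsigned)rand();

        for (int gen = 0; gen < GENS; gen++) {
            for (int i = 0; i < POP; i++)
                next[i] = mutate(crossover(pick_parent(pop),
                                           pick_parent(pop)));
            memcpy(pop, next, sizeof pop);
        }

        unsigned best = pop[0];         /* return the fittest survivor */
        for (int i = 1; i < POP; i++)
            if (fitness(pop[i]) > fitness(best))
                best = pop[i];
        return best;
    }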
On Mon, 05 Jun 2006 06:17:57 -0700, v4vijayakumar wrote:

> Just wanted to know. Is it possible for a microprocessor (so the
> software) to improve their performance over period of time? Can
> microprocessor improve its performance as they progress/continue to
> work?
That is the premise of the field known as "dynamic recompilation" in
general. I believe that IBM and HP both have results in that area, but
the only two uses that have got much notice at a public level are
Transmeta's Crusoe and Efficeon processors and (perhaps) Sun's HotSpot
JVM.

The degree to which performance actually improves, as a function of
time, is probably less than what could be achieved with a
profile-directed native compiler, but various deployment issues can
prevent that from being an option.

Cheers,

--
Andrew
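As a rough illustration of the hot-spot mechanism (a toy model, not
Transmeta's or Sun's actual implementation), here a function-pointer
swap stands in for recompilation of code that has proven hot:

    /* Toy model of dynamic recompilation: execute through a slow
     * generic path, count executions, and switch to a fast specialized
     * path once the code proves hot. The "compile" step is faked here
     * with a function-pointer swap. */

    #define HOT_THRESHOLD 1000

    static long generic_square(long x)   { return x * x; }  /* "interpreted" */
    static long optimized_square(long x) { return x * x; }  /* "recompiled"  */

    static long (*square_impl)(long) = generic_square;
    static long call_count;

    long run_square(long x)
    {
        if (square_impl == generic_square && ++call_count == HOT_THRESHOLD) {
            /* A real system would use the profile gathered so far to
             * generate optimized native code for this block. */
            square_impl = optimized_square;
        }
        return square_impl(x);
    }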
v4vijayakumar wrote:
> Yesterday, it took my code 1 second to sort 10k records. It is
> continuously running today, and it still requires 1 second to sort 10k
> records. My question is, why couldn't my code complete the task in less
> than 1 second, say 0.5 seconds?
That's an easier question to answer. Unless there is some way of the
processor knowing that it has been given the same problem, it has to do
all the work from scratch. Unless the hardware has changed in the last
24 hours, this is going to take the same amount of time.

You can cache results to undermine the first assumption. That generally
gives nearly all its improvements in the short term and actually loses
ground in the medium to long term as results fall out of the cache. It
is not a recipe for sustained long-term improvement. In hardware, it is
very unlikely that any cache will have data from 24 hours ago.

In principle, I don't see why one couldn't build a machine that was able
to rewire itself and thereby undermine the second assumption. It would,
however, probably be out-performed by a "fixed" architecture on the vast
majority of problems, for any given cost of development.

However, if the machine was able to rewire itself any number of times,
then one might argue (philosophically) that the current state of the
wiring was merely a form of cached data. It would probably have a longer
lifetime than a conventional cache and take longer to adapt to the
problem in hand, but (timescales apart) it would give no new benefits
and in particular would still eventually lose whatever boost it had once
gained on a particular calculation. A bit like people really.
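A minimal C sketch of the result-caching idea described above; the
one-entry cache and the function names are hypothetical:

    /* Result caching: the processor cannot know it has been given the
     * same problem before, but software can remember. A one-entry
     * cache for an expensive function; a real cache would hold many
     * entries and evict old ones, which is exactly how gains "fall
     * out" again over time. */
    long expensive(long n)
    {
        long r = 0;                     /* stand-in for real work */
        for (long i = 1; i <= n; i++)
            r += i;
        return r;
    }

    long cached_expensive(long n)
    {
        static long last_n = -1, last_r;
        if (n == last_n)
            return last_r;              /* hit: skip the work entirely */
        last_n = n;
        last_r = expensive(n);          /* miss: do the work, remember it */
        return last_r;
    }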