Farcall, LOL. I love it! If I can't get it into this project, I'll have to
start another just to use it.
>
> It depends on the project, but another viable solution would be to split
> into two controllers. The stable low level stuff, stays in 80C51(s), and
> the new stuff goes into some 32 bit core.
> uC these days are so cheap, they cost less than the packing, or cables,
> in many projects.
>
Yepp, that's another column in my KT matrix.
Cheers,
Alf
Reply by Jim Granville●August 11, 2006
Alf Katz wrote:
> "Hans-Bernhard Broeker" <broeker@physik.rwth-aachen.de> wrote in message
> news:4k3o9aFa64eaU1@news.dfncis.de...
>
>>Hold it a moment, please. Just so we're 101% perfectly clear about
>>this: you've outgrown a 512 KiB super-8051 system writing all the code
>>in *assembler*? I'll hand it to you, you must have had a team of
>>brave people work on such a monster. I've maintained a 96 KiB
>>code-size, Super-8051 project done entirely in assembly, and
>>considered that to be sitting on the fence, leaning dangerously towards
>>"mission impossible" territory.
>>
>
> You're right. And he is a team of brave people (albeit a small one). You
> can see how C *might* just improve maintainability.
Is this 512KB of CODE? The only time I've seen such large 8051
footprints has been where extensive menus, and multiple languages, were
included.
That stuff can go into cheap serial data-flash these days; code memory
is too expensive for that.
How much RAM does this system need, storing what-sized variables?
-jg
Reply by Alf Katz●August 11, 2006
"Hans-Bernhard Broeker" <broeker@physik.rwth-aachen.de> wrote in message
news:4k3o9aFa64eaU1@news.dfncis.de...
>
> Hold it a moment, please. Just so we're 101% perfectly clear about
> this: you've outgrown a 512 KiB super-8051 system writing all the code
> in *assembler*? I'll hand it to you, you must have had a team of
> brave people work on such a monster. I've maintained a 96 KiB
> code-size, Super-8051 project done entirely in assembly, and
> considered that to be sitting on the fence, leaning dangerously towards
> "mission impossible" territory.
>
You're right. And he is a team of brave people (albeit a small one). You
can see how C *might* just improve maintainability.
> I'd be careful with that expected speedup. Just ask yourself: if it
> were realistic to render that IP core Dallas bought for 100 MHz clock
> frequency, what kept Dallas from doing that in their chips (keeping in
> mind they're going ASIC, so they should be faster, not slower than an
> FPGA)? Would DalSemi really shoot their own foot just like that?
>
The speedup is mainly through demuxing the external busses; it's actually
only running at 40MHz.
>
> There is, theoretically at least, a third option, albeit a
> *thoroughly* nasty one: an 8051 machine code interpreter, running on
> whatever CPU you can find that is fast enough to pull off that stunt.
>
Yepp, did the numbers on this one, too. Have done it before when a
processor disappeared under us some 20+ years ago.
Cheers,
Alf
Reply by Jim Granville●August 11, 2006
Alf Katz wrote:
>> May I know the exact reason for this conversion?
>>
>>Best Regards,
>>Vivekanandan M
>>
>
>
> We have a product that has outgrown the 8051. Even the 33MHz Dallas single
> cycle 8051's are no longer fast enough to do the job. Both program and data
> requirements have outgrown the current 512kB X/P memory mapping schema. I
> am examining and comparing two major solutions. One is building a better
> 8051 inside an FPGA with a 3:1 improvement in speed and heaps of hardware
> speed ups to critical tasks.
> The other is migrating to a faster processor
> (e.g. an ARM, but the actual processor is pretty irrelevant once we get to
> C). The latter has numerous other advantages, not least of which is the
> maintainability and expandability promised by conversion to C.
I'd define the data sizes carefully; depending on the code/data split,
you might already be above single-chip devices. There are very few
1Mbyte ARM uCs, so this might push you into a microprocessor solution,
which is quite a different animal.
>
> The reason I was interested in metrics for the conversion process is that
> the major difference in the development cost of the two approaches is the
> need to recode to use the faster processor.
It depends on the project, but another viable solution would be to split
into two controllers. The stable low level stuff, stays in 80C51(s), and
the new stuff goes into some 32 bit core.
uC these days are so cheap, they cost less than the packing, or
cables, in many projects.
-jg
Reply by Hans-Bernhard Broeker●August 11, 2006
> > May I know the exact reason for this conversion?
> >
> > Best Regards,
> > Vivekanandan M
> >
> We have a product that has outgrown the 8051. Even the 33MHz Dallas single
> cycle 8051's are no longer fast enough to do the job. Both program and data
> requirements have outgrown the current 512kB X/P memory mapping schema.
Hold it a moment, please. Just so we're 101% perfectly clear about
this: you've outgrown a 512 KiB super-8051 system writing all the code
in *assembler*? I'll hand it to you, you must have had a team of
brave people work on such a monster. I've maintained a 96 KiB
code-size, Super-8051 project done entirely in assembly, and
considered that to be sitting on the fence, leaning dangerously towards
"mission impossible" territory.
> I am examining and comparing two major solutions. One is building a
> better 8051 inside an FPGA with a 3:1 improvement in speed
I'd be careful with that expected speedup. Just ask yourself: if it
were realistic to render that IP core Dallas bought for 100 MHz clock
frequency, what kept Dallas from doing that in their chips (keeping in
mind they're going ASIC, so they should be faster, not slower than an
FPGA)? Would DalSemi really shoot their own foot just like that?
> The reason I was interested in metrics for the conversion process is that
> the major difference in the development cost of the two approaches is the
> need to recode to use the faster processor.
There is, theoretically at least, a third option, albeit a
*thoroughly* nasty one: an 8051 machine code interpreter, running on
whatever CPU you can find that is fast enough to pull off that stunt.
--
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
Reply by Alf Katz●August 11, 2006
"Colin Paul Gloster" <Colin_Paul_Gloster@ACM.org> wrote in message
news:20060811144153.E69219@docenti.ing.unipi.it...
> On Fri, 11 Aug 2006, Alf Katz wrote:
>
> "[..]
>
> [..] One is building a better
> 8051 inside an FPGA with a 3:1 improvement in speed and heaps of hardware
> speed ups to critical tasks. [..]
>
> [..]"
>
> 8051s implemented in FPGAs are commercially available.
The core is (even the one Dallas bought), and I've found them to be a good
starting point. They tend to be *too* 8051 compatible without taking
advantage of what the extra pins and other resources of the FPGA offer.
Stuff like a true Harvard architecture (separate P and X mem) with non-muxed
address & data busses, single-write context switches, and the other hardware
speedups alluded to. That's what makes the C conversion to a cheaper,
faster MCU the riskier approach, and consequently why I'm trying to get a
handle on the effort (man hours/KSLOC) involved in the migration before
committing to one approach or the other.
Cheers,
Alf
Reply by steve●August 11, 2006
Alf Katz wrote:
> Yepp, you're right Steve (not at five trees). In this case the code is the
spec. It has been debugged over 10 years, is very stable, and is the
basis of the company's existence. The code in fact defines what the product
> does. The code is the documentation.
Yes, I deal with this situation all the time; the customer can't even
tell you precisely what the product requirements are, only the legacy
code can.
Reply by Colin Paul Gloster●August 11, 2006
On Fri, 11 Aug 2006, Alf Katz wrote:
"[..]
[..] One is building a better
8051 inside an FPGA with a 3:1 improvement in speed and heaps of hardware
speed ups to critical tasks. [..]
[..]"
8051s implemented in FPGAs are commercially available.
Reply by Alf Katz●August 11, 2006
Thanks, all, for your interesting replies. Unfortunately, no one seems to
have collected metrics for the *effort* involved in translating assembler to
C, which will be the main determinant of whether we proceed with
this project. I have performed the task on a small (<1k), but reasonably
representative portion of the code, and find that I can translate it at
approximately 130 assembler SLOC per hour, coming in reasonably cold, but
with an understanding of both C and assembler. I figure that after a couple
of weeks a(nother) competent assembler/C programmer (thin on the ground in
the antipodes) will get to know the common structures and macro usage a lot
better than I did in a day, so I think a rate of 100 SLOC/hour should be
achievable. Naturally, I'll budget on significantly less.
Interestingly, the number of lines of code was very close to the 2:1
Assembler:C SLOC ratio predicted by the function point boys.
Cheers,
Alf
Reply by Alf Katz●August 11, 2006
> May I know the exact reason for this conversion?
>
> Best Regards,
> Vivekanandan M
>
We have a product that has outgrown the 8051. Even the 33MHz Dallas single
cycle 8051's are no longer fast enough to do the job. Both program and data
requirements have outgrown the current 512kB X/P memory mapping schema. I
am examining and comparing two major solutions. One is building a better
8051 inside an FPGA with a 3:1 improvement in speed and heaps of hardware
speed ups to critical tasks. The other is migrating to a faster processor
(e.g. an ARM, but the actual processor is pretty irrelevant once we get to
C). The latter has numerous other advantages, not least of which is the
maintainability and expandability promised by conversion to C.
The reason I was interested in metrics for the conversion process is that
the major difference in the development cost of the two approaches is the
need to recode to use the faster processor.
Cheers,
Alf