
Portable Assembly

Started by rickman May 27, 2017
Dimiter_Popoff wrote:
> On 28.5.2017 г. 00:17, Les Cargill wrote:
>> rickman wrote:
>>> Someone in another group is thinking of using a portable assembler
>>> to write code for an app that would be ported to a number of
>>> different embedded processors including custom processors in FPGAs.
>>> I'm wondering how useful this will be in writing code that will
>>> require few changes across CPU ISAs and manufacturers.
>>>
>>> I am aware that there are many aspects of porting between CPUs that
>>> is assembly language independent, like writing to Flash memory. I'm
>>> more interested in the issues involved in trying to use a universal
>>> assembler to write portable code in general. I'm wondering if it
>>> restricts the instructions you can use or if it works more like a
>>> compiler where a single instruction translates to multiple target
>>> instructions when there is no one instruction suitable.
>>>
>>> Or do I misunderstand how a portable assembler works? Does it
>>> require a specific assembly language source format for each target
>>> just like using the standard assembler for the target?
>>>
>>
>> That's what C is for.
>
> Or Basic. Or Fortran etc.
>
Not so much. Perhaps Fortran plus say, LINPACK.
> However, they are by far not what a "portable assembler" - existing
> under the name Virtual Processor Assembler in our house is.
> And never will be, like any high level language C is yet another
> phrase book - convenient when you need to do a quick interaction
> when you don't speak the language - and only then.
>
"I am returning this tobacconist; it is scratched." - Monty Python. It has been a long time since C presented a serious constraint in performance for me.
>>
>> This being said, I've been doing this for
>> 37 years and have only a few times seen an actual need for
>> portability - usually, the new hardware is so radically
>> different that porting makes little sense.
>>
>
> The need for portability arises when you have megabytes of
> sources which are good and need to be moved to another, better
> platform.
Mostly, I've seen the source code outlast the company for which it was written :) I would personally view "megabytes of source" as an opportunity to infuse a system with better ideas through a total rewrite. I understand that this view is rarely shared; people prefer the arbitrage of technical debt.
> For smallish projects - anything which would fit in
> an MCU flash - porting is likely a waste of time, rewriting it
> for the new target will be faster if done by the same person
> who has already done it once.
>
L'il MCU projects are essentially disposable. Too many heresies.
> Dimiter
>
> ------------------------------------------------------
> Dimiter Popoff, TGI http://www.tgi-sci.com
> ------------------------------------------------------
> http://www.flickr.com/photos/didi_tgi/
>
-- Les Cargill
On Sat, 27 May 2017 16:17:57 -0500, Les Cargill wrote:

> rickman wrote:
>> Someone in another group is thinking of using a portable assembler to
>> write code for an app that would be ported to a number of different
>> embedded processors including custom processors in FPGAs. I'm wondering
>> how useful this will be in writing code that will require few changes
>> across CPU ISAs and manufacturers.
>>
>> I am aware that there are many aspects of porting between CPUs that is
>> assembly language independent, like writing to Flash memory. I'm more
>> interested in the issues involved in trying to use a universal
>> assembler to write portable code in general. I'm wondering if it
>> restricts the instructions you can use or if it works more like a
>> compiler where a single instruction translates to multiple target
>> instructions when there is no one instruction suitable.
>>
>> Or do I misunderstand how a portable assembler works? Does it require
>> a specific assembly language source format for each target just like
>> using the standard assembler for the target?
>>
>
> That's what C is for.
>
> This being said, I've been doing this for 37 years and have only a few
> times seen an actual need for portability - usually, the new hardware is
> so radically different that porting makes little sense.
I have used some very good portable C code across three or four different architectures (depending on whether you view a 188 and a 286 as different architectures). This was all in one company, over the span of 9 years or so.

So -- perhaps your scope is limited?

--
www.wescottdesign.com
On 27/05/17 23:36, upsidedown@downunder.com wrote:
> On Sat, 27 May 2017 21:05:18 +0000 (UTC), Grant Edwards
> <invalid@invalid.invalid> wrote:
>
>> On 2017-05-27, rickman <gnuarm@gmail.com> wrote:
>>
>>> Someone in another group is thinking of using a portable assembler
>
> The closest I can think of is called "C" :-)
>
Sometimes people call C a "portable assembly" - they are wrong. But one of the purposes of C is so that you don't /need/ assembly, portable or not.

What has been discussed so far in this branch (I haven't read the whole thread yet) has been a retargetable assembler - a way to generate an assembler program for different processors without going through all the work each time. Such tools have existed for many years, and are an efficient way to make an assembler if you need to cover more than one target. They don't help much for writing the actual target assembly code, however - though usually you can share the same directives (commands for sections, macros, etc.). GNU binutils "gas" is the most widely used example.

As far as a portable assembly language is concerned, that does not and cannot exist. Assembly language is by definition too tightly connected to the ISA of the target. It is possible to have a language that is higher abstraction than assembler, but still lower level and with tighter control than C, and which can be translated/compiled to different target assemblies. LLVM is a prime example.
On 27/05/17 23:52, Theo Markettos wrote:
> Dimiter_Popoff <dp@tgi-sci.com> wrote:
>> The need for portability arises when you have megabytes of
>> sources which are good and need to be moved to another, better
>> platform. For smallish projects - anything which would fit in
>> an MCU flash - porting is likely a waste of time, rewriting it
>> for the new target will be faster if done by the same person
>> who has already done it once.
>
> Back in the 80s, lots of software was written in assembly. But it was
> common for software to be cross-platform - a popular game might come out for
> half a dozen or more machines, using Z80, 6502, 68K, 8086, 6809, etc.
>
> Obviously 'conversion' involved more than just the instruction set - parts
> had to be written for the memory available and make use of the platform's
> graphics capabilities (which could be substantially different). But were
> there tools to handle this, or did the programmers sit down and rewrite the
> assembly from scratch for each version?
>
Writing a game involves a great deal more than just the coding. Usually, the coding is in fact just a small part of the whole effort - all the design of the gameplay, the storyline, the graphics, the music, the algorithms for interaction, etc., is inherently cross-platform. The code structure and design is also mostly cross-platform. Some parts (the graphics and the music) need to be adapted to suit the limitations of the different target platforms. The final coding in assembly would be done by hand for each target.
On 27/05/17 23:31, Dimiter_Popoff wrote:
> On 28.5.2017 г. 00:17, Les Cargill wrote:
>> rickman wrote:
>>> Someone in another group is thinking of using a portable assembler
>>> to write code for an app that would be ported to a number of
>>> different embedded processors including custom processors in FPGAs.
>>> I'm wondering how useful this will be in writing code that will
>>> require few changes across CPU ISAs and manufacturers.
>>>
>>> I am aware that there are many aspects of porting between CPUs that
>>> is assembly language independent, like writing to Flash memory. I'm
>>> more interested in the issues involved in trying to use a universal
>>> assembler to write portable code in general. I'm wondering if it
>>> restricts the instructions you can use or if it works more like a
>>> compiler where a single instruction translates to multiple target
>>> instructions when there is no one instruction suitable.
>>>
>>> Or do I misunderstand how a portable assembler works? Does it
>>> require a specific assembly language source format for each target
>>> just like using the standard assembler for the target?
>>>
>>
>> That's what C is for.
>
> Or Basic. Or Fortran etc.
Exactly - you use a programming language appropriate for the job. For most low-level work, that is C (or perhaps C++, if you /really/ know what you are doing). Some parts of your code will be target-specific C, some parts will be portable C. And a few tiny bits will be assembly or "intrinsic functions" that are assembly made to look like C functions.

Most of the assembly used will actually be written by the toolchain provider (startup code, library code, etc.) - and if you are using a half-decent processor, this would almost certainly have been better written in C than assembly. C is /not/ a "portable assembly" - it means you don't /need/ a portable assembly.
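As a small illustration of those "tiny bits" (my own sketch, not from the thread; it assumes a GCC- or Clang-style toolchain targeting an ARM Cortex-M part, and the helper names are made up), the target-specific pieces usually end up as inline-asm or intrinsic wrappers, so everything that calls them stays ordinary portable C:

#include <stdint.h>

/* Hypothetical wrappers; GCC/Clang extensions, Cortex-M target assumed. */
static inline void disable_interrupts(void)
{
    __asm volatile ("cpsid i" ::: "memory");   /* Cortex-M: set PRIMASK, mask IRQs */
}

static inline uint32_t swap_bytes32(uint32_t x)
{
    return __builtin_bswap32(x);               /* compiler intrinsic, no hand-written asm needed */
}

Callers just use disable_interrupts() and swap_bytes32(); only these two wrappers need touching when the target changes.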
>
> However, they are by far not what a "portable assembler" - existing
> under the name Virtual Processor Assembler in our house is.
No, it is not a "portable assembler". It is just a translator to generate PPC assembly from 68K assembly, because you had invested so much time and code in 68K assembly and wanted to avoid re-writing everything for the PPC. That's a reasonable enough business strategy, and an alternative to writing an emulator for the 68K on the PPC, or some sort of re-compiler. But it is not a "portable assembler".

If you can take code written in your VPA and translate it into PIC, 8051, msp430, ARM, and x86 assembly, in a way that gives near-optimal efficiency on each target, while letting you write your VPA code knowing exactly which instructions will be generated on the target, /then/ you would have a portable assembler. But such a language cannot be made, for obvious reasons.

What you have is a two-target sort-of assembler that gives you reasonable code on two different targets. You could also say that you have your own personal low-level programming language with compiler backends for two different targets. Again, that's fair enough - and if it lets you write the code you want, great. But it is not a portable assembly.
> And never will be, like any high level language C is yet another
> phrase book - convenient when you need to do a quick interaction
> when you don't speak the language - and only then.
Spoken like a true fanatic (or salesman).
>
>>
>> This being said, I've been doing this for
>> 37 years and have only a few times seen an actual need for
>> portability - usually, the new hardware is so radically
>> different that porting makes little sense.
>>
>
> The need for portability arises when you have megabytes of
> sources which are good and need to be moved to another, better
> platform. For smallish projects - anything which would fit in
> an MCU flash - porting is likely a waste of time, rewriting it
> for the new target will be faster if done by the same person
> who has already done it once.
>
On 5/27/2017 2:52 PM, Theo Markettos wrote:
> Dimiter_Popoff <dp@tgi-sci.com> wrote:
>> The need for portability arises when you have megabytes of
>> sources which are good and need to be moved to another, better
>> platform. For smallish projects - anything which would fit in
>> an MCU flash - porting is likely a waste of time, rewriting it
>> for the new target will be faster if done by the same person
>> who has already done it once.
>
> Back in the 80s, lots of software was written in assembly.
For embedded systems (before we called them that), yes. There were few compilers that were really worth the media they were delivered on -- and few meant to generate code for bare iron.
> But it was
> common for software to be cross-platform - a popular game might come out for
> half a dozen or more machines, using Z80, 6502, 68K, 8086, 6809, etc.
>
> Obviously 'conversion' involved more than just the instruction set - parts
> had to be written for the memory available and make use of the platform's
> graphics capabilities (which could be substantially different). But were
> there tools to handle this, or did the programmers sit down and rewrite the
> assembly from scratch for each version?
Speaking from the standpoint of the *arcade* game industry, games were developed on hardware specific to that particular game (trying, where possible, to leverage as much of a previous design as possible -- for reasons of economy). Most games were coded from scratch in ASM; very little was "lifted" from Game X to act as a basis for Game Y (this slowly changed, over time -- but, mainly in terms of core services... runtime executives predating "real" OS's).

Often, the hardware was *very* specific to the game (e.g., a vector graphic display didn't draw vectors in a frame buffer but, rather, directly controlled the deflection amplifiers -- X & Y -- of the monitor to move the "beam" around the display tube in a particular path). As such, the "display I/O" wasn't really portable in an economic sense -- no reason to make a Z80 version of a 6502-based game with that same wonky display hardware. E.g., Atari had a vector graphic display system (basically, a programmable display controller) that could ONLY draw curves -- because curves were so hard to draw with a typical vector graphic processor! (You'd note that every "line segment" on the display was actually a curve of a particular radius.)

Also, games taxed their hardware to the limit. There typically wasn't an "idle task" that burned excess CPU cycles; all cycles were used to make the game "do more" (players are demanding). The hardware was designed to leverage whatever features the host CPU (often more than one CPU for different aspects of the game -- e.g., "sound" was its own processor, etc.) offered to the greatest advantage. E.g., 680x processors were a delight to interface to a frame buffer as the bus timing directly lent itself to "display controller gets access to the frame buffer for THIS half clock cycle... and the CPU gets access for the OTHER half cycle" (no wait states, as would be the case with a processor having variable bus cycle timings (e.g., Z80)). Many manufacturers invested in full custom chips to add value (and make the games harder to counterfeit).

A port of a game to another processor (and perhaps entire hardware platform) typically meant rewriting the entire game, from scratch. But, 1980's games (arcade pieces) weren't terribly big -- tens of KB of executables. Note that any graphics for the game were directly portable (many of the driving games and some of the Japanese pseudo-3D games had HUGE image ROMs that were displayed by dedicated hardware -- under the control of the host CPU).

In practical terms, these were small enough projects that *seeing* one that already works (that YOU coded or someone at your firm/affiliate coded) was the biggest hurdle to overcome; you know how the game world operates, the algorithms for the "robots", what the effects should look like, etc.

If you look at emulations of these games (e.g., MAME), you will see that they aren't literal copies but, rather, just intended to make you THINK you're playing the original game (because the timing of the algorithms in the emulations isn't the same as that in the original game). E.g., the host (application) typically synchronized its actions to the position of the "beam" repainting the display from the frame buffer (in the case of a raster game; similar concepts for vector games) to minimize visual artifacts (like "object tearing") and provide other visual features ("OK, the beam has passed this portion of the display, we can now go in and alter it in preparation for its next pass through").

In a sense, the games were small systems, by today's standards.
Indeed, many could be *emulated* on SoC's, today -- for far less money than their original hardware and far less development time!
On 5/28/2017 8:04 PM, Les Cargill wrote:
>> The need for portability arises when you have megabytes of
>> sources which are good and need to be moved to another, better
>> platform.
>
> Mostly, I've seen the source code outlast the company for
> which it was written :)
Or, the platform on which it was originally intended to run!

OTOH, there are many "regulated" industries where change is NOT seen as "good", where even trivial changes can have huge associated costs (e.g., formal validation, reestablishing performance and reliability data, etc.).

[I've seen products that required the manufacturer to scour the "used equipment" markets in order to build more devices simply because the *new* equipment on which the design was based was no longer being sold!]

[[I've a friend here who hoards big, antique (Sun) iron because his enterprise systems *run* on old SPARCservers and the cost of replacing/redesigning the software to run on new/commodity hardware and software is simply too far beyond the company's means!]]
> I would personally view "megabytes of source" as an opportunity to
> infuse a system with better ideas through a total rewrite. I
> understand that this view is rarely shared; people prefer the
> arbitrage of technical debt.
I've never seen this done, successfully. The "second system" effect seems to sabotage these attempts -- even for veteran developers!

Instead of reimplementing the *same* system, they let feeping creaturism take over. The more developers, the more "pet features" try to weasel their way into the new design. As each *seems* like a tiny little change, no one ever approaches any of them with a serious evaluation of their impact(s) on the overall system. And, everyone is chagrined at how much *harder* it is to actually fold these changes into the new design -- because the new design was conceived with the OLD design in mind (i.e., WITHOUT these additions -- wasn't that the whole point of this effort?).

Meanwhile, your (existing) market is waiting on the new release of the OLD product (with or without the new features) instead of a truly NEW product. And, your competitors are focused on their implementations of "better" products (no one wants to play "catch-up"; they all aim to "leap-frog").

Save your new designs for new products!
On 29.5.2017 г. 12:00, David Brown wrote:
> On 27/05/17 23:31, Dimiter_Popoff wrote:
> .....
>>
>> However, they are by far not what a "portable assembler" - existing
>> under the name Virtual Processor Assembler in our house is.
>
> No, it is not a "portable assembler". It is just a translator to
> generate PPC assembly from 68K assembly, ....
I might agree with that - if we understand "portable" as "universally portable".
>
> But it is not a "portable assembler". If you can take code written in
> your VPA and translate it into PIC, 8051, msp430, ARM, and x86 assembly,
Well, who in his right mind would try to port serious 68020 or sort of code to a PIC or MSP430 etc. I am talking about what is practical and has worked for me. It would be a pain to port back from code I have written for power to something with fewer registers, but within reason it is possible and can even be practical. Yet porting to power has been easier because it had more registers than the original 68k and many other things; it is just more powerful and very well thought out, whoever did it knew what he was doing. It even has little endian load and store opcodes... (I wonder if ARM has big endian load/store opcodes.)

Yet I agree it is not an "assembler" I suppose. I myself refer to it at times as a compiler, then as an assembler... It can generate many lines per statement - many opcodes, e.g. the 64/32 bit divide the 68020 has is done in a loop, no way around that (17 opcodes, just counted it). Practically the same as what any other compiler would have to do.
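To make the divide example concrete (a rough C sketch of my own, not VPA output and not the actual 17-opcode sequence), a target without the 68020's native 64/32 DIVU has to synthesize something like this shift-and-subtract loop; quotient overflow is ignored here:

#include <stdint.h>

static uint32_t divu_64_32(uint64_t dividend, uint32_t divisor, uint32_t *remainder)
{
    uint64_t rem  = 0;
    uint64_t quot = 0;

    for (int bit = 63; bit >= 0; bit--) {
        rem  = (rem << 1) | ((dividend >> bit) & 1u); /* bring down the next dividend bit */
        quot <<= 1;
        if (rem >= divisor) {                         /* divisor fits: subtract, set quotient bit */
            rem  -= divisor;
            quot |= 1u;
        }
    }
    *remainder = (uint32_t)rem;
    return (uint32_t)quot;
}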
> in a way that gives near-optimal efficiency on each target, while
> letting you write your VPA code knowing exactly which instructions will
> be generated on the target, /then/ you would have a portable assembler.
It comes pretty close to that as long as your CPU has 32 registers, but you need to know exactly what each line does only during debugging, running step by step through the native code.
>> And never will be, like any high level language C is yet another
>> phrase book - convenient when you need to do a quick interaction
>> when you don't speak the language - and only then.
>
> Spoken like a true fanatic (or salesman).
It may sound so but it is not what I intended.

VPA has made me a lot more efficient than anyone else I have been able to compare myself with. Since I also am only human it can't be down to me, not by *that* much. It has to be down to something else; in all likelihood it is the toolchain I use. My "phrasebook" comment stays I'm afraid.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
On 29/05/17 14:08, Dimiter_Popoff wrote:
> On 29.5.2017 г. 12:00, David Brown wrote:
>> On 27/05/17 23:31, Dimiter_Popoff wrote:
>> .....
>>>
>>> However, they are by far not what a "portable assembler" - existing
>>> under the name Virtual Processor Assembler in our house is.
>>
>> No, it is not a "portable assembler". It is just a translator to
>> generate PPC assembly from 68K assembly, ....
>
> I might agree with that - if we understand "portable" as "universally
> portable".
>
"Universally portable" is perhaps a bit ambitious :-) But to be called a "portable assembly", I would expect a good deal more than two architectures that are relatively closely related (32-bit, reasonably orthogonal instruction sets, big endian). I would imagine that translating 68K assembly into PPC assembly is mostly straightforward - unlike translating it into x86, or even ARM. (The extra registers on the PPC give you the freedom you need for converting complex addressing modes on the 68K into reasonable PPC code - while the ARM has fewer registers available.)
>>
>> But it is not a "portable assembler". If you can take code written in
>> your VPA and translate it into PIC, 8051, msp430, ARM, and x86 assembly,
>
> Well who in his right mind would try to port serious 68020 or sort of
> code to a PIC or MSP430 etc.
If the code were /portable/ assembly, then it would be possible. Standard C code will work fine on the msp430, ARM, x86, 68K and PPC - though it is unlikely to be efficient on a PIC or 8051.
> I am talking about what is practical and has worked for me. It would be
> a pain to port back from code I have written for power to
> something with fewer registers but within reason it is possible and
> can even be practical. Yet porting to power has been easier because
> it had more registers than the original 68k and many other things,
> it is just more powerful and very well thought, whoever did it knew
> what he was doing. It even has little endian load and store opcodes...
> (I wonder if ARM has big endian load/store opcodes.)
>
(I agree that the PPC is a fine ISA, and have enjoyed using it on a couple of projects. ARM Cortex M, the most prevalent cores for microcontrollers, does not have big endian load or store opcodes. But it has byte-reverse instructions for both 16-bit and 32-bit values. The traditional ARM instruction set may have them - I am not as familiar with that.)

If it were /portable/ assembly, then your code that works well for the PPC would automatically work well for the 68K. The three key points about assembly, compared to other languages, are that you know /exactly/ what instructions will be generated, including the ordering, register choices, etc., that you can access /all/ features of the target cpu, and that you can write code that is as efficient as possible for the target.

There is simply no way for this to be portable. Code written for the 68k may use complex addressing modes - they need multiple instructions in PPC assembly. If you do this mechanically, you will know exactly what instructions this generates - but the result will not be as efficient as code that re-uses registers or re-orders instructions for better pipelining. Code written for the PPC may use more registers than are available on the 68K - /something/ has to give.

Thus your VPA may be a fantastic low-level programming language (having never used it or seen it, I can't be sure - but I'm sure you would not have stuck with it if it were not good!). But it is not a portable assembly language - it cannot let you write assembly-style code for more than one target.
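As a rough illustration of the addressing-mode point (the assembly in the comments is hand-written and only indicative, not the output of any particular tool; the table pointer is assumed in a0/r3 and the index in d1/r4), one scaled-index access that a single 68020 instruction expresses needs a short sequence on PPC:

#include <stdint.h>

uint32_t fetch(const uint32_t *table, uint32_t index)
{
    /* 68020:  move.l (0,a0,d1.l*4),d0     ; one instruction, *4 scaling done in the EA
     * PPC:    slwi   r5,r4,2              ; scale the index by 4
     *         lwzx   r3,r3,r5             ; indexed load
     */
    return table[index];
}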
> Yet I agree it is not an "assembler" I suppose. I myself refer to it
> at times as a compiler, then as an assembler... It can generate many
> lines per statement - many opcodes, e.g. the 64/32 bit divide the 68020
> has is done in a loop, no way around that (17 opcodes, just counted it).
> Practically the same as what any other compiler would have to do.
That's fine - you have a low-level language and a compiler, not a portable assembler. Some time it might be fun to look at some example functions, compiled for either the 68K or the PPC (or, better still, both), and compare both the source code and the generated object code to modern C and modern C compilers. (Noting that the state of C compilers has changed a great deal since you started making VPA.)
>
>> in a way that gives near-optimal efficiency on each target, while
>> letting you write your VPA code knowing exactly which instructions will
>> be generated on the target, /then/ you would have a portable assembler.
>
> It comes pretty close to that as long as your CPU has 32 registers,
> but you need to know exactly what each line does only during debugging,
> running step by step through the native code.
>
>>> And never will be, like any high level language C is yet another
>>> phrase book - convenient when you need to do a quick interaction
>>> when you don't speak the language - and only then.
>>
>> Spoken like a true fanatic (or salesman).
>
> It may sound so but it is not what I intended.
Fair enough.
> VPA has made me a lot more efficient than anyone else I have been able
> to compare myself with. Since I also am only human it can't be down
> to me, not by *that* much. It has to be down to something else; in all
> likelihood it is the toolchain I use. My "phrasebook" comment
> stays I'm afraid.
>
Good comparisons are, of course, extremely difficult - and not least, extremely expensive. You would need to do large scale experiments with at least dozens of programmers working on a serious project before you could compare efficiency properly.
On 2017-05-29, Dimiter_Popoff <dp@tgi-sci.com> wrote:
> On 29.5.2017 г. 12:00, David Brown wrote:
>> On 27/05/17 23:31, Dimiter_Popoff wrote:
>> .....
>>>
>>> However, they are by far not what a "portable assembler" - existing
>>> under the name Virtual Processor Assembler in our house is.
>>
>> No, it is not a "portable assembler". It is just a translator to
>> generate PPC assembly from 68K assembly, ....
>
> I might agree with that - if we understand "portable" as "universally
> portable".
>
>>
>> But it is not a "portable assembler". If you can take code written in
>> your VPA and translate it into PIC, 8051, msp430, ARM, and x86 assembly,
>
> Well who in his right mind would try to port serious 68020 or sort of
> code to a PIC or MSP430 etc.
Nobody. Yet, that's what a "Universal Assembler" would be able to do.
> I am talking about what is practical and has worked for me.
And it is not anything close to a "Universal Assembler".

--
Grant