
Portable Assembly

Started by rickman May 27, 2017
On 28.5.2017 г. 00:52, Theo Markettos wrote:
> Dimiter_Popoff <dp@tgi-sci.com> wrote:
>> The need for portability arises when you have megabytes of
>> sources which are good and need to be moved to another, better
>> platform. For smallish projects - anything which would fit in
>> an MCU flash - porting is likely a waste of time, rewriting it
>> for the new target will be faster if done by the same person
>> who has already done it once.
>
> Back in the 80s, lots of software was written in assembly. But it was
> common for software to be cross-platform - a popular game might come out for
> half a dozen or more machines, using Z80, 6502, 68K, 8086, 6809, etc.
>
> Obviously 'conversion' involved more than just the instruction set - parts
> had to be written for the memory available and make use of the platform's
> graphics capabilities (which could be substantially different). But were
> there tools to handle this, or did the programmers sit down and rewrite the
> assembly from scratch for each version?
>
> Theo
I am not aware of tools doing it; the code must have been rewritten. The exception on your list is the 6809, which was source level compatible with the 6800 (i.e. 6800 code could be assembled into 6809 code, slightly larger but very similar object code).

BTW I still have a 6809 system working under DPS - emulated as a task in a window, running MDOS09 (which ran on the EXORciser systems), http://tgi-sci.com/misc/sc09em.gif . The 6809 assembler is what I grew up on back in the '80s.

I may of course be simply unaware of something. I have never looked into other people's work more than I needed to do what I wanted to do as fast as I could; many times I may have chosen to reinvent things simply because this was the fastest (pre-www) way.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
On 5/27/2017 2:17 PM, Les Cargill wrote:
> That's what C is for.
Arguably, ANY HLL.
> This being said, I've been doing this for
> 37 years and have only a few times seen an actual need for
> portability - usually, the new hardware is so radically
> different that porting makes little sense.
Depends on how you handle your abstractions in the design. If you tie the design directly to the hardware, then you've implicitly made it dependent on that hardware -- without even being aware of the dependencies.

OTOH, you can opt to create abstractions that give you a "slip sheet" above the bare iron -- at some (small, if done well) cost in efficiency. (e.g., "Hardware Abstraction Layer" -- though not necessarily as explicit or limiting therein)

E.g., my current RTOS moves reasonably well between different hardware platforms (I'm running on ARM and x86, currently) with the same sorts of services exported to the higher level API's. OTOH, the API's explicitly include provisions that allow the "application" layers to tailor themselves to key bits of the hardware made largely opaque by the API (e.g., MMU page sizes, number and granularity of hardware timers, etc.) But, this changes the level of proficiency required of folks working with those API's. Arguably, I guess it should (?)

Of course, if you want to shed all "hardware dependencies" and just code to a POSIX API... <shrug>

One could make an abstraction that is sufficiently *crude* (the equivalent of single-transistor logic) and force the coder to use that as an implementation language; then, recognize patterns of "operations" and map those to templates that correlate with opcodes of a particular CPU (i.e., many operations -> one opcode). Or, the HLL approach of mapping *an* operation into a sequence of CPU-specific opcodes. Or, many<->many, in between.
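[Ed. note: to make the "slip sheet" idea concrete, here is a minimal sketch in C of the kind of abstraction being described -- a timer HAL whose tick granularity is queryable rather than hidden. All names (hal_timer_ops, the ARM back-end stubs, debounce_delay) are invented for illustration; they are not taken from any RTOS discussed in this thread.]

/* Hypothetical hardware-abstraction layer for a one-shot timer.
 * Application code sees only this interface; each port supplies
 * one concrete implementation at link (or registration) time.   */
#include <stdint.h>

typedef struct hal_timer_ops {
    void     (*start)(uint32_t microseconds);  /* arm a one-shot timer */
    void     (*stop)(void);                    /* cancel it            */
    uint32_t (*granularity_us)(void);          /* tick size: queryable
                                                  instead of hidden    */
} hal_timer_ops;

/* --- one back-end per platform (only one is linked in) ------------- */

/* ARM port: stubs standing in for real register accesses */
static void     arm_start(uint32_t us)  { (void)us; /* program counter regs */ }
static void     arm_stop(void)          { /* disable the counter */ }
static uint32_t arm_granularity(void)   { return 1; /* 1 us ticks */ }

const hal_timer_ops hal_timer = { arm_start, arm_stop, arm_granularity };

/* --- portable application code ------------------------------------- */
void debounce_delay(void)
{
    /* The application adapts to the granularity the port reports,
       rather than assuming a particular timer resolution. */
    uint32_t g = hal_timer.granularity_us();
    hal_timer.start(g < 1000 ? 5000 : 5 * g);
}

The application code compiles unchanged against any back-end; only the ops table is swapped per port -- which is the "small, if done well" efficiency cost mentioned above.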
On 5/27/2017 2:31 PM, Dimiter_Popoff wrote:
> The need for portability arises when you have megabytes of
> sources which are good and need to be moved to another, better
> platform. For smallish projects - anything which would fit in
> an MCU flash - porting is likely a waste of time, rewriting it
> for the new target will be faster if done by the same person
> who has already done it once.
+1

The *concepts*/design are what you are trying to reuse, not the *code*.

OTOH, we see increasing numbers of designs migrating into software that would previously have been done with hardware, as the costs of processors fall and capabilities rise. This makes it economical to leverage the higher levels of integration available in an MCU over that of "discretes" or, worse, a *specific* "custom".

E.g., I can design an electronic tape rule in hardware or software in roughly the same amount of effort. But, the software version will be more mutable, in the long term, and leverage a single "raw part number" (the unprogrammed MCU) in the MRP system.

OToOH, we are seeing levels of complexity now -- even in SoC's -- that make "big" projects much more commonplace. I'd hate to have to recode a cell-phone for a different choice of processor if I'd not made plans for that contingency in the first place!
On 28.5.2017 г. 01:52, Don Y wrote:
> On 5/27/2017 2:31 PM, Dimiter_Popoff wrote:
>> The need for portability arises when you have megabytes of
>> sources which are good and need to be moved to another, better
>> platform. For smallish projects - anything which would fit in
>> an MCU flash - porting is likely a waste of time, rewriting it
>> for the new target will be faster if done by the same person
>> who has already done it once.
>
> +1
>
> The *concepts*/design are what you are trying to reuse, not
> the *code*.
>
> OTOH, we see increasing numbers of designs migrating into
> software that would previously have been done with hardware,
> as the costs of processors fall and capabilities rise.
> This makes it economical to leverage the higher levels of
> integration available in an MCU over that of "discretes"
> or, worse, a *specific* "custom".
Well of course, it is where all of us here have been moving for the last 25 years or so (for me, since the HC11 days).
>
> E.g., I can design an electronic tape rule in hardware or
> software in roughly the same amount of effort. But, the software
> version will be more mutable, in the long term, and leverage
> a single "raw part number" (the unprogrammed MCU) in the MRP
> system.
Yes of course, but porting does not necessarily mean porting to another CPU architecture, typically you will reuse the code on the same one - and modify just some peripheral interactions etc. sort of thing.
>
> OToOH, we are seeing levels of complexity now -- even in SoC's -- that
> make "big" projects much more commonplace. I'd hate to have to
> recode a cell-phone for a different choice of processor if I'd not
> made plans for that contingency in the first place!
>
Well phones do not have the flash as part of the SoC; I said "in the MCU flash", meaning on the same chip. This is what I regard as a "small" thingie, can't see what it will have to do to take up more than 3-4 months of my time as long as I know what I want to program.

Anything where external disks and/or "disks" are involved is in the other category of course.

Dimiter
On Saturday, May 27, 2017 at 2:39:41 PM UTC-5, rickman wrote:
> Someone in another group is thinking of using a portable assembler to write
> code for an app that would be ported to a number of different embedded
> processors including custom processors in FPGAs. I'm wondering how useful
> this will be in writing code that will require few changes across CPU ISAs
> and manufacturers.
>
> I am aware that there are many aspects of porting between CPUs that are
> assembly language independent, like writing to Flash memory. I'm more
> interested in the issues involved in trying to use a universal assembler to
> write portable code in general. I'm wondering if it restricts the
> instructions you can use or if it works more like a compiler where a single
> instruction translates to multiple target instructions when there is no one
> instruction suitable.
>
> Or do I misunderstand how a portable assembler works? Does it require a
> specific assembly language source format for each target just like using the
> standard assembler for the target?
>
> --
>
> Rick C
> ported to a number of different embedded processors including custom processors in FPGAs

It's possible to do direct threaded code in C. For small projects, the number of threaded code routines is small and highly application specific. So all the threaded code segments are very portable and the debugging is in the threaded code routines (e.g. one can perfect the application in C on a PC and then migrate to any number of custom ISAs).

That said, I am currently creating a system of symbolic constants for all the op-codes and operand values (using VHDL, and for each specific ISA). One can create symbolic constants for various locations in the code (and manually update the constants as code gets inserted or deleted). Per-opcode functions can be defined that make code generation less troublesome. The code (either constant expressions or function calls) is laid out as initialization for the instruction memory. Simulation can be used to debug the code and the ISA. A quick two step process: edit the code and run the simulator.

One can also write a C (or any other language) program that generates the binary code file, which is then inserted into FPGA RAM during the FPGA compile step. Typically one writes a separate function for each op-code or label generator (and for each ISA). Two passes through all the function calls (e.g. the application program): the first pass to generate the labels and the second pass to generate the binary file. For use with FPGA simulation this is a three step process: edit the application program, run the binary file generator and run the FPGA simulator.

The preferred solution is to support label generators in the memory initialization sections of the VHDL or Verilog code. I would be very interested if someone has managed to do label generators.
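[Ed. note: for the curious, here is a minimal self-contained sketch of the direct threaded code technique mentioned above. It relies on the GCC/Clang "labels as values" extension (&&label, goto *), since true direct threading is not expressible in strictly standard C; a portable variant would use a function-pointer dispatch loop instead. The four-instruction "ISA" is invented purely for illustration.]

/* Direct threaded code in C (GCC/Clang extension). The "program" is a
 * thread of routine addresses with inline operands; it computes and
 * prints 2 + 3.                                                       */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    intptr_t stack[16], *sp = stack;

    /* The thread: addresses of the routines below, plus literal operands. */
    void *program[] = {
        &&op_push, (void *)2,
        &&op_push, (void *)3,
        &&op_add,
        &&op_print,
        &&op_halt
    };
    void **ip = program;

    goto **ip++;               /* start threading */

op_push:
    *sp++ = (intptr_t)*ip++;   /* next thread cell is the literal operand */
    goto **ip++;
op_add:
    sp--; sp[-1] += sp[0];     /* pop two, push the sum */
    goto **ip++;
op_print:
    printf("%ld\n", (long)sp[-1]);
    goto **ip++;
op_halt:
    return 0;
}

Each routine ends with goto **ip++, jumping straight into the next routine of the thread -- there is no central dispatch loop, which is what makes the threading "direct", and also what lets the same application thread be retargeted by rewriting only the handful of routines for each custom ISA.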
On 5/27/2017 4:14 PM, Dimiter_Popoff wrote:
>> E.g., I can design an electronic tape rule in hardware or
>> software in roughly the same amount of effort. But, the software
>> version will be more mutable, in the long term, and leverage
>> a single "raw part number" (the unprogrammed MCU) in the MRP
>> system.
>
> Yes of course, but porting does not necessarily mean porting to
> another CPU architecture, typically you will reuse the code on
> the same one - and modify just some peripheral interactions etc.
> sort of thing.
Yes, but you can't always be sure of that. I've seen many products "squirm" when the platform they adopted for early versions suddenly became unavailable -- or, too costly -- to support new versions/revisions of the product. This is probably one of the most maddening positions to be in: having *a* product and facing a huge re-development just to come up with the NEXT product in its evolution.
>> OToOH, we are seeing levels of complexity now -- even in SoC's -- that
>> make "big" projects much more commonplace. I'd hate to have to
>> recode a cell-phone for a different choice of processor if I'd not
>> made plans for that contingency in the first place!
>
> Well phones do not have the flash as part of the SoC, I said
> "in the MCU flash", meaning on the same chip. This is what I regard
> as a "small" thingie, can't see what it will have to do to take
> up more than 3-4 months of my time as long as I know what I want
> to program.
But you can pick devices with *megabytes* of on-board (on-chip) FLASH, nowadays:
<https://www.microchip.com/wwwproducts/en/ATSAM4SD32C>

It seems fairly obvious that more real-estate will find its way into devices' "memory" allocations. You could keep a small staff busy just tracking new offerings and evaluating price/performance points for each.

I've discarded several "finished" hardware designs for my current project because I *know* they'll be obsolete before the rest of the designs are complete! Instead, I concentrate on getting all of the software written for the various applications on hardware that I *know* I won't be using (I've a stack of a couple dozen identical x86 SBC's that I've been repurposing for each of the application designs) just to allow me to have "working prototypes" that the other applications can talk to as THEY are being developed.

As most of the design effort is in OS and application software -- with a little bit of specialized hardware I/O development -- the choice of processor is largely boring (so, why make it NOW?)
> Anything where external disks and/or "disks" are involved is in the
> other category of course.
On 28.5.2017 г. 03:14, Don Y wrote:
> On 5/27/2017 4:14 PM, Dimiter_Popoff wrote:
>>> E.g., I can design an electronic tape rule in hardware or
>>> software in roughly the same amount of effort. But, the software
>>> version will be more mutable, in the long term, and leverage
>>> a single "raw part number" (the unprogrammed MCU) in the MRP
>>> system.
>>
>> Yes of course, but porting does not necessarily mean porting to
>> another CPU architecture, typically you will reuse the code on
>> the same one - and modify just some peripheral interactions etc.
>> sort of thing.
>
> Yes, but you can't always be sure of that. I've seen many
> products "squirm" when the platform they adopted for early
> versions suddenly became unavailable -- or, too costly -- to
> support new versions/revisions of the product.
Of course you can't be sure what other people will do. We can't really be sure what we'll do ourselves.... :-)
> ... This is probably
> one of the most maddening positions to be in: having *a* product
> and facing a huge re-development just to come up with the NEXT
> product in its evolution.
Exactly this situation forced my hand to create vpa (virtual processor assembly language). I had several megabytes of good sources written in 68k assembly and the 68k line was coming to an end. Sure, I could have used it for another few years, but it was obvious I had to move forward, so I did.
>
>>> OToOH, we are seeing levels of complexity now -- even in SoC's -- that
>>> make "big" projects much more commonplace. I'd hate to have to
>>> recode a cell-phone for a different choice of processor if I'd not
>>> made plans for that contingency in the first place!
>>
>> Well phones do not have the flash as part of the SoC, I said
>> "in the MCU flash", meaning on the same chip. This is what I regard
>> as a "small" thingie, can't see what it will have to do to take
>> up more than 3-4 months of my time as long as I know what I want
>> to program.
>
> But you can pick devices with *megabytes* of on-board (on-chip) FLASH,
> nowadays:
> <https://www.microchip.com/wwwproducts/en/ATSAM4SD32C>
> It seems fairly obvious that more real-estate will find its way into
> devices' "memory" allocations.
Well this part is closer to a "big thing" but not there yet. 160k RAM is by no means much nowadays - try buffering a 100 Mbps Ethernet link on that, for example. It is still for stuff you can do within a few months if you know what you want to do. The high level language will take care of clogging up the 2M flash if anything :D.

Although I must say that my devices (MPC5200b based) have 2M flash and can boot a fully functional dps off it... including most of the MCA software. It takes a disk to get the complete functionality but much of it - OS, windows, shell and all commands etc. - fits in (just between 100 and 200k are used for "BIOS" purposes, the rest of the 2M is a "disk"). This with 64M RAM of course - and no wallpapers, you need a proper disk for that :-).

However I doubt anyone writing in C could fit a tenth of that in 2M flash, which is why it is there; it is just way more than really necessary for the RAM that part has if programming were not done at high level.

Dimiter

------------------------------------------------------
Dimiter Popoff, TGI http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/
On 5/27/2017 5:38 PM, Dimiter_Popoff wrote:
>> ... This is probably
>> one of the most maddening positions to be in: having *a* product
>> and facing a huge re-development just to come up with the NEXT
>> product in its evolution.
>
> Exactly this situation forced my hand to create vpa (virtual processor
> assembly language). I had several megabytes of good sources written in
> 68k assembly and the 68k line was coming to an end. Sure, I could have
> used it for another few years, but it was obvious I had to move
> forward, so I did.
It's one of the overwhelming reasons I opt for HLL's -- I can *buy* a tool that will convert my code to the target of my choosing. :>
>>>> OToOH, we are seeing levels of complexity now -- even in SoC's -- that
>>>> make "big" projects much more commonplace. I'd hate to have to
>>>> recode a cell-phone for a different choice of processor if I'd not
>>>> made plans for that contingency in the first place!
>>>
>>> Well phones do not have the flash as part of the SoC, I said
>>> "in the MCU flash", meaning on the same chip. This is what I regard
>>> as a "small" thingie, can't see what it will have to do to take
>>> up more than 3-4 months of my time as long as I know what I want
>>> to program.
>>
>> But you can pick devices with *megabytes* of on-board (on-chip) FLASH,
>> nowadays:
>> <https://www.microchip.com/wwwproducts/en/ATSAM4SD32C>
>> It seems fairly obvious that more real-estate will find its way into
>> devices' "memory" allocations.
>
> Well this part is closer to a "big thing" but not there yet. 160k RAM
> is by no means much nowadays - try buffering a 100 Mbps Ethernet link
> on that, for example. It is still for stuff you can do within a few
> months if you know what you want to do. The high level language will
> take care of clogging up the 2M flash if anything :D.
I don't think you realize just how clever modern compilers have become. I've taken portions of old ASM projects and tried to code them in HLL's to see what the "penalty" would be. It was alarming to see how much cleverer they have become (over the course of many decades!). Of course, you need to be working on a processor that is suitable to their use to truly benefit from their cleverness -- I doubt an 8x300 compiler would beat my ASM code! :>
> Although I must say that my devices (MPC5200b based) have 2M flash
> and can boot a fully functional dps off it... including most of the
> MCA software. It takes a disk to get the complete functionality
> but much of it - OS, windows, shell and all commands etc. -
> fits in (just between 100 and 200k are used for "BIOS" purposes,
> the rest of the 2M is a "disk").
> This with 64M RAM of course - and no wallpapers, you need a proper
> disk for that :-).
I use the FLASH solely for initial POST, a secure netboot protocol, a *tiny* RTOS and "fail safe/secure" hooks to ensure a "mindless" device can't get into -- or remain in -- an unsafe state (including the field devices likely tethered to it). [A device may not be easily accessible to a human user!]

Once "enough" of the hardware is known to be functional, a second level boot drags in more diagnostics and a more functional protocol stack. A third level boot drags in the *real* OS and real network stack. After that, other aspects of the "environment" can be loaded and, finally, the "applications".

[Of course, any of these steps can fail/timeout and leave me with a device with *just* the functionality of the FLASH]

Even this level of functionality deferral isn't enough to keep me from having to "customize" the FLASH in each type of device (cuz they have different I/O complements). So, big incentive to come up with a more universal set of "peripherals" just to cut down on the number of different designs.
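[Ed. note: a minimal sketch in C of the staged-boot-with-fallback pattern described above. All stage names and bodies are hypothetical -- this is the shape of the idea, not the poster's actual code.]

/* Staged boot: each stage may fail or time out, and the device then
 * simply stays at the last level of functionality that came up
 * cleanly, instead of ending up in an undefined state.             */
#include <stdbool.h>

typedef bool (*boot_stage_fn)(void);

/* Hypothetical stage bodies -- real ones would talk to hardware. */
static bool post_and_failsafe_hooks(void) { /* POST + safety hooks */ return true; }
static bool netboot_diagnostics(void)     { /* more diags + stack  */ return true; }
static bool load_real_os(void)            { /* real OS + network   */ return true; }
static bool load_environment(void)        { return true; }
static bool load_applications(void)       { return true; }

static const boot_stage_fn stages[] = {
    post_and_failsafe_hooks,   /* always resident in FLASH */
    netboot_diagnostics,       /* second-level boot        */
    load_real_os,              /* third-level boot         */
    load_environment,
    load_applications,
};

void boot(void)
{
    /* Run stages in order; stop at the first failure/timeout and
       remain at the last good level of functionality. */
    for (unsigned i = 0; i < sizeof stages / sizeof stages[0]; i++)
        if (!stages[i]())
            return;
}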
> However I doubt anyone writing in C could fit a tenth of that
> in 2M flash, which is why it is there; it is just way more than really
> necessary for the RAM that part has if programming were not
> done at high level.
On Sat, 27 May 2017 21:58:50 +0000 (UTC), Grant Edwards
<invalid@invalid.invalid> wrote:

> On 2017-05-27, Theo Markettos <theom+news@chiark.greenend.org.uk> wrote:
>
>> Back in the 80s, lots of software was written in assembly. But it was
>> common for software to be cross-platform - a popular game might come out for
>> half a dozen or more machines, using Z80, 6502, 68K, 8086, 6809, etc.
>>
>> Obviously 'conversion' involved more than just the instruction set - parts
>> had to be written for the memory available and make use of the platform's
>> graphics capabilities (which could be substantially different). But were
>> there tools to handle this, or did the programmers sit down and rewrite the
>> assembly from scratch for each version?
>
> Usually the latter.
>
> There were tools that were supposed to help you do things like port
> 8080 assembly language programs to the 8086, but from what I
> read/heard they didn't turn out to be very useful in the real world.
In the original marketing material, it was claimed that the 8086 was 8080 compatible. However, when the opcode tables were released, it was quite obvious that this was not the case. Then they claimed assembly level compatibility...
upsidedown@downunder.com wrote on 5/28/2017 12:37 AM:
> On Sat, 27 May 2017 21:58:50 +0000 (UTC), Grant Edwards
> <invalid@invalid.invalid> wrote:
>
>> On 2017-05-27, Theo Markettos <theom+news@chiark.greenend.org.uk> wrote:
>>
>>> Back in the 80s, lots of software was written in assembly. But it was
>>> common for software to be cross-platform - a popular game might come out for
>>> half a dozen or more machines, using Z80, 6502, 68K, 8086, 6809, etc.
>>>
>>> Obviously 'conversion' involved more than just the instruction set - parts
>>> had to be written for the memory available and make use of the platform's
>>> graphics capabilities (which could be substantially different). But were
>>> there tools to handle this, or did the programmers sit down and rewrite the
>>> assembly from scratch for each version?
>>
>> Usually the latter.
>>
>> There were tools that were supposed to help you do things like port
>> 8080 assembly language programs to the 8086, but from what I
>> read/heard they didn't turn out to be very useful in the real world.
>
> In the original marketing material, it was claimed that the 8086 was
> 8080 compatible. However, when the opcode tables were released, it was
> quite obvious that this was not the case. Then they claimed assembly
> level compatibility ...
I've never heard they claimed anything other than assembly source compatibility. It would have been very hard to make the 8086 opcode compatible with the 8080. They needed to add a lot of new instructions, and the 8080 opcodes used nearly all the space. They would have had to treat all the new opcodes as extended opcodes, wasting a byte on each one. Or worse, they could have used instruction set modes, which would have been a disaster.

--

Rick C