EmbeddedRelated.com

x86 real mode

Started by Don Y October 17, 2014
On Fri, 17 Oct 2014 13:54:25 -0700, Don Y <this@is.not.me.com> wrote:

>On 10/17/2014 8:36 AM, Arlet Ottens wrote:
>> On 10/17/2014 05:28 PM, Don Y wrote:
>>
>>>> Here's an overview of the memory models:
>>>> http://en.wikipedia.org/wiki/Intel_Memory_Model
>>>>
>>>> The smallest one is the Tiny model, but I don't know if it makes
>>>> sense to use that as a design constraint, since it's a trivial
>>>> matter to change to another memory model if your tools support it.
>>>
>>> Not a question of toolchain but, rather, what the *environment*
>>> already "expects" (provides). Hence the "*practical*" qualifier in
>>> my original post.
>>
>> The environment is provided by the toolchain startup code, so it is
>> a question of toolchain.
>
>The environment is defined by the rest of the code with which I have to
>co-operate. It is poorly documented -- hence my query as to what I
>could likely expect to encounter.
>
>I'm all set, though. Just have to figure out what I want to shoehorn
>in and then find the right sized shoehorn! Thanks!
The best guess is to look at the available RAM (for loadable programs) or the total amounts of RAM and ROM. If you have less than 64 KiB in total, or less than 64 KiB + 64 KiB, the answer is pretty obvious :-)
On Fri, 17 Oct 2014 21:48:18 +0000 (UTC), glen herrmannsfeldt
<gah@ugcs.caltech.edu> wrote:

>Tauno Voipio <tauno.voipio@notused.fi.invalid> wrote:
>
>(snip)
>> Actually, Intel did not intend to use the 8086 real mode
>> as such, only as bootstrap mode for 80286. The success
>> of 8088 and original PC came as a surprise, and it influenced
>> the design of 80386 with the virtual 8086 mode.
>
>Well, the 8086 was an upgrade from the 8080 or 8085.
>They knew by then that people were running out of address space.
>In 1976, I used an 8080 system with 64K unmarked (factory samples,
>or something like that) DRAM.
>
>Systems with (externally) bank switched RAM weren't all that
>unusual, so maybe putting something like bank switch inside
>the chip wouldn't have been so strange.
One of the (pre-release) selling points was that the 8086 was 8080 compatible. When the 8086 details were finally released, this turned out to mean only some degree of assembler-mnemonic similarity.
>But yes, I am pretty sure that Intel didn't expect the 80286
>to mostly be used in systems only running real mode.
The 80286 was a kind of stopgap between the 8080 and the iAPX 432, which proved to be disastrous.
On Fri, 17 Oct 2014 22:25:12 +0000 (UTC), glen herrmannsfeldt
<gah@ugcs.caltech.edu> wrote:

>Don Y <this@is.not.me.com> wrote:
>
>(snip, I wrote)
>
>>> Well, the 8086 was an upgrade from the 8080 or 8085.
>>> They knew by then that people were running out of address space.
>
>> Yes, an Our Hero, Bill figured 640K was more than anyone would
>> EVER need! :-/
>
>> [How much nicer things would have been had the 68K won that design-in]
>
>Might have been a 68008, though. They seem to like the 8 bit bus.
A world full of Sinclair QL clones :-)
On 18.10.14 02:39, Les Cargill wrote:
> Tauno Voipio wrote:
>> On 17.10.14 17:23, Don Y wrote:
>>> Hi,
>>>
>>> [What a screwed up architecture!]
>>>
>>> Thx,
>>> --don
>>
>> Actually, Intel did not intend to use the 8086 real mode
>> as such, only as bootstrap mode for 80286. The success
>> of 8088 and original PC came as a surprise, and it influenced
>> the design of 80386 with the virtual 8086 mode.
>>
>> There were no PC's at the design time of 8086 and 80286,
>> and the aim was for well protected real-time multitasking
>> applications.
>
> For an 8086? I'm pretty sure it was all 100% "real mode".
No - it was intentionally made similar to the boot-up mode of an 80286.

There would not be any segment registers in any sensible 16+ bit
processor; a bank switcher would have been much simpler. Segmented
processor architectures were well known decades before the 80(2)86, so
the registers were originally intended to point to segment descriptors
a la -286+.

I was at an invitational Intel meeting in 1980 (the PC is from -81) and
had some interesting discussions with the designers. They already had
some of the -386 planned, so that the -286 would not become
incompatible with it.

--
-TV
On 18.10.14 06:26, glen herrmannsfeldt wrote:

> OS/2 1.x allowed for 8K segment selectors for a user process,
> each could be up to 64K, so 512MB. With DOS, only 640K, or if
> you use the right display board, you can go up a little more.
This was (and is) an architectural limitation of the segment descriptor
system: there are 13 bits for local descriptor table indices. The 32-bit
system can have larger segments, but not more of them.

--
-TV
On Fri, 17 Oct 2014 15:39:20 -0700, Don Y <this@is.not.me.com> wrote:

>On 10/17/2014 3:25 PM, glen herrmannsfeldt wrote:
>> Don Y <this@is.not.me.com> wrote:
>>
>>>> Well, the 8086 was an upgrade from the 8080 or 8085.
>>
>>> The 8085 was essentially an 8080 with an internal clock driver
>>> (and a couple of other little fixups -- RIM/SIM, INT5.5/6.5/7.5,
>>> etc.)
>>
>> And a 5V only power supply.
>
>Note that most practical memory still required two (even *three*!)
>power supplies.
Only if you used DRAMs. Apart from some early EPROMs (such as the 1702), most EPROMs were single supply, as were small SRAMs. The first 6800 demo system was single supply.
>> As well as I know it, the 8085 and Z80
>> are but successors to the 8080, but different groups from intel.
>> (One group left, one stayed.) Being 5V only made it a lot easier
>> to use in small systems.
>
>Different approaches to the "problem" (that being, the design of
>an 8 bit MPU). Likewise, the Motogorilla/NatSemi/Signetics/TI/etc.
>folks each took a trip in a different direction. We seem to have
>been left with the worst of the bunch :-/
>
>(Well, perhaps not *worst* -- there were some really ghastly designs
>in the early/mid 70's. But, it beat the hell out of coding for the
>i4004!)
>
>The Z80 was a much more powerful architecture than the 8080/85.
>Especially if you were trying to deal with quick response times
>for ISRs, structured coding, etc. Far more suited to HLL constructs
>(though still very clumsy).
>
>Most of the "16b" machines were dog slow, by comparison. And, ate
>more memory on top of it!
>
>>>> They knew by then that people were running out of address space.
>>
>>> Yes, an Our Hero, Bill figured 640K was more than anyone would
>>> EVER need! :-/
>
>>> [How much nicer things would have been had the 68K won that design-in]
>>
>> Might have been a 68008, though. They seem to like the 8 bit bus.
>
>But, there was a SEAMLESS path upward from the 68000. Intel gave us
>all these "backward compatibility issues" to carry forward (into the
>31st century!). It was just a memory bandwidth tradeoff (08, 010, 020,
>etc.) moving forward.
You seem to think that the "computer" was invented by the semiconductor industry; in fact, there were a quarter of a century of computers before that. With higher levels of integration possible, more and more functionality could be integrated into a "single" chip. First you might have needed a few hundred TTL chips, then the key functionality could be put into a few LSI chips, and finally you could announce the world's first 4/8/16/32-bit chip, while still needing a large number of auxiliary chips.
>>> IIRC, the original "ISA" bus wasn't even formally characterized
>>> until after the fact.
>>
>> IBM always was, and still is, good at documenting what they did,
>> though not always why they did it. There are manuals that give lots
>> of detail on every line of the bus, and BIOS listings with comments.
>>
>> But then there were mistakes along the way. Edge triggered
>> interrupts make it hard to share INT lines.
>
>I think documentation exists *now* but not "back then". I can recall
>having one helluva time trying to design "ISA bus" cards with any
>sort of *guarantee* that they would work in ANY pc (even genuine big
>blue). You ended up designing empirically and to "typical" numbers as
>there were no hard and fast numbers that you could rely upon from a PC
>vendor.
There had been various minicomputer busses before the first microcomputers, so it should have been possible to pick the best features.
On Fri, 17 Oct 2014 21:10:18 -0700, Don Y <this@is.not.me.com> wrote:

>Hi George,
>
>On 10/17/2014 7:47 PM, George Neuner wrote:
>> On Fri, 17 Oct 2014 15:39:20 -0700, Don Y <this@is.not.me.com> wrote:
>>
>>> On 10/17/2014 3:25 PM, glen herrmannsfeldt wrote:
>>>> Don Y <this@is.not.me.com> wrote:
>>>>
>>>>> Yes, an Our Hero, Bill figured 640K was more than anyone would
>>>>> EVER need! :-/
>>
>> Billy boy had nothing to do with it - he wasn't even in the picture
>> when the PC was designed. IBM wanted CPM-86 for the PC and only
>> turned to Microsoft when DR made a mistake in dealing with them.
>
>Didn't mean to imply that he had a hand in the design of the hardware.
>Only that he had "concluded" that 640K was effectively "infinite"
>and, as such, cast the foundations of MS-DOS in mud instead of
>"thinking forward".
>
>Sort of like picking Jan 1 1970 as the epoch ("Obviously, time will
>come to an end before <original_unix_developers> die...")
>
>Always amusing to see lack of vision, in practice! :>
There are some flaws with the _32_ bit _signed_ representation. If you interpret it as a signed value, the problem is that it only extends back to December 1901, so you could not register the birth dates of many people living in the 1970s. For a real-time control system, at least, interpreting the 32-bit second counter as unsigned would allow use up to the year 2106. Another problem with Unix time is the handling of leap seconds.
On Fri, 17 Oct 2014 14:32:20 -0700, Don Y <this@is.not.me.com> wrote:

>Hi George,
>
>On 10/17/2014 12:29 PM, George Neuner wrote:
>> On Fri, 17 Oct 2014 07:23:40 -0700, Don Y <this@is.not.me.com> wrote:
>>
>>> What are the most conservative, *practical* expectations I can
>>> make living in x86 real mode: TEXT of 64K and DATA of (disjoint)
>>> 64K? BSS in its own segment? Or, shared with DATA?
>>
>> If you use the tiny (64K code+data) or small (64K code, 64K data)
>> model, .bss and stack share space with your data. If you choose a
>> model with multiple data segments, .bss and stack can be separate
>> (but tool dependent you may have to ask for it explicitly).
>
>It seems the "safest" (most conservative) assumption is to figure
>a single 64KB address space. If, instead, the x86 ALWAYS prepared
>a separate data space (etc), then I could assume a larger model.
>
>Or, if the tiny model was IMPRACTICAL for any use (and was just
>included as an homage to the 8085).
>
>Or, if ints were always 32b, etc.
>
>I.e., it seems safe to assume the tiny model was intended to be
>*usable* and not just "an engineering/marketing exercise".
What is the problem with a 64 KiB program address space when the total system memory is much larger? In a previous company, my department used machines with 64 KiB or 128 KiB (I+D) address spaces, and we never had problems with the program address space limits. Just split the functionality into a sufficient number of tasks. Of course, if you needed to access a huge database, this was problematic.
On Fri, 17 Oct 2014 22:31:44 +0000 (UTC), glen herrmannsfeldt
<gah@ugcs.caltech.edu> wrote:

>Hans-Bernhard Bröker <HBBroeker@t-online.de> wrote:
>> On 17.10.2014 at 23:32, Don Y wrote:
>
>(snip)
>>> If, instead, the x86 ALWAYS prepared
>>> a separate data space (etc), then I could assume a larger model.
>
>> Well, yes, it essentially always does just that. All code addressing
>> is relative to CS, all data addressing is relative to DS or SS, at
>> least by default. And since all those are allowed to be different,
>> the only a-priori _safe_ assumption obviously is that they will be.
>
>Not so unusual to keep DS and SS together.
>
>Also, even for the 8080 you could have separate address space
>for stack and data. There was a way to decode which data references
>were to the stack. I don't know anyone ever did that, though.
Some of the 8080 support chips had status outputs from which you could determine what type of access was intended, but did they also distinguish data vs. stack accesses, and not just instruction fetches vs. read/write data?
upsidedown@downunder.com wrote:

(snip, I wrote)
>>Systems with (externally) bank switched RAM weren't all that
>>unusual, so maybe putting something like bank switch inside
>>the chip wouldn't have been so strange.

> One of the (pre-release) selling point was that the 8086 was 8080
> compatible. When 8086 details finally released, this appeared to be
> some degree of assembler mnemonics similarity.
Most 8080 instructions translate to a single 8086 instruction, but some need two or three. The number of bytes might be a little more, so it won't fit into 64K anymore. Someone might still have the program that Intel sent out to do the conversion.
>>But yes, I am pretty sure that Intel didn't expect the 80286
>>to mostly be used in systems only running real mode.
> The 80286 was some kind of stop gap between the 8080 and > the iAPX432, which proved to be disastrous.
The four protection levels, as I understand, are from Multics and its
host. But yes, the 432 was way too complicated for what was needed at
the time.

--
glen