On 2014-10-17, Don Y <this@is.not.me.com> wrote:

> Not a question of toolchain

How are you going to use a memory model that isn't supported by your
toolchain?

> but, rather, what the *environment* already "expects" (provides).

What do you mean by "environment"? Are you asking how the CPU
instructions and addressing modes work? Are you asking how much
physical RAM your platform has?

> Hence the "*practical*" qualifier in my original post.

I've no clue what you mean by "practical" if you're not going to take
into account what your toolchain supports.

--
Grant Edwards               grant.b.edwards        Yow! I joined scientology
                                  at               at a garage sale!!
                                gmail.com
x86 real mode
Started by ●October 17, 2014
Reply by ●October 17, 2014
Reply by ●October 17, 2014
Don Y wrote:

> On 10/17/2014 7:31 AM, Arlet Ottens wrote:
>> On 10/17/2014 04:23 PM, Don Y wrote:
>>> What are the most conservative, *practical* expectations I can
>>> make living in x86 real mode: TEXT of 64K and DATA of (disjoint)
>>> 64K? BSS in its own segment? Or, shared with DATA?
>>>
>>> And, how seamlessly will the compiler let me *implicitly* move
>>> those segments around as well as between them? (e.g., practical
>>> limitations on code/data sizes). "PC" handled 640K so should
>>> I expect that to be the size of my playground?
>>
>> It would depend on what tools you use. I remember working with Borland
>> tools, and they'd offer a choice of several different memory models.
>
> So, the "most conservative" would be to assume a tiny-ish model -- 64K
> TOTAL address space? (with everything residing therein)

Depends on how you want to be "most conservative". TINY model means you
can effectively ignore segment registers and work with 16-bit flat
addresses. Of course this is an assumption that doesn't hold
everywhere. Being "conservative" from the other standpoint would mean
you have to assume that every object is in a different segment, i.e.
the LARGE / HUGE model.

Stefan
Reply by ●October 17, 2014
David Brown <david.brown@hesbynett.no> wrote:

> On 17/10/14 16:23, Don Y wrote:

(snip)

>> What are the most conservative, *practical* expectations I can
>> make living in x86 real mode: TEXT of 64K and DATA of (disjoint)
>> 64K? BSS in its own segment? Or, shared with DATA?

> It should, I think, be possible to have DS, CS and SS each pointing to
> a different 64 KB segment. But .bss and .data would both be part of
> the data segment. Access to other segments (including pointers to
> stack data, if SS is not the same as DS) is via "far pointers".

Some instructions, such as CALL, use a far pointer. Others use a
segment override prefix. Most use DS: for accessing data, but you can
put a CS:, ES:, FS:, or GS: prefix on the instruction (or on the data
operand), and it will use the specified segment instead.

If you have a far pointer, you load it into a segment register / index
register pair, and then use the appropriate prefix.

-- glen
Reply by ●October 17, 2014
On Fri, 17 Oct 2014 07:23:40 -0700, Don Y <this@is.not.me.com> wrote:

> What are the most conservative, *practical* expectations I can
> make living in x86 real mode: TEXT of 64K and DATA of (disjoint)
> 64K? BSS in its own segment? Or, shared with DATA?

The smallest would be putting everything into a single 64 KiB segment.
That would be equivalent to a 1960/70's minicomputer.

Putting 64 KiB of TEXT in one segment and DATA+BSS in another 64 KiB
would be equivalent to the separate code/data address spaces of some
minicomputers. Some had issues with self-modifying code, but unless you
did something exotic, there should not have been many issues even on
the x86.

Those small models could be easily programmed in assembler, but the
larger data models, with segment overrides and long pointers, made
assembly programming really ugly, so it is understandable that
high-level language support was developed.
Reply by ●October 17, 2014
Don Y wrote:

> Hi,
>
> What are the most conservative, *practical* expectations I can
> make living in x86 real mode: TEXT of 64K and DATA of (disjoint)
> 64K? BSS in its own segment? Or, shared with DATA?

Go find out how the tiny, small, (medium?) large and huge models worked
in Borland and M$ tools. It's all about how the compiler treats the
segment registers.

And then there's extended memory, expanded memory, exalted* memory.

*I made that last one up.

> And, how seamlessly will the compiler let me *implicitly* move
> those segments around as well as between them? (e.g., practical
> limitations on code/data sizes). "PC" handled 640K so should
> I expect that to be the size of my playground?
>
> [Presumably, any "object" is constrained to fit within a single
> segment]
>
> I imagine this will all be accomplished in the linkage editor
> (not visible to the source code).
>
> [What a screwed up architecture!]

It fit well enough into memory prices for the time. But yeah. We know :)

> Thx,
> --don

--
Les Cargill
Reply by ●October 17, 2014
Don Y wrote:

> On 10/17/2014 7:31 AM, Arlet Ottens wrote:
>> On 10/17/2014 04:23 PM, Don Y wrote:
>>> Hi,
>>>
>>> What are the most conservative, *practical* expectations I can
>>> make living in x86 real mode: TEXT of 64K and DATA of (disjoint)
>>> 64K? BSS in its own segment? Or, shared with DATA?
>>>
>>> And, how seamlessly will the compiler let me *implicitly* move
>>> those segments around as well as between them? (e.g., practical
>>> limitations on code/data sizes). "PC" handled 640K so should
>>> I expect that to be the size of my playground?
>>
>> It would depend on what tools you use. I remember working with Borland
>> tools, and they'd offer a choice of several different memory models.
>
> So, the "most conservative" would be to assume a tiny-ish model -- 64K
> TOTAL address space? (with everything residing therein)
>
> [I don't really care what it is, just need to know the constraints
> before I settle on a design.

The point is that you have choices.

> E.g., I surely wouldn't use int's
> if values would fit in char's and every byte of data "cost" me
> a byte of code!!]

Okay then.

--
Les Cargill
Reply by ●October 17, 2014
On 17.10.14 17:23, Don Y wrote:

> Hi,
>
> [What a screwed up architecture!]
>
> Thx,
> --don

Actually, Intel did not intend the 8086 real mode to be used as such,
only as a bootstrap mode for the 80286. The success of the 8088 and the
original PC came as a surprise, and it influenced the design of the
80386 with its virtual 8086 mode.

There were no PCs at the design time of the 8086 and 80286, and the aim
was for well-protected real-time multitasking applications. So it was
natural that the 80286 does not voluntarily return to real mode from
protected mode. This led to a massive kludge in the PC/AT, where the
keyboard controller can reset the main processor and the RTC CMOS RAM
contains a code explaining why the reset happened this time.

--
Tauno Voipio
Reply by ●October 17, 2014
Hi Don,

On Fri, 17 Oct 2014 07:23:40 -0700, Don Y <this@is.not.me.com> wrote:

> What are the most conservative, *practical* expectations I can
> make living in x86 real mode: TEXT of 64K and DATA of (disjoint)
> 64K? BSS in its own segment? Or, shared with DATA?

If you use the tiny (64K code+data) or small (64K code, 64K data)
model, .bss and stack share space with your data. If you choose a model
with multiple data segments, .bss and stack can be separate (but,
depending on the tool, you may have to ask for it explicitly).

> And, how seamlessly will the compiler let me *implicitly* move
> those segments around as well as between them? (e.g., practical
> limitations on code/data sizes). "PC" handled 640K so should
> I expect that to be the size of my playground?

The linker determines where your segments get placed (assuming the OS
permits the requested placement).

> [Presumably, any "object" is constrained to fit within a single
> segment]

In the "huge" model, arrays and structs can straddle segments. However,
you should take care that no individual element of an array or struct
lies across a segment boundary ... I have found that many compilers
don't renormalize intermediate pointers when indexing from a huge
pointer, so you can get into trouble when the offset is close to or
exceeds 64K. I've been bitten by that more times than I care to admit.
[The solution is to deliberately construct a new huge pointer. When you
store the new pointer it will be normalized and thus safe to use.]

Arithmetic on "far" pointers affects only the 16-bit offset and wraps
at the segment boundary. This means that two far pointers having
different segments can't reasonably be compared. If you are using
multiple data segments and you need to compare pointers, you have to
use normalized "huge" pointers. Do be aware that huge pointer
arithmetic can be quite a bit slower due to (re)normalization of the
results.

You really only need to choose the huge model if you expect a single
array or struct to exceed 64K. If you need huge pointers purely for
comparison, they can be mixed with far pointers in any of the multiple
data segment models. Because far pointer arithmetic is faster, far
pointers should be preferred wherever you can live with their
limitations.

> I imagine this will all be accomplished in the linkage editor
> (not visible to the source code).

Depends on the tool - some compilers use pragmas to control placement
of objects into particular segments. Placement of the segments
themselves is controlled by the linker (and/or OS).

> [What a screwed up architecture!]

It isn't bad until you grow beyond 64K data and are forced into
explicit use of far or huge pointers. There's a model that allows >64K
code and <=64K data where the compiler transparently handles code
pointers so you don't have to worry about them.

> Thx,
> --don

George
Reply by ●October 17, 2014
On 10/17/2014 8:36 AM, Arlet Ottens wrote:

> On 10/17/2014 05:28 PM, Don Y wrote:
>
>>> Here's an overview of the memory models:
>>> http://en.wikipedia.org/wiki/Intel_Memory_Model
>>>
>>> The smallest one is the Tiny model, but I don't know if it makes
>>> sense to use that as a design constraint, since it's a trivial
>>> matter to change to another memory model if your tools support it.
>>
>> Not a question of toolchain but, rather, what the *environment*
>> already "expects" (provides). Hence the "*practical*" qualifier in
>> my original post.
>
> The environment is provided by the toolchain startup code, so it is a
> question of toolchain.

The environment is defined by the rest of the code with which I have to
co-operate. It is poorly documented -- hence my query as to what I
could likely expect to encounter.

I'm all set, though. Just have to figure out what I want to shoehorn in
and then find the right sized shoehorn!

Thanks!
Reply by ●October 17, 2014
On 2014-10-17, Don Y <this@is.not.me.com> wrote:

> On 10/17/2014 8:36 AM, Arlet Ottens wrote:
>> On 10/17/2014 05:28 PM, Don Y wrote:
>>
>>> Not a question of toolchain but, rather, what the *environment*
>>> already "expects" (provides). Hence the "*practical*" qualifier in
>>> my original post.
>>
>> The environment is provided by the toolchain startup code, so it is a
>> question of toolchain.
>
> The environment is defined by the rest of the code with which I have
> to co-operate. It is poorly documented -- hence my query as to what I
> could likely expect to encounter.

So you wanted us to explain the memory model used by code that we
haven't seen and wasn't even mentioned until now?

--
Grant Edwards               grant.b.edwards        Yow! I have a VISION!
                                  at               It's a RANCID double-
                                gmail.com          FISHWICH on an
                                                   ENRICHED BUN!!