EmbeddedRelated.com
Forums

Execute Disable Bit in Intel Core 2 Duo processor

Started by karthikbalaguru December 8, 2009
On Dec 8, 11:34 pm, Paul Keinanen <keina...@sci.fi> wrote:
> This overlapping is a (possibly stupid) design made by the OS and
> linker designer. Using smaller segments than 4 GiB and using separate
> segment base address would allow protecting the code segment and
> data+stack segment from each other.
It's not quite that easy if you want to have a flat address space encompassing both your code and data - and there are very definite advantages to that.

You *can* create a code segment with a limit value that prevents code from executing above a certain address, and then mark all the corresponding pages read-only (thus the pages below the limit are at most execute/read, and the pages above the limit are read/write). The problem is that this conflicts with the very common OS design of having both OS and application segments (segment != x86 hardware segment) in the address space, but separate.

It would perhaps have been reasonable to end up with four areas in the address space - OS code, OS data, application code and application data (with both code areas below the CS limit), at the expense of additional fragmentation of the address space.
On Dec 9, 11:17 am, James Harris <james.harri...@googlemail.com>
wrote:
> They do make a fuss about a single bit don't they. In a sense it is a
> fix to a problem that didn't need to exist. Each code segment could
> have been prevented from overlapping with data but it wasn't. As much
> to the point, operating systems could have been more secure but they
> weren't. For example, why should execution of any unprivileged code
> whether it's in a buffer or not be able to subvert a system? Or why
> should a buffer overflow be able to overwrite privileged code or data?
> Neither should be possible.
In general it cannot. Injecting code into applications is quite enough to do damage.
On Wed, 9 Dec 2009 14:04:13 -0800 (PST), "robertwessel2@yahoo.com"
<robertwessel2@yahoo.com> wrote:

>On Dec 8, 11:34 pm, Paul Keinanen <keina...@sci.fi> wrote:
>> This overlapping is a (possibly stupid) design made by the OS and
>> linker designer. Using smaller segments than 4 GiB and using separate
>> segment base address would allow protecting the code segment and
>> data+stack segment from each other.
>
>It's not quite that easy if you want to have a flat address space
>encompassing both your code and data - and there are very definite
>advantages to that.
Such as? On a PDP-11 with separate I/D support, I preferred losing the (more or less useless) ability to pass parameters in-line after the call via the PC, and being forced to use stack-based parameter passing instead, in exchange for 64 KiB of code space and 64 KiB of data space :-). Of course, those were the days of core memory. The situation might be different with some Harvard-architecture processors (such as the PIC), in which the instruction space is in flash and the volatile data space is in RAM.
>You *can* create a code segment with a limit
>value that prevents code from executing above a certain address, and
>then mark all the corresponding pages read-only (thus the pages below
>the limit are at most execute/read, and the pages above the limit are
>read/write). The problem is that this conflicts with the very common
>OS design of having both OS and application segments (segment != x86
>hardware segment) in the address space, but separate.
While I fully understand the need for a 32- or even 64-bit data/stack address space for handling large data arrays, with current modular software design methods a 64 KiB code address space should be enough, provided that "far" calls can be used easily. My guess is that the reason for insisting on the full code space is frustration with the 128/256/512-byte branch restrictions of most older platforms. With current programming practices there shouldn't be much need for branches beyond +/-32 KiB, and a "far" call shouldn't be a problem.
>It would
>perhaps have been reasonable to end up with four areas in the address
>space - OS code, OS data, application code and application data (with
>both code areas below the CS limit), at the expense of additional
>fragmentation of the address space.
The more or less standard practice with 32-bit OSes since the 1970s has been 2 GiB for user data space and 2 GiB for kernel code/data. Unfortunately, this has encouraged quite careless use of the virtual space: for example, in Windows NT it is quite hard to find even 100 MiB of contiguous address space for mapping a file into the virtual address space.
On Wed, 09 Dec 2009 14:05:54 +0000, Simon Clubley wrote:

> For example, you could have a model which allowed 64K of code, but allowed
> a larger than 64K data size. (But don't ask me to remember which memory
> model it was. :-))
Compact.

FWIW:

Model    Data  Code
Tiny     near
Small    near  near
Medium   near  far
Compact  far   near
Large    far   far
Huge     huge  huge

near = single segment, far = multiple segments without normalisation
(pointer arithmetic only affects the offset), huge = multiple segments
with normalisation.
On Thu, 10 Dec 2009 02:23:04 +0200, Paul Keinanen wrote:

> The more or less standard practice with 32 bit OS since the 1970's has
> been 2 GiB for user data space and 2 GiB for kernel code/data.
Linux/x86 typically uses 3 GiB for user space, with the top 1 GiB reserved for kernel mode.
On Wed, 09 Dec 2009 09:17:09 -0800, James Harris wrote:

>> What is so special with the 'Execute Disable Bit' option, and
>> why is it highlighted so explicitly in the Intel Core 2 Duo
>> processors? Any ideas?
>
> They do make a fuss about a single bit don't they. In a sense it is a
> fix to a problem that didn't need to exist. Each code segment could
> have been prevented from overlapping with data but it wasn't. As much
> to the point, operating systems could have been more secure but they
> weren't.
A large part of the problem was that Windows was designed for the 8086 while Unix was designed for systems with page-level protection. The 80386 didn't include page-level execute permission, on the assumption that software would use segments; but Unix assumes a flat address space (e.g. the pointers returned by mmap() and passed to munmap() can refer to either code or data).
> For example, why should execution of any unprivileged code
> whether it's in a buffer or not be able to subvert a system? Or why
> should a buffer overflow be able to overwrite privileged code or data?
> Neither should be possible.
In general, it can't. On both Windows NT and Unix, a buffer overflow can only subvert that process; however, that still means that an attacker can run code under the account in question.

Windows 95/98/ME had problems because the bottom 1 MiB of physical memory had to be writable by all applications so that legacy real-mode 8086 applications worked. OTOH, "classic" Macs (i.e. prior to the Unix-based OS X) didn't have *any* memory protection, yet buffer overflows were relatively uncommon, mostly due to the use of Objective-C rather than C/C++.

Linux/x86 has supported a non-executable stack since before the NX bit was added, by making the code segment shorter than the data segment (the stack is at the top of the user-mode address space), but this doesn't work for the heap (which is at the bottom of the address space). However, a non-executable stack caused problems for code which uses trampolines (quite common for Objective-C code), and for some emulators, so many distributions disabled this feature.

Various compiler features can guard against buffer overflows, but they have either a memory penalty (inserting guard pages between stack frames) or a performance penalty (inserting canary words which are checked before restoring the saved PC from the stack).
"Nobody" <nobody@nowhere.com> wrote in message 
news:pan.2009.12.10.17.04.54.250000@nowhere.com...
> Tiny     near
> Small    near  near
> Medium   near  far
> Compact  far   near
> Large    far   far
Looks like a mnemonic for Double Norwich Court Bob Major!

Nobody wrote:

> On Wed, 09 Dec 2009 14:05:54 +0000, Simon Clubley wrote:
>
>> For example, you could have a model which allowed 64K of code, but allowed
>> a larger than 64K data size. (But don't ask me to remember which memory
>> model it was. :-))
>
> Compact.
>
> FWIW:
>
> Model    Data  Code
> Tiny     near
> Small    near  near
> Medium   near  far
> Compact  far   near
> Large    far   far
> Huge     huge  huge
>
> near = single segment, far = multiple segments without normalisation
> (pointer arithmetic only affects the offset), huge = multiple segments
> with normalisation.
The memory model just set the default pointer type; it was not a limitation on the accessible code and data spaces per se.

VLV
On 2009-12-10, Nobody <nobody@nowhere.com> wrote:
> On Wed, 09 Dec 2009 14:05:54 +0000, Simon Clubley wrote:
>
>> For example, you could have a model which allowed 64K of code, but allowed
>> a larger than 64K data size. (But don't ask me to remember which memory
>> model it was. :-))
>
> Compact.
Thanks. (It's been a _long_ time since I had to care about this :-) ).

Simon.

--
Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
Microsoft: Bringing you 1980's technology to a 21st century world
