
Direction of Stack Growth

Started by karthikbalaguru October 21, 2007
Steve Underwood wrote:
> Terje Mathisen wrote:
>> Steve Underwood wrote:
>>> I've seen that elsewhere, but I've never seen a machine with a
>>> decrementing program counter. I bet there must be one somewhere.
>>> Every other strange combination has been used.
>>>
>>> There are various machines where the program counter doesn't
>>> actually count at all, but each instruction points to the next. Most
>>> microcoded systems do that, but a few higher level programmable
>>> machines have done so as well.
>>
>> "The Story of Mel":
>>
>> <http://catb.org/jargon/html/story-of-mel.html>
>>
>> Here's a relevant quote:
>>
>>> The new computer had a one-plus-one
>>> addressing scheme,
>>> in which each machine instruction,
>>> in addition to the operation code
>>> and the address of the needed operand,
>>> had a second address that indicated where, on the revolving drum,
>>> the next instruction was located.
>>
>> Terje
>
> If anecdotes are your only exposure to that kind of programming, you've
> led a sheltered life. :-) Much is still coded for specialist machines
> where there is no real program counter, and every instruction points to
> the next. It makes patches wonderfully arcane :-)
>
> Back in the good old days, when AMD was the king of the DSP chip makers,
> practically all DSP programming was done that way (and it took a lot of
> AMD chips to make one machine).
>
> Steve
I suppose if I'm going to say it's still done, I should quote a real world
example. Try looking at the high performance programmable timers used for
things like car engine control. Some of those work in this way.

Steve
Nick Maclaren wrote:
> In article <XvmdnaRhB9sU_oDanZ2dnUVZ_o-mnZ2d@speakeasy.net>,
> rpw3@rpw3.org (Rob Warnock) writes:
> |>
> |> Don't forget, DEC did it *both* ways! The PDP-10's hardware stack
> |> grew "up"; the PDP-11's grew "down". The overlap in the lifecycle
> |> of those two product lines was quite considerable. In fact, many
> |> later PDP-10s used PDP-11s as front-end processors, so you even had
> |> both ways in one system! ;-}
>
> Oh, indeed. Now, why the PDP-11 should have disproportionately more
> influence on computer scientists (and this is not the only respect),
> I leave to the sociologists.
Of course it has a louder voice in the computer world, if it goes to 11. :-) Steve
On Oct 23, 1:25 am, karthikbalaguru <karthikbalagur...@gmail.com>
wrote:
> Interesting :)
> 1) The author claims as below in that link -
> "Wikipedia tells me that most modern OSes grow the stack down which
> is odd given the security advantages of doing it up."
> Is that true or some kind of wrong information in internet ?
It's a common misconception that an upwards growing stack is less
vulnerable to buffer overflow/stack smashing attacks. In one limited
case that's semi-correct, where the buffer being overflowed is in the
routine that is doing the overflowing, and there is no active return
address on the stack after the overflowed buffer. Unfortunately the
vast majority of real stack smashes use subroutines to do the dirty
work in a buffer owned by a caller. That just moves the point at
which the bad return address is used. Consider:

void f(void)
{
    char s[4];
    strcpy(s, "abcdefghjklmnopqrstuvwxyz");
}

With a typical downward growing stack, the smashed return address will
be the one from f(). With an upwards growing stack, it'll be the
return from strcpy() that's altered. Not exactly a huge improvement.

And the stack growth direction doesn't do anything to prevent the
corruption of any other items in the same stack frame that are placed
after the buffer in question, or any items following an overflowed
buffer anywhere in the system. Consider what happens if the string in
the following structure is overflowed (again, assuming typical layout
in memory):

struct st {char s[4]; void (*fp)(void);};

In any event, the Wikipedia page on the subject was edited a couple of
months ago to eliminate the claim.
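To make that struct example concrete, here is a minimal sketch (my own
illustration, not part of the original post; the field sizes and filler
bytes are arbitrary) showing that an overrun of s lands in fp no matter
which way the stack grows, because struct members always sit at
increasing addresses:

#include <stdio.h>
#include <string.h>
#include <stddef.h>

struct st { char s[4]; void (*fp)(void); };

static void expected(void) { puts("expected function"); }

int main(void)
{
    struct st x;
    x.fp = expected;

    /* Deliberately write past the end of the 4-byte field s, far enough
       to reach fp (offsetof accounts for any padding).  The spill lands
       in fp because members are laid out at increasing addresses within
       the struct, independent of the stack growth direction. */
    memset(x.s, 'A', offsetof(struct st, fp) + sizeof x.fp);

    if (x.fp != expected)
        puts("fp was clobbered by the overflow");
    return 0;
}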
In article <1193129323.588079.51960@z24g2000prh.googlegroups.com>,
"robertwessel2@yahoo.com" <robertwessel2@yahoo.com> writes:
|> On Oct 23, 1:25 am, karthikbalaguru <karthikbalagur...@gmail.com>
|> wrote:
|> > Interesting :)
|> > 1) The author claims as below in that link -
|> >  "Wikipedia tells me that most modern OSes grow the stack down which
|> > is odd given the security advantages of doing it up."
|> > Is that true or some kind of wrong information in internet ?
|> 
|> It's a common misconception that an upwards growing stack is less
|> vulnerable to buffer overflow/stack smashing attacks.  In one limited
|> case that's semi-correct, where the buffer being overflowed is in the
|> routine that is doing the overflowing, and there is no active return
|> address on the stack after the overflowed buffer.  Unfortunately the
|> vast majority of real stack smashes use subroutines to do the dirty
|> work in a buffer owned by a caller.  That just moves the point at
|> which the bad return address is used.  ...

There is another case, too, but it is rarer.  Using a value that is
MUCH too large is more likely to cause a SIGSEGV than to trash some
frames much higher up the tree.

However, that does not deny your point, with which I agree.


Regards,
Nick Maclaren.

On Mon, 22 Oct 2007 16:12:31 +0000 (UTC), johnl@iecc.com (John L)
wrote:

>>I am pretty sure that this is yet another artifact of the way that
>>DEC was the dominating computer science supplier in the 1970s. Now,
>>why DEC did things the way they did, I don't know.
>
>It was the same time they gave us little-endian addressing. Perhaps
>it was just to do everything sdrawkcab.
Little endian addressing is a good idea if the bus width is less than the
address size. You can perform the effective address calculation on the LSB
(and generate the carry from that calculation) before the MSB is loaded.

In big endian systems with a narrow bus, you must first load both the MSB
and the LSB before you can start calculating the effective address; thus
the calculation is slower, or you need many more carry-lookahead gates to
perform the effective address calculation swiftly.
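A rough way to picture this in software (the byte-serial model and the
names below are my own illustration, not from the post): add a 16-bit
offset to a 16-bit base one byte at a time, low byte first, the way a
narrow-bus little-endian machine sees the bytes arrive.

#include <stdint.h>
#include <stdio.h>

/* Byte-serial effective-address calculation, LSB first: the low-byte sum
   (and its carry) can be formed as soon as the first byte arrives,
   before the high byte has even been fetched. */
static uint16_t ea_lsb_first(uint8_t base_lo, uint8_t base_hi,
                             uint8_t off_lo, uint8_t off_hi)
{
    unsigned lo    = base_lo + off_lo;         /* available first cycle   */
    unsigned carry = lo >> 8;
    unsigned hi    = base_hi + off_hi + carry; /* next cycle, carry ready */
    return (uint16_t)(((hi & 0xffu) << 8) | (lo & 0xffu));
}

int main(void)
{
    uint16_t ea = ea_lsb_first(0xff, 0x12, 0x01, 0x00); /* 0x12ff + 0x0001 */
    printf("effective address: 0x%04x\n", ea);          /* prints 0x1300   */
    return 0;
}

A big-endian machine with the same narrow bus would receive the high byte
first and would have to hold it until the low byte (and its carry) became
available.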
>Early PDP-11s had a 64KB address space where you typically put the code
>at the bottom, so the model where the stack grows down toward the heap
>was reasonable. Later 11's had a larger physical address space but the
>per process space was still 64K.
The upper 4 kW (8 KiB) was reserved for the I/O page (memory mapped I/O).
At least the RSX-11 linker (TKB) reserved 512 words (or was it 512 bytes)
of stack space in the low addresses. This convention was also used by
Fortran IV and Fortran IV+. Some sissy language implementations such as
Pascal or C might have used the software stack in addresses above the code
(but below the 8 KiB I/O page).

Paul
On Oct 22, 9:12 pm, jo...@iecc.com (John L) wrote:
> >I am pretty sure that this is yet another artifact of the way that
> >DEC was the dominating computer science supplier in the 1970s. Now,
> >why DEC did things the way they did, I don't know.
>
> It was the same time they gave us little-endian addressing. Perhaps
> it was just to do everything sdrawkcab.
Interesting :)
But, I think, in those days IBM used little-endian and DEC used big-endian.
Later came the tie-up of Motorola (big-endian) with Apple and of Intel
(little-endian) with IBM. After that, many other competitors.

Intel's 80x86 processors and their clones - little endian (also called
Intel format).
SPARC, Motorola's 68K, and the PowerPC families - big endian.
Earlier ARM processors (ARM2, ARM3, ARM2aS) - little-endian.
Current generation ARM processors (from ARM6 onwards) - can operate in
either little-endian or big-endian mode.

Karthik Balaguru
In article <1193135352.100503.51110@i13g2000prf.googlegroups.com>,
karthikbalaguru <karthikbalaguru79@gmail.com> writes:
|> 
|> But, I think,  in those days IBM used little-endian. DEC used Big-
|> endian.

IBM was primarily (perhaps entirely - I don't know) big-endian from
the early 1960s onwards.


Regards,
Nick Maclaren.
On Oct 23, 1:48 pm, "robertwess...@yahoo.com"
<robertwess...@yahoo.com> wrote:
> On Oct 23, 1:25 am, karthikbalaguru <karthikbalagur...@gmail.com>
> wrote:
> > Interesting :)
> > 1) The author claims as below in that link -
> > "Wikipedia tells me that most modern OSes grow the stack down which
> > is odd given the security advantages of doing it up."
> > Is that true or some kind of wrong information in internet ?
>
> It's a common misconception that an upwards growing stack is less
> vulnerable to buffer overflow/stack smashing attacks. In one limited
> case that's semi-correct, where the buffer being overflowed is in the
> routine that is doing the overflowing, and there is no active return
> address on the stack after the overflowed buffer. Unfortunately the
> vast majority of real stack smashes use subroutines to do the dirty
> work in a buffer owned by a caller. That just moves the point at
> which the bad return address is used. Consider:
>
> void f(void)
> {
>     char s[4];
>     strcpy(s, "abcdefghjklmnopqrstuvwxyz");
> }
>
> With a typical downward growing stack, the smashed return address will
> be the one from f(). With an upwards growing stack, it'll be the
> return from strcpy() that's altered. Not exactly a huge improvement.
>
> And the stack growth direction doesn't do anything to prevent the
> corruption of any other items in the same stack frame that are placed
> after the buffer in question, or any items following an overflowed
> buffer anywhere in the system. Consider what happens if the string in
> the following structure is overflowed (again, assuming typical layout
> in memory):
>
> struct st {char s[4]; void (*fp)(void);};
>
> In any event, the Wikipedia page on the subject was edited a couple of
> months ago to eliminate the claim.
Check this link. Collected some more information from the internet:
http://diku.edu/hjemmesider/ansatte/torbenm/Basics/basics_a4_11pt.pdf
I got some interesting info from sections 9.8.2, 9.8.3 and 9.8.4.
Interesting !! :):)

9.8.3 Direction of stack-growth and position of FP
----------------------------------------------------------------------------
There is no particular reason why a stack has to grow upwards in memory.
It is, in fact, more common that call stacks grow downwards in memory.
Sometimes the choice is arbitrary, but at other times there is an advantage
to have the stack growing in a particular direction. Some instruction sets
have memory-access instructions that include a constant offset from a
register-based address. If this offset is unsigned (as it is on, e.g., IBM
System/370), it is an advantage that all fields in the activation record
are at non-negative offsets. This means that either FP must point to the
bottom of the frame and the stack grow upwards, or FP must point to the
top of the frame and the stack grow downwards. If, on the other hand,
offsets are signed but have a small range (as on Digital's VAX, where the
range is -128 to +127), it is an advantage to use both positive and
negative offsets. This can be done, as suggested in section 9.8.2, by
placing FP after the parameters but before the rest of the frame, so
parameters are addressed by negative offsets and the rest by positive.
Alternatively, FP can be positioned k bytes above the bottom of the frame,
where k is the largest negative offset.

9.8.2 Variable number of parameters
------------------------------------------------------
Some languages (e.g., C and LISP) allow a function to have a variable
number of parameters. This means that the function can be called with a
different number of parameters at each call. In C, the printf function is
an example of this. The layouts we have shown in this chapter all assume
that there is a fixed number of arguments, so the offsets to, e.g., local
variables are known. If the number of parameters can vary, this is no
longer true. One possible solution is to have two frame pointers: one that
shows the position of the first parameter and one that points to the part
of the frame that comes after the parameters. However, manipulating two
FPs is somewhat costly, so normally another trick is used: the FP points
to the part of the frame that comes after the parameters. Below this, the
parameters are stored at negative offsets from FP, while the other parts
of the frame are accessed with (fixed) positive offsets. The parameters
are stored such that the first parameter is closest to FP and later
parameters further down the stack. This way, parameter number k will be at
a fixed offset (-4*k) from FP. When a function call is made, the number of
arguments to the call is known to the caller, so the offsets (from the old
FP) needed to store the parameters in the new frame will be fixed at this
point. Alternatively, FP can point to the top of the frame and all fields
can be accessed by fixed negative offsets. If this is the case, FP is
sometimes called SP, as it points to the top of the stack.

9.8.4 Register stacks
-----------------------------------
Some processors, e.g., Sun's SPARC and Intel's IA-64, have on-chip stacks
of registers. The intention is that frames are kept in registers rather
than on a stack in memory. At call or return of a function, the register
stack is adjusted.
Since the register stack has a finite size, which is often smaller than
the total size of the call stack, it may overflow. This is trapped by the
operating system, which stores part of the stack in memory and shifts the
rest down (or up) to make room for new elements. If the stack underflows
(at a pop from an empty register stack), the OS will restore earlier saved
parts of the stack.

Thx,
Karthik Balaguru
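As a concrete illustration of the variable-parameter case in section 9.8.2
(this sketch is my own, not from the paper): a C variadic function walks
the caller-supplied arguments through the stdarg machinery, which hides
exactly the kind of fixed-offset arithmetic the section describes.

#include <stdarg.h>
#include <stdio.h>

/* Sum a variable number of int arguments.  The callee needs 'count' to
   know how many were passed, since the frame layout alone cannot say. */
static int sum_ints(int count, ...)
{
    va_list ap;
    int total = 0;

    va_start(ap, count);
    for (int i = 0; i < count; i++)
        total += va_arg(ap, int);   /* step to the next argument slot */
    va_end(ap);

    return total;
}

int main(void)
{
    printf("%d\n", sum_ints(3, 10, 20, 30));  /* prints 60 */
    return 0;
}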
On Oct 23, 3:40 pm, n...@cus.cam.ac.uk (Nick Maclaren) wrote:
> In article <1193135352.100503.51...@i13g2000prf.googlegroups.com>,
> karthikbalaguru <karthikbalagur...@gmail.com> writes:
> |>
> |> But, I think, in those days IBM used little-endian. DEC used Big-
> |> endian.
>
> IBM was primarily (perhaps entirely - I don't know) big-endian from
> the early 1960s onwards.
The 370 series (IBM System/370) of computers was a 32-bit big endian style
mainframe architecture, as compared with little endian architectures such
as the x86 series of 32-bit microprocessors.
Refer -> http://en.wikipedia.org/wiki/System/370

Karthik Balaguru
On Oct 23, 3:40 pm, n...@cus.cam.ac.uk (Nick Maclaren) wrote:
> In article <1193135352.100503.51...@i13g2000prf.googlegroups.com>,
> karthikbalaguru <karthikbalagur...@gmail.com> writes:
> |>
> |> But, I think, in those days IBM used little-endian. DEC used Big-
> |> endian.
>
> IBM was primarily (perhaps entirely - I don't know) big-endian from
> the early 1960s onwards.
>
> Regards,
> Nick Maclaren.
http://www.intel.com/design/intarch/papers/endian.pdf
-> Refer the section "Merits of Endian Architectures" and the table for
clarifications w.r.t. endianness:

DEC Alpha*                           - Little-Endian
Intel® 80x86                         - Little-Endian
ARM*                                 - Bi-Endian
HP PA-RISC 8000*                     - Bi-Endian
IBM PowerPC*                         - Bi-Endian
Intel® IXP network processors        - Bi-Endian
Intel® Itanium® processor family     - Bi-Endian
Java Virtual Machine*                - Big-Endian
MIPS*                                - Bi-Endian
Motorola 68k*                        - Big-Endian
Sun SPARC*                           - Big-Endian

Also, DEC Alpha computers are configurable for big endian or little endian
operation (that is, bi-endian, as told in Wikipedia).

Thx,
Karthik Balaguru
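Since several of the cores in that table are bi-endian, a program
sometimes has to discover at run time which byte order it is actually
executing with. A minimal sketch (my own illustration, not from the Intel
paper) that inspects the in-memory byte order of a known 32-bit value:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Store a known 32-bit pattern and examine its lowest-addressed byte:
       0x44 means the least significant byte comes first (little-endian),
       0x11 means the most significant byte comes first (big-endian). */
    uint32_t probe = 0x11223344;
    unsigned char first = *(unsigned char *)&probe;

    if (first == 0x44)
        puts("running little-endian");
    else if (first == 0x11)
        puts("running big-endian");
    else
        puts("unusual byte order");
    return 0;
}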
