Reply by Chris Carlen September 16, 2004
Brad Griffis wrote:
> Chris,
>
> I think you're spending way too much time thinking about this and not really going to see any benefit in the end from it.  The 28xx instruction set is composed of a mixture of 16- and 32-bit instructions.  Specifically, about 80% of the instructions are 16-bit and 20% are 32-bit.  Both the 16- and 32-bit instructions execute in 1 cycle.

Yes, once they are in the pipeline, I suppose.  What I still don't quite understand is that the program-read data bus is 32 bits wide, but the memory is 16 bits wide.  Is it just that the smallest addressable unit is 16 bits, but the CPU can also fetch a 32-bit chunk in one cycle?  If so, then 32-bit instructions can execute in 1 cycle, yes.

Now, if most of the addressing modes used involve 32-bit instructions, then yes, there is little to be gained by fussing over short pointers.

> If you think about it, you will need a 32-bit instruction whether you're using the small or large memory model.  Plus it will execute in the same amount of time.  Therefore you're not seeing a speed or memory advantage to the small memory model in terms of the instruction set.  I think the only place you get savings is when you declare a pointer (i.e. short vs long).  I think this whole memory model thing is implemented as more of an ANSI compliance thing rather than being some kind of great optimization.
>
> Also, I think you still have a misconception about the memory in general on the 28xx.  The 28xx is a unified memory map.  You talked about _c_int00 not being able to access the .cinit section because it is in program space.  It does not matter whether .cinit is in program space or data space.  IT IS ALL THE SAME MEMORY AND ALL MEMORY IS ACCESSIBLE FROM BOTH BUSES.

I understand that.
> So long story short, save yourself a headache and just set it to large memory model and be done with it!

Yes.  Thanks for the input.

Good day!

--
_______________________________________________________________________
Christopher R. Carlen
Principal Laser/Optical Technologist
Sandia National Laboratories CA USA
crcarle@sandia.gov -- NOTE: Remove "BOGUS" from email address to reply.
Reply by Brad Griffis September 15, 2004
Chris,

I think you're spending way too much time thinking about this and not really 
going to see any benefit in the end from it.  The 28xx instruction set is 
composed of a mixture of 16- and 32-bit instructions.  Specifically, about 
80% of the instructions are 16-bit and 20% are 32-bit.  Both the 16- and 
32-bit instructions execute in 1 cycle.  If you think about it, you will 
need a 32-bit instruction whether you're using the small or large memory 
model.  Plus it will execute in the same amount of time.  Therefore you're 
not seeing a speed or memory advantage to the small memory model in terms of 
the instruction set.  I think the only place you get savings is when you 
declare a pointer (i.e. short vs long).  I think this whole memory model 
thing is implemented as more of an ANSI compliance thing rather than being 
some kind of great optimization.
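
To make the pointer point concrete, here is a minimal C sketch (illustrative only).  On the 28xx a char is 16 bits, so sizeof() counts 16-bit words, and the data pointer is the one object whose size should change with the memory model:

    /* Illustrative sketch: the data pointer is where the memory model
     * shows up.  sizeof() values are in 16-bit words on the C28x.    */
    #include <stdio.h>

    int  scalar;          /* 1 word in either memory model               */
    int *ptr = &scalar;   /* small model: 16-bit address -> 1 word       */
                          /* large model (-ml): 22-bit address kept in a */
                          /* 32-bit container -> 2 words                 */

    int main(void)
    {
        printf("sizeof(int)   = %u\n", (unsigned)sizeof(int));
        printf("sizeof(int *) = %u\n", (unsigned)sizeof(int *));
        return 0;
    }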

Also, I think you still have a misconception about the memory in general on 
the 28xx.  The 28xx is a unified memory map.  You talked about _c_int00 not 
being able to access the .cinit section because it is in program space.  It 
does not matter whether .cinit is in program space or data space.  IT IS ALL 
THE SAME MEMORY AND ALL MEMORY IS ACCESSIBLE FROM BOTH BUSES.
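
To illustrate (a sketch only; the section name "flash_consts" and the table are made up, and the large memory model is assumed so no far qualifier is needed), a constant table the linker places in "program" memory can be read with an ordinary C pointer:

    /* Sketch: because the 28xx memory map is unified, an ordinary data
     * pointer can read a table the linker placed in what the tools call
     * program memory; no PREAD is needed from C.  The section name is
     * made up and would need a home in the linker command file.        */
    #pragma DATA_SECTION(sine_table, "flash_consts")
    const int sine_table[4] = { 0, 100, 200, 300 };

    int read_entry(unsigned int i)
    {
        const int *p = sine_table;   /* 22-bit pointer in the large model */
        return p[i];
    }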

So long story short, save yourself a headache and just set it to large 
memory model and be done with it!

Regards,

Brad

"Chris Carlen" <crcarle@BOGUS.sandia.gov> wrote in message 
news:ci738a02583@news4.newsguy.com...
> Brad Griffis wrote:
>> Chris,
>>
>> If you use the small memory model all data addresses will be 16-bit.  If you use the large memory model all data addresses will be 22-bit.  If all of your code/data fits in the low 64k of memory then you can simply use the small memory model.  This does not have anything to do with accesses to program or data space, only the size of the data pointers.
>
> Hi Brad, thanks for the reply.  This is about how I figured it.  I understand all memory objects are 16 bits and that with large-model 22 bit pointers are used vs. 16 bits for small-model.
>
> That leads me to think that for any instructions which load/store pointers to/from RAM or literal pointers stored in the instructions themselves, that the large model would lead to two memory accesses vs. one for small.  Thus, small-model code should be faster and more compact.
>
> The problem is that I am unsure what the implications are regarding the autoinitialization of C variables.  Usually (as I understand things) the _c_int00 code which executes before main() copies all pre-initialized variable and constant values from the program space to the data memory sections allocated to contain them.  It would seem then that using the large model, the code to do this could use ordinary load/store operations.
>
> However, if using the small model, then the variable initialization tables (which are in program memory) are outside the scope of the small model's 16 bit pointers.  Thus, the _c_int00 code would have to use PREAD instructions to move the initial values to the intended data space locations.  Perhaps in fact the _c_int00 code always uses PREAD, so it just doesn't care what model you use.  I don't know, I haven't attempted to decipher _c_int00 yet.
>
> The sticky point arises in the following scenario:  Say you want to benefit from the small model's more efficient code, but also wanted to optimize memory usage by keeping constant data values in program memory rather than wasting data RAM space with them.
>
> At this point my understanding of how all the various memory sections work breaks down.  I am not sure what really happens with constant data, such as strings and constant data lookup tables.  It would make sense to keep them in program memory, if RAM is at a premium.  They could still be accessed in the small model using PREAD (I think) even though they are outside the scope of normal small model load/store instructions.
>
> My question is thus, is the compiler smart enough to know how to do this?  It would seem unlikely since the final location of a memory section, say for constant data, is decided by the linker.  But the code needed to access it would be drastically different if using the small model AND attempting to keep it in program memory, compared to either large or small models if all data is first copied to data RAM by _c_int00.
>
> I would therefore suspect that it is not possible to do what I am describing.
>
>> I usually just code with large memory model so I don't have to worry about it.  However, you could just as easily start coding with the small memory model if you know everything will fit into the lower 64k of memory.  If you ever get any errors to the effect of "data not within reach" then you'll know something isn't in the lower 64k and you'll need to either use the far keyword or switch to large memory model.
>
> Yes.
>
> Good day!
>
> --
> _______________________________________________________________________
> Christopher R. Carlen
> Principal Laser/Optical Technologist
> Sandia National Laboratories CA USA
> crcarle@sandia.gov -- NOTE: Remove "BOGUS" from email address to reply.
Reply by Chris Carlen September 14, 2004
Brad Griffis wrote:
> Chris,
>
> If you use the small memory model all data addresses will be 16-bit.  If you use the large memory model all data addresses will be 22-bit.  If all of your code/data fits in the low 64k of memory then you can simply use the small memory model.  This does not have anything to do with accesses to program or data space, only the size of the data pointers.

Hi Brad, thanks for the reply.  This is about how I figured it.  I understand all memory objects are 16 bits and that with large-model 22 bit pointers are used vs. 16 bits for small-model.

That leads me to think that for any instructions which load/store pointers to/from RAM or literal pointers stored in the instructions themselves, that the large model would lead to two memory accesses vs. one for small.  Thus, small-model code should be faster and more compact.

The problem is that I am unsure what the implications are regarding the autoinitialization of C variables.  Usually (as I understand things) the _c_int00 code which executes before main() copies all pre-initialized variable and constant values from the program space to the data memory sections allocated to contain them.  It would seem then that using the large model, the code to do this could use ordinary load/store operations.

However, if using the small model, then the variable initialization tables (which are in program memory) are outside the scope of the small model's 16 bit pointers.  Thus, the _c_int00 code would have to use PREAD instructions to move the initial values to the intended data space locations.  Perhaps in fact the _c_int00 code always uses PREAD, so it just doesn't care what model you use.  I don't know, I haven't attempted to decipher _c_int00 yet.

The sticky point arises in the following scenario:  Say you want to benefit from the small model's more efficient code, but also wanted to optimize memory usage by keeping constant data values in program memory rather than wasting data RAM space with them.

At this point my understanding of how all the various memory sections work breaks down.  I am not sure what really happens with constant data, such as strings and constant data lookup tables.  It would make sense to keep them in program memory, if RAM is at a premium.  They could still be accessed in the small model using PREAD (I think) even though they are outside the scope of normal small model load/store instructions.

My question is thus, is the compiler smart enough to know how to do this?  It would seem unlikely since the final location of a memory section, say for constant data, is decided by the linker.  But the code needed to access it would be drastically different if using the small model AND attempting to keep it in program memory, compared to either large or small models if all data is first copied to data RAM by _c_int00.

I would therefore suspect that it is not possible to do what I am describing.
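
To picture the copy step being described, here is a purely conceptual sketch.  This is not TI's actual .cinit record layout or the real _c_int00 code (I haven't deciphered that), just the idea: records kept with the program image tell the startup code where each initialized variable lives in RAM and what its initial words are.

    /* Conceptual sketch only -- not the real .cinit format or _c_int00
     * source.  Each record names a destination in data RAM and the
     * initial 16-bit words to copy there before main() runs.          */
    typedef struct {
        unsigned int        length;   /* number of 16-bit words to copy */
        unsigned int       *dest;     /* RAM address of the variable    */
        const unsigned int *src;      /* initial values kept with code  */
    } init_record;

    static void run_autoinit(const init_record *table, unsigned int n)
    {
        unsigned int i, w;
        for (i = 0; i < n; i++)
            for (w = 0; w < table[i].length; w++)
                table[i].dest[w] = table[i].src[w];
    }
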
> I usually just code with large memory model so I don't have to worry about it.  However, you could just as easily start coding with the small memory model if you know everything will fit into the lower 64k of memory.  If you ever get any errors to the effect of "data not within reach" then you'll know something isn't in the lower 64k and you'll need to either use the far keyword or switch to large memory model.

Yes.

Good day!

--
_______________________________________________________________________
Christopher R. Carlen
Principal Laser/Optical Technologist
Sandia National Laboratories CA USA
crcarle@sandia.gov -- NOTE: Remove "BOGUS" from email address to reply.
Reply by Brad Griffis September 13, 2004
Chris,

If you use the small memory model all data addresses will be 16-bit.  If you 
use the large memory model all data addresses will be 22-bit.  If all of 
your code/data fits in the low 64k of memory then you can simply use the 
small memory model.  This does not have anything to do with accesses to 
program or data space, only the size of the data pointers.

I usually just code with large memory model so I don't have to worry about 
it.  However, you could just as easily start coding with the small memory 
model if you know everything will fit into the lower 64k of memory.  If you 
ever get any errors to the effect of "data not within reach" then you'll 
know something isn't in the lower 64k and you'll need to either use the far 
keyword or switch to large memory model.
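
For example (a sketch only; the section name "far_consts" and the array are made up), a small-memory-model build can still reach a table placed above the first 64k by declaring it with the far keyword:

    /* Sketch: with the small memory model, data declared far still gets
     * a full 22-bit address, so it can live above 64k.  The section name
     * is made up and would need a home in the linker command file.      */
    #pragma DATA_SECTION(lookup, "far_consts")
    far const int lookup[1024] = { 1, 2, 3 };   /* remaining entries zero */

    int first_entry(void)
    {
        far const int *p = lookup;   /* far pointer even in small model */
        return *p;
    }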

Brad

"Chris Carlen" <crcarle@BOGUS.sandia.gov> wrote in message 
news:ci54se026kn@news3.newsguy.com...
> Hi:
>
> I am beginning work with the TMS320F2812 on the eZdsp platform.  Most of the examples are done with the -ml compiler option.  Since the memory is "unified" meaning that program and data memory are in the same address space even if read by different busses, what is the point of using the large model IF one has no need to read/write data from program memory?
>
> (Note I will be using C only, no C++)
>
> I mean, if I make M0/M1 and L0/L1 RAMs data spaces, they are all within 64k, so what is the need for far pointers?
>
> Can't I still access the program memory in the small model using PREAD?
>
> Or does this cause headaches for the compiler in that it can't store constants in the data space?  How does it initialize the initialized data sections when in the small model?  It must use PREAD, right?
>
> What might be the pros/cons to using small vs. large models?  Are there code size and execution speed benefits to using the small model, for instance in allowing only single word absolute address references in instructions rather than needing two words for 22 bit references?
>
> Thanks for comments.
>
> Good day!
>
> --
> _______________________________________________________________________
> Christopher R. Carlen
> Principal Laser/Optical Technologist
> Sandia National Laboratories CA USA
> crcarle@sandia.gov -- NOTE: Remove "BOGUS" from email address to reply.
Reply by Chris Carlen September 13, 2004
Hi:

I am beginning work with the TMS320F2812 on the eZdsp platform.  Most of 
the examples are done with the -ml compiler option.  Since the memory is 
"unified" meaning that program and data memory are in the same address 
space even if read by different busses, what is the point of using the 
large model IF one has no need to read/write data from program memory?

(Note I will be using C only, no C++)

I mean, if I make M0/M1 and L0/L1 RAMs data spaces, they are all within 
64k, so what is the need for far pointers?

Can't I still access the program memory in the small model using PREAD?

Or does this cause headaches for the compiler in that it can't store 
constants in the data space?  How does it initialize the initialized 
data sections when in the small model?  It must use PREAD, right?

What might be the pros/cons to using small vs. large models?  Are there 
code size and execution speed benefits to using the small model, for 
instance in allowing only single word absolute address references in 
instructions rather than needing two words for 22 bit references?

Thanks for comments.


Good day!


-- 
_______________________________________________________________________
Christopher R. Carlen
Principal Laser/Optical Technologist
Sandia National Laboratories CA USA
crcarle@sandia.gov -- NOTE: Remove "BOGUS" from email address to reply.