EmbeddedRelated.com
Forums

Vary size of char from 1 to 2

Started by RahulS January 10, 2006
RahulS wrote:
> I am using M68AW512M RAM. It has 2 pins to read/write the upper or
> lower byte. When I try to write a lower byte (char) by enabling the
> lower byte pin, it writes that to both the lower and the higher byte,
> and vice versa.
> When I try to write a single word, it writes correctly to both the
> lower and upper bytes.
> The problem occurs only when I try writing a byte, in other words a
> 'char'. That is why I was trying to change the size of char from one
> byte to 2 bytes.
>
> Can you suggest some other way to overcome this problem?
> Thanks
> RahulS
If you *must* use google groups, please learn to use it properly so that
context is included in your replies.

As for your problem, it's a hardware issue, and depends greatly on the
particular microcontroller connected to the RAM. Microcontrollers which
directly access 16-bit (or greater) buses always have some sort of byte
select or byte mask pins that are used for precisely this purpose: they
direct the RAM to store only into the correct bytes.
On 2006-01-11, RahulS <rahuls@kpitcummins.com> wrote:

> I am using M68AW512M RAM. It has 2 pins to read/write the upper or
> lower byte. When I try to write a lower byte (char) by enabling the
> lower byte pin, it writes that to both the lower and the higher byte,
> and vice versa.
Your hardware is broken.
> When I try to write a single word, it writes correctly to both the
> lower and upper bytes.
> The problem occurs only when I try writing a byte, in other words a
> 'char'. That is why I was trying to change the size of char from one
> byte to 2 bytes.
>
> Can you suggest some other way to overcome this problem?
I suggest you fix your hardware.

-- 
Grant Edwards                   grante             Yow!  I love FRUIT
                                  at               PICKERS!!
                               visi.com
On 2006-01-11, David Brown <david@westcontrol.removethisbit.com> wrote:

> It's an extra thing to keep in mind. For many tasks, it makes
> little difference, but when trying to manipulate
> character-based telegrams using the least possible RAM space
> it is very much in the way. It certainly leads to unportable
> code (you don't want code filled with byte extraction
> functions if you don't need it!).
It would be easier if the pre-processor knew the sizes of types. That way the byte-extraction code could be automagically included or not. Since that's not the case, using macros that are conditional on the target architecture works pretty well.
>>> It's not something one would choose voluntarily.
>>
>> Writing C is not something one would choose voluntarily. ;)
-- 
Grant Edwards                   grante             Yow!  Look!!  Karl
                                  at               Malden!
                               visi.com
Grant Edwards <grante@visi.com> wrote:

> It would be easier if the pre-processor knew the sizes of
> types. That way the byte-extraction code could be
> automagically included or not. Since that's not the case,
Nonsense. Of course it knows! <limits.h> is guaranteed to be available
in every C compiler rightfully bearing that name, and it contains all
the information needed in a form the preprocessor can access. For
starters, all you need is a single

  #include <limits.h>

  #if CHAR_BIT > 8
  # error Arghhh!
  #endif

in the right place of the source.

-- 
Hans-Bernhard Broeker (broeker@physik.rwth-aachen.de)
Even if all the snow were burnt, ashes would remain.
On 2006-01-11, Hans-Bernhard Broeker <broeker@physik.rwth-aachen.de> wrote:
> Grant Edwards <grante@visi.com> wrote:
>
>> It would be easier if the pre-processor knew the sizes of
>> types. That way the byte-extraction code could be
>> automagically included or not. Since that's not the case,
>
> Nonsense. Of course it knows! <limits.h> is guaranteed to be
> available in every C compiler rightfully bearing that name,
You're right. I forgot about that. [However, this being c.a.e, a lot
of people are stuck using C compilers not rightfully bearing that
name.]

-- 
Grant Edwards                   grante             Yow!  I always liked
                                  at               FLAG DAY!!
                               visi.com
Hans-Bernhard Broeker wrote:
> Grant Edwards <grante@visi.com> wrote:
>
>> It would be easier if the pre-processor knew the sizes of
>> types. That way the byte-extraction code could be
>> automagically included or not. Since that's not the case,
>
> Nonsense. Of course it knows! <limits.h> is guaranteed to be
> available in every C compiler rightfully bearing that name, and it
> contains all the information needed in a form the preprocessor can
> access. For starters, all you need is a single
>
>   #include <limits.h>
>
>   #if CHAR_BIT > 8
>   # error Arghhh!
>   #endif
>
> in the right place of the source.
Yes, theoretically you can use this sort of pre-processor stuff to
write "portable" code that takes into account the width of a char. But
you cannot write good, clean, legible code (for things like
character-based messages) that compiles efficiently and works well on
8-bit char and 16-bit char (and 32-bit char) architectures. limits.h
and pre-processing directives can give you efficient code, or legible
code, but not both.

Having said that, I've written programs on 16-bit char targets, and
they work. It's just another inconvenience to deal with.
