EmbeddedRelated.com
Forums

Endianness does not apply to byte

Started by karthikbg November 17, 2006
CBFalconer wrote:

> > I built a machine in 1965 that used bit serial arithmetic, coupled
> > with excess-3 decimal, and 9 significand digits floating point.
> > The logic was DTL. The use of excess-3 and 9's complement
> > arithmetic simplified display of negative values. See:
> >
> > <http://cbfalconer.home.att.net/firstpc/>
That's a _beautiful_ machine!

Did you ever have access to the Wang LOCI which would have been
one of the closest competitors at that time?

Regards,
Michael
msg _at_ cybertheque _dot_ org
msg wrote:
> CBFalconer wrote:
>
>> I built a machine in 1965 that used bit serial arithmetic, coupled
>> with excess-3 decimal, and 9 significand digits floating point.
>> The logic was DTL. The use of excess-3 and 9's complement
>> arithmetic simplified display of negative values. See:
>>
>> <http://cbfalconer.home.att.net/firstpc/>
>
> That's a _beautiful_ machine!
>
> Did you ever have access to the Wang LOCI which would have been
> one of the closest competitors at that time?
Thanks. The Wang came out slightly later. It was very fast at
multiplications, due to the fundamental use of logarithms, with
attendant inaccuracies. I never did figure out just what they did.

They had significantly more financing than we did; we started with a
pool of about $25,000 and borrowed our way up to about $100k. Our
biggest mistake was the diodes, which decayed to less than 10 V
inverse on the shelf, although rated for 25 V. Just one of three
cracks at becoming a millionaire, none of which succeeded :-(

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>
On Fri, 17 Nov 2006 02:37:43 -0800, karthikbg wrote:

You are mixing up endianness with the way numbers are written.

With an x-digit number, the digits to the left always have a larger
weighting than the digits to the right, irrespective of how you write
down the number (i.e. in decimal, hexadecimal, or binary).

Why is the leftmost digit the highest weighting and not the lowest? Well,
you would have to ask _____________. I don't know enough about the history
of numbers to fill in that last blank! But I am sure we have a lot of
years of convention here.

Have fun.

Paul.

Grant Edwards <grante@visi.com> writes:
> On 2006-11-17, DJ Delorie <dj@delorie.com> wrote:
> > "karthikbg" <karthik.balaguru@lntinfotech.com> writes:
> >> Why did Little-Endian come ?
> >
> > My guess is compatibility with previous processors. I.e. emulating
> > an 8 bit cpu on a 16 bit LE cpu might be easier than on a 16 bit BE
> > cpu.
>
> Little-endian can be a bit simpler for the hardware design
> and/or compiler since the address you put on the bus when you
> read a variable doesn't change depending on the destination
> type.
>
> If variable X is a 32-bit integer at address 0x1234, reading it
> as a long, char or short always generates address 0x1234. For a
> big-endian machine, the address changes depending on how you
> want to read the variable. Reading it as a char generates
> address 0x1237. Reading it as a short generates 0x1236.
Your example doesn't make any sense. Address generation for your example would be made at compile/assembly time and wouldn't require any extra logic at run time.
> Not a huge deal, but back in the day gates were more expensive.
--
"Big-Endian byte order is known as 'normal', 'intuitive', or
'obvious'. Little-Endian is sometimes called 'perverse',
'annoying', 'dysfunctional', or 'stupid'. These designations
do not, of course, imply any bias or preference."
        -- Christopher R. Hertel, "Implementing CIFS" 2004
Paul Keinanen wrote:
> There seems to be various Turing machine emulators, but are there any
> hardware implementations ?
A Turing machine is a concept, not an actual machine. It consists of a
finite state machine (FSM) that controls a head that can read and write
a tape and move it left and right to the next bit of data on the tape.
To implement it in hardware would require a definition of what the FSM
does, and that is application dependent. So Turing machines are a class
of computer, not any particular computer.

Emulating it in software is easy since you can change the FSM at will.
Building it in hardware would require a definition of the FSM, or at
least a limitation on the complexity of the FSM, in which case it is no
longer a Turing machine. Oh, and you would have to have an infinitely
long tape (memory) too...

I guess you could always use a microprocessor for the FSM and write
software for it. :^)
rickman wrote:
> Grant Edwards wrote:
>> On 2006-11-17, DJ Delorie <dj@delorie.com> wrote:
>>> "karthikbg" <karthik.balaguru@lntinfotech.com> writes:
>>>> Why did Little-Endian come ?
>>> My guess is compatibility with previous processors. I.e. emulating
>>> an 8 bit cpu on a 16 bit LE cpu might be easier than on a 16 bit BE
>>> cpu.
>> Little-endian can be a bit simpler for the hardware design
>> and/or compiler since the address you put on the bus when you
>> read a variable doesn't change depending on the destination
>> type.
>>
>> If variable X is a 32-bit integer at address 0x1234, reading it
>> as a long, char or short always generates address 0x1234. For a
>> big-endian machine, the address changes depending on how you
>> want to read the variable. Reading it as a char generates
>> address 0x1237. Reading it as a short generates 0x1236.
>>
>> Not a huge deal, but back in the day gates were more expensive.
>
> I can't say I understand. If you have a 32 bit integer, in what context
> would it be permissible to read it as a char? Why would this be a
> function of the hardware rather than the software? Maybe I have been
> working with MCUs too long, but I am missing this.
>
> My understanding is that little and big endian-ness came about the same
> way that msb and lsb bit numbering came about. Two different companies
> had different ideas of which was better. If I am not mistaken, this is
> still a topic of some debate. Personally I prefer to order the bytes
> with the ls byte first, followed by the ms byte and followed by all
> intermediate bytes. This way you only need to increment by 1 to
> address the sign bit vs addressing the low byte. Sounds pretty
> optimal, no?
Back when code was done in ASM and silicon was expensive and slow, the rules were different.
> "Big-Endian byte order is known as 'normal', 'intuitive', or
> 'obvious'. Little-Endian is sometimes called 'perverse',
> 'annoying', 'dysfunctional', or 'stupid'. These designations
> do not, of course, imply any bias or preference."

I am sooooo confused now! Why would Little-Endian be perverse,
annoying & such?!?!?!

Depending on the endianness, 1234 = 4321? omg!!!! (and that is in
decimal notation, btw).

LOL

Everett M. Greene wrote:
> Grant Edwards <grante@visi.com> writes:
> > On 2006-11-17, DJ Delorie <dj@delorie.com> wrote:
> > > "karthikbg" <karthik.balaguru@lntinfotech.com> writes:
> > >> Why did Little-Endian come ?
> > >
> > > My guess is compatibility with previous processors. I.e. emulating
> > > an 8 bit cpu on a 16 bit LE cpu might be easier than on a 16 bit BE
> > > cpu.
> >
> > Little-endian can be a bit simpler for the hardware design
> > and/or compiler since the address you put on the bus when you
> > read a variable doesn't change depending on the destination
> > type.
> >
> > If variable X is a 32-bit integer at address 0x1234, reading it
> > as a long, char or short always generates address 0x1234. For a
> > big-endian machine, the address changes depending on how you
> > want to read the variable. Reading it as a char generates
> > address 0x1237. Reading it as a short generates 0x1236.
>
> Your example doesn't make any sense. Address generation
> for your example would be made at compile/assembly time
> and wouldn't require any extra logic at run time.
>
> > Not a huge deal, but back in the day gates were more expensive.
>
> --
> "Big-Endian byte order is known as 'normal', 'intuitive', or
> 'obvious'. Little-Endian is sometimes called 'perverse',
> 'annoying', 'dysfunctional', or 'stupid'. These designations
> do not, of course, imply any bias or preference."
>         -- Christopher R. Hertel, "Implementing CIFS" 2004
rickman wrote:

> A Turing machine is a concept, not an actual machine. It consists of a
> finite state machine (FSM) that controls a head that can read and write
> a tape and move it left and right to the next bit of data on the tape.
> To implement it in hardware would require a definition of what the FSM
> does and that is application dependant. So Turing machines are a class
> of computer, not any particular computer.
>
> Emulating it in software is easy since you can change the FSM at will.
> Building it in hardware would require a definition of the FSM or at
> least a limitation on the complexity of the FSM in which case it is no
> longer a Turing machine. Oh, and you would have to have an infinitely
> long tape (memory) too...
>
> I guess you could always use a microprocessor for the FSM and write
> software for it. :^)
It is a concept, but it can be turned into an actual machine, as long as
you limit the tape to some finite length. It then becomes computationally
equivalent to any physical computer.

As far as the FSM, you could implement a Universal Turing Machine. It
would use an interpreted "language" with instructions that it reads from
the tape itself. That way, the FSM can remain constant. For a binary
tape, the smallest known design uses 24 states according to
http://en.wikipedia.org/wiki/Universal_Turing_machine

For a hardware implementation, you would need something equivalent to a
tape with bits that allows you to read the current bit, overwrite the
current bit, and move the tape forward/backward by one step. To control
the tape, you'll need 5 flipflops for the 24 states, and a small ROM for
the state machine. The ROM would map from { current state, read bit } ->
{ next state, write bit, tape direction }.

To perform a computation, you put the input data and instructions on the
tape, turn the machine on, and it would put the output data somewhere
else on the tape. Of course, the efficiency would be totally horrible. :)
On Fri, 17 Nov 2006 13:53:26 -0800, Darin Johnson wrote:

> Grant Edwards wrote:
>> Is that convention visible to the programmer in any way? IOW,
>> at the assembly language level are there instructions that use
>> an integer "bit index" as an operand?
>
> Not really on the PowerPC. The bit numbering is just a convention,
> invisible to the programmer.
>
> However, there are rotate and mask instructions on the PowerPC
> that can sort of act like bit operations. But they take bit counts
> and not indexes.
AFAIK the mask in rlwinm instructions requires the begin and end mask
bit indices to be specified.

Rob
"Grant Edwards" wrote:
> [1] There are some processors that can address bits within a
> register with certain instructions. All the ones I've seen
> call the LSB bit "0". I've never seen such bit-addressing
> made visible in a high-level language -- except possibly
> PL/M-51 from Intel (I never actually wrote in PL/M-51, and
> have rather vague memories of it). The 8051 had a nifty
> feature where there was a block of memory that was
> bit-addressable. I don't remember if the bit addressing
> was big-endian or little-endian
Well, here is your chance to see a reversed databus! It is the (old)
TMS99xx series. I remember using its IEEE488 controller TMS9914, and it
had D0=MSB...D7=LSB, and I didn't notice. Of course the board had the
databus upside down. I rewrote the driver to redefine the control /
status registers and to exchange the data bits. It worked after that,
just a bit slower.

See the datasheet of a compatible chip here, page 8:
http://www.team-solutions.com/Products/Misc/iGPIB72010/hw4882.pdf

BTW: the 8051 bit addressing is little-endian; bit 0 in a byte is
D0 = LSB, and the instruction carries '000' in the field used to
select that bit.

Regards,
Arie de Muijnck