Grant Edwards wrote:
> Is that convention visible to the programmer in any way? IOW,
> at the assembly language level are there instructions that use
> an integer "bit index" as an operand?

Not really on the PowerPC. The bit numbering is just a convention, invisible to the programmer. However, there are rotate and mask instructions on the PowerPC that can sort of act like bit operations. But they take bit counts and not indexes. To further obfuscate this, there are "simplified mnemonics" (i.e., macros) built on these rotate and mask instructions that take bit positions. These happen to be the same as the Motorola bit numbering. But I can't imagine anyone using these mnemonics without also knowing what the conventions are.

-- Darin Johnson
Endianness does not apply to byte
Started by ●November 17, 2006
Reply by ●November 17, 2006
Reply by ●November 17, 2006
karthikbg wrote:
> It is really strange that Endianness is not dependent on the Bit_Order
> but only on the Byte_order.

Computers can address bytes directly. Very very few computers have been built that allow addressing of individual bits. Endianness is all about how data is stored in *memory*. It has little to do with how the data is used internally on a computer.

Now when you look at the lines coming out of a processor, the data lines may be numbered D0 up to D7 (or even D15 or D31). This can be wired to memory in any order you want. You can have D0 be the most significant bit or the least significant bit. It does not matter at all. This is because when you read the data back, you get the bits back in exactly the same order that they were written. Even if you shuffled all the pins around on purpose, the bits would be unshuffled when you read them back.

If you have more than one processor sharing the memory, or other devices, then you have to make sure that the bits line up correctly (i.e., you can't shuffle the bits in different ways). It doesn't matter though what the order is, as long as the most significant bit connects to the other most significant bits on the other processors, and so forth. Different processors may name these pins different things, but that does not matter. As long as the pins are wired up correctly the names can be anything you want.

-- Darin Johnson
Reply by ●November 17, 2006
Everett M. Greene wrote:
> As for bit-numbering, IBM (once?) used a fractional
> notation for binary values instead of the integer notation
> used by everyone else.

IIRC, the LGP-30 (and probably the RPC-4000) also used fractional notation and it was quite natural once learned. Circuit design and efficiency made it a logical choice.

Regards, Michael
Reply by ●November 17, 2006
rickman wrote:
> There have been a few machines that were addressable at the bit level,
> but none have been popular and mostly they have had little impact on
> computing.

Responding to this statement would be the start of a wonderful thread in another newsgroup and deserves argument; however, I submit that the architectures with the greatest 'impact on computing' in the last twenty years have done so not on their technical merits.

Regards, Michael
Reply by ●November 17, 2006
Grant Edwards wrote:
> On 2006-11-17, rickman <gnuarm@gmail.com> wrote:
>
> >> If variable X is a 32-bit integer at address 0x1234, reading
> >> it as a long, char or short always generates address 0x1234.
> >> For a big-endian machine, the address changes depending on how
> >> you want to read the variable. Reading it as a char generates
> >> address 0x1237. Reading it as a short generates 0x1236.
> >>
> >> Not a huge deal, but back in the day gates were more expensive.
> >
> > I can't say I understand. If you have a 32 bit integer, in what
> > context would it be permissible to read it as a char?
>
> What do you mean "permissible"? In C the following is
> permissible:
>
>     char x;
>     long y;
>
>     [...]
>
>     x = y;
>
> If y is static, then the address to be read for the assignment
> statement can be calculated at compile time, so it doesn't
> really matter. If y is being accessed indirectly, then the
> address must be offset by 3 at run-time:
>
>     char x;
>     long *yp;
>
>     x = *yp;
>
> The assignment statement above must generate a read of address
> (yp+3) for big-endian machines. On some CPUs, that would
> require an extra instruction or two compared with generating a
> read of address (yp).

Or it can just read the variable y as a long and then store the data to x as a char. I don't recall the assumption that C makes when you do an assignment like this, but it does not *require* that you read y as a char.

> > Why would this be a function of the hardware rather than the
> > software?
>
> It usually isn't -- which is why I said little endian can be
> simpler for the hardware or the compiler.

I still don't follow how it is simpler.
Reply by ●November 18, 2006
On Fri, 17 Nov 2006 16:39:01 -0500, CBFalconer <cbfalconer@yahoo.com> wrote:

>Paul Keinanen wrote:
>>
>... snip ...
>>
>> However, the last time I have seen a truly serial computer was the
>> HP-35 scientific pocket calculator in the early 1970's, while other
>> pocket calculators used bit parallel but decimal (BCD) digit serial
>> architectures.
>
>I built a machine in 1965 that used bit serial arithmetic, coupled
>with excess-3 decimal, and 9 significand digits floating point.
>The logic was DTL. The use of excess-3 and 9's complement
>arithmetic simplified display of negative values. See:
>
> <http://cbfalconer.home.att.net/firstpc/>

Impressive!

How many gates/chips were required for this?

Did the 1965 DTL series contain only NAND/NOR gates or did flip-flops or XOR gates exist as single packages? At least the DTL families available in Europe (mainly Philips) were not very sophisticated even in 1968.

As a related issue, what would be the simplest hardware (in the number of gates/transistors/tubes etc.) that could support a high level language and thus be able to execute any program, if the execution time is not an issue?

No doubt this would have to be a bit serial machine and the compiler would have to generate a large number of (micro)instructions for each high level language statement. Early computers had about 1000 tubes, but could it be doable with a smaller number of active elements?

There seems to be various Turing machine emulators, but are there any hardware implementations?

Paul
Reply by ●November 18, 2006
Paul Keinanen wrote:
> CBFalconer <cbfalconer@yahoo.com> wrote:
>> Paul Keinanen wrote:
>>>
>> ... snip ...
>>>
>>> However, the last time I have seen a truly serial computer was the
>>> HP-35 scientific pocket calculator in the early 1970's, while other
>>> pocket calculators used bit parallel but decimal (BCD) digit serial
>>> architectures.
>>
>> I built a machine in 1965 that used bit serial arithmetic, coupled
>> with excess-3 decimal, and 9 significand digits floating point.
>> The logic was DTL. The use of excess-3 and 9's complement
>> arithmetic simplified display of negative values. See:
>>
>> <http://cbfalconer.home.att.net/firstpc/>
>
> Impressive!
>
> How many gates/chips were required for this?

Chips were still very expensive then. It contained (IIRC) about 1500 transistors and 4000 diodes. The logic element was built around a custom RC ceramic package that provided the base drive network and the collector load, and required +-12 Volts. So the majority of the system was built out of three components. Double sided boards cost significantly more than single sided, so the system was designed to use single sided. The diodes provided the connectivity in most cases, so jumpers were rare.

> Did the 1965 DTL series contain only NAND/NOR gates or did flip-flops
> or XOR gates exist as single packages? At least the DTL families
> available in Europe (mainly Philips) were not very sophisticated even
> in 1968.

Two NAND gates made a set/reset flip-flop. Add a capacitor and resistor and you could build a toggling counter. Basically 2 diodes, 2 transistors, 2 ceramics, and one capacitor and resistor per counter element.

> As a related issue, what would be the simplest hardware (in the number
> of gates/transistors/tubes etc.) that could support a high level
> language and thus be able to execute any program, if the execution
> time is not an issue?

The net component count was as above. The memory was magnetic core, 4 bits per word, 2k words. The core was inherently non-volatile, so programming survived power cycles.

> No doubt this would have to be a bit serial machine and the compiler
> would have to generate a large number of (micro)instructions for each
> high level language statement. Early computers had about 1000 tubes,
> but could it be doable with a smaller number of active elements?

I conceived the machine as having pico-instructions executed by nano-instructions executed by micro-instructions executed by milli-instructions executed by instructions (which last were the key presses, or the recordings of such). Decimal floating point operations executed in 50 to 70 millisecs, with a clock rate of about 100 kHz. No fans, the design tried to minimize dissipation. No ROMs, all was hard-wired.

-- Chuck F (cbfalconer at maineline dot net) Available for consulting/temporary embedded and systems. <http://cbfalconer.home.att.net>
Reply by ●November 18, 2006
Paul Keinanen wrote:
> As a related issue, what would be the simplest hardware (in the number
> of gates/transistors/tubes etc.) that could support a high level
> language and thus be able to execute any program, if the execution
> time is not an issue?
>
> No doubt this would have to be bit serial machine and the compiler
> would have to generate a large number of (micro)instructions for each
> high level language statement. Early computers had about 1000 tubes
> but could it be doable with a smaller number of active elements?
>
> There seems to be various Turing machine emulators, but are there any
> hardware implementations?

Depends a lot on what you will use as your main memory, and how much you care about its efficiency.
Reply by ●November 18, 2006
Paul Keinanen wrote:
> As a related issue, what would be the simplest hardware (in the number
> of gates/transistors/tubes etc.) that could support a high level
> language and thus be able to execute any program, if the execution
> time is not an issue?
>
> No doubt this would have to be bit serial machine...

A good example of a minimalistic implementation is (again) the LGP-30 which had a 4K 30-bit word drum and used germanium diodes for logic and about 100 mostly 12AT7-equivalent tubes. It had ALGOL, BASIC and a number of other compilers; some could be stored resident on a protected area of the drum, others were run in passes from paper tape.

Like most drum-based machines, the programmer could optimize by positioning code and data based on rotational latency and since it was a multi-address architecture one could also use addresses and opcodes as immediate constants and use other tricks for code conservation.

Regards, Michael
msg _at_ cybertheque _dot_ org