
FFT Speeds

Started by Rick C March 30, 2020
On Thursday, April 2, 2020 at 2:33:20 PM UTC-4, Tauno Voipio wrote:
>
> Booth's algorithm does handle signed operands in two's complement notation.
Your mention of two's complement as if there were other choices these days made me recall that in the Forth language group there is a discussion of standardizing on 2's complement... finally.

Seems someone looked and there are literally no remaining hardware computers running anything other than 2's complement. There is an architecture running 1's complement, but it is no longer a hardware implementation but only simulated on other machines, lol. There must be some commercial niche for software written for that machine.

So no real need to specify 2's complement these days although I suppose in an FPGA there could be anything. Some ADCs are still signed magnitude, so that conceivably could be carried through the chip.

--
Rick C.
++ Get 1,000 miles of free Supercharging
++ Tesla referral code - https://ts.la/richard11209
Rick C <gnuarm.deletethisbit@gmail.com> writes:
> Your mention of two's complement as if there were other choices
IEEE 754 floating point, relevant to this discussion, is sign-magnitude.
On Thursday, April 2, 2020 at 11:33:24 PM UTC-4, Paul Rubin wrote:
> Rick C <gnuarm.deletethisbit@gmail.com> writes:
> > Your mention of two's complement as if there were other choices
>
> IEEE 754 floating point, relevant to this discussion, is sign-magnitude.
That's not the main data format on any machines I know of. No one uses floating point for logic or flags. It was an issue in Forth because of the way people perform logic using non-flags or math using flags. A one's complement or a signed magnitude data format breaks any assumptions you make about mixing the two.

Some people complain that this practice makes programs potentially non-portable, so it looks like they are going to standardize on two's complement so these programs will be compliant and the griping will stop. No one is complaining the change will break anything.

--
Rick C.
--- Get 1,000 miles of free Supercharging
--- Tesla referral code - https://ts.la/richard11209
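To make that flag/arithmetic mixing concrete, here is a small C sketch (illustrative only; Forth itself is the language under discussion, but the thread shows no code): a Forth-style well-formed flag is a cell with all bits set, which equals -1 only in two's complement, so idioms that add or AND a flag into a calculation quietly assume that representation.

    #include <stdint.h>
    #include <stdio.h>

    /* A Forth-style "well-formed flag" is 0 (false) or all bits set (true).
       Mixing such a flag with arithmetic assumes all-ones == -1, which
       holds only in two's complement. */
    int main(void) {
        int32_t a = 5, b = 3;
        int32_t flag = -(a > b);            /* 0, or all ones (= -1 in two's complement) */
        int32_t counter = 10;
        counter += flag;                    /* "add the flag": decrements when true */
        int32_t masked = 0x1234 & flag;     /* "AND with the flag": keep or clear a value */
        printf("flag=%d counter=%d masked=0x%X\n",
               (int)flag, (int)counter, (unsigned)masked);
        return 0;
    }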
On Thu, 2 Apr 2020 12:39:55 -0700 (PDT), Rick C
<gnuarm.deletethisbit@gmail.com> wrote:

>On Thursday, April 2, 2020 at 2:33:20 PM UTC-4, Tauno Voipio wrote:
>>
>> Booth's algorithm does handle signed operands in two's complement notation.
>
>Your mention of two's complement as if there were other choices these days made me recall that in the Forth language group there is a discussion of standardizing on 2's complement... finally.
>
>Seems someone looked and there are literally no remaining hardware computers running anything other than 2's complement. There is an architecture running 1's complement, but it is no longer a hardware implementation but only simulated on other machines, lol. There must be some commercial niche for software written for that machine.
The nice feature with 1's complement is that you can negate a number by simply inverting all bits in a word. Thus inserting the inverters in one data path and both inverting and some other operation can be used in a single instruction.

The problem with 2's complement is that after inverting all bits, "1" must be added to the result, requiring an adder and the ripple carry can propagate through all bit positions. Thus complementing a number needs an ADD instruction and takes as long as an ordinary ADD. Of course these days with carry look-ahead logic, the ADD is nearly as fast as AND/OR instructions.
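A minimal C sketch of that difference (types and values chosen only for illustration): ones' complement negation is a pure bitwise invert, while the extra +1 of two's complement negation is what needs an adder and a carry chain.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint16_t x = 0x0005;

        uint16_t neg_ones = (uint16_t)~x;        /* ones' complement negate: just invert    */
        uint16_t neg_twos = (uint16_t)(~x + 1);  /* two's complement negate: invert, add 1 -
                                                    the +1 is the step that needs the adder */

        printf("x=0x%04X  ~x=0x%04X  ~x+1=0x%04X\n",
               (unsigned)x, (unsigned)neg_ones, (unsigned)neg_twos);
        /* 0x0005 -> 0xFFFA (ones' complement -5) and 0xFFFB (two's complement -5) */
        return 0;
    }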
>
>So no real need to specify 2's complement these days although I suppose in an FPGA there could be anything. Some ADCs are still signed magnitude, so that conceivably could be carried through the chip.
With linear ADCs the zero line can be freely selected. One could put the zero at 1/3 FSD and hence have a large numeric range on one side.

However, in floating point ADCs the range just around zero is of most interest, thus the sign/magnitude is more appropriate than biased unsigned or 2's complement. The floating point sign/magnitude is used e.g. in digital telephony, so that weak signals are reproduced better.
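For the telephony point, a hedged C sketch of the continuous u-Law companding curve (the function name and test values are my own; real G.711 u-Law uses a segmented 8-bit approximation rather than this exact formula): the sign is kept separately, sign/magnitude style, and the magnitude is compressed logarithmically, which is what gives weak signals more resolution.

    #include <math.h>
    #include <stdio.h>

    /* Continuous u-Law compression of a sample in [-1, 1]: the sign is
       handled separately and the magnitude is compressed, so small
       signals get proportionally more of the output range. */
    static double mulaw_compress(double x, double mu) {
        double sign = (x < 0.0) ? -1.0 : 1.0;
        return sign * log(1.0 + mu * fabs(x)) / log(1.0 + mu);
    }

    int main(void) {
        const double mu = 255.0;
        const double xs[] = { 0.001, 0.01, 0.1, 1.0 };
        for (unsigned i = 0; i < sizeof xs / sizeof xs[0]; i++)
            printf("x = %6.3f -> y = %6.3f\n", xs[i], mulaw_compress(xs[i], mu));
        return 0;
    }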
On 02/04/2020 21:39, Rick C wrote:
> On Thursday, April 2, 2020 at 2:33:20 PM UTC-4, Tauno Voipio wrote:
>>
>> Booth's algorithm does handle signed operands in two's complement
>> notation.
>
> Your mention of two's complement as if there were other choices these
> days made me recall that in the Forth language group there is a
> discussion of standardizing on 2's complement... finally.
>
> Seems someone looked and there are literally no remaining hardware
> computers running anything other than 2's complement. There is an
> architecture running 1's complement, but it is no longer a hardware
> implementation but only simulated on other machines, lol. There must
> be some commercial niche for software written for that machine.
>
> So no real need to specify 2's complement these days although I
> suppose in an FPGA there could be anything. Some ADCs are still
> signed magnitude, so that conceivably could be carried through the
> chip.
>
Two's complement dominates entirely for simple signed integers. But other formats are used in a variety of different situations. IEEE floating point, for instance, uses sign-magnitude for the mantissa, and offset for the exponent. And internally in processors, various redundant forms are used in hardware to reduce latencies for carry chains. In software, different representations are often used for extended arithmetic, as they can be much more efficient for some types of operations.

At the basic level - for types like "int" in C and the base types in Forth - two's complement has emerged as the undisputed winner.

(I think it was Knuth who said the biggest reason to stop using ones' complement is that so few people get the apostrophe in the right place!)
On Friday, April 3, 2020 at 2:34:42 AM UTC-4, upsid...@downunder.com wrote:
> On Thu, 2 Apr 2020 12:39:55 -0700 (PDT), Rick C
> <gnuarm.deletethisbit@gmail.com> wrote:
>
> >On Thursday, April 2, 2020 at 2:33:20 PM UTC-4, Tauno Voipio wrote:
> >>
> >> Booth's algorithm does handle signed operands in two's complement notation.
> >
> >Your mention of two's complement as if there were other choices these days made me recall that in the Forth language group there is a discussion of standardizing on 2's complement... finally.
> >
> >Seems someone looked and there are literally no remaining hardware computers running anything other than 2's complement. There is an architecture running 1's complement, but it is no longer a hardware implementation but only simulated on other machines, lol. There must be some commercial niche for software written for that machine.
>
> The nice feature with 1's complement is that you can negate a number
> by simply inverting all bits in a word. Thus inserting the inverters
> in one data path and both inverting and some other operation can be
> used in a single instruction.
>
> The problem with 2's complement is that after inverting all bits, "1"
> must be added to the result, requiring an adder and the ripple carry
> can propagate through all bit positions. Thus complementing a number
> needs an ADD instruction and takes as long as an ordinary ADD. Of
> course these days with carry look-ahead logic, the ADD is nearly as
> fast as AND/OR instructions.
Exactly, and the 2's complement doesn't have the various issues the 1's complement does. It only has a single value for zero. I can't recall how they deal with that. If you subtract 2 from 1 you would get an all ones word which is zero, so you have to subtract another 1 to get -1 which is all ones with the LSB zero. So it's not really simpler, just messy in different ways.

Actually the invert and add 1 is pretty easy to do in an ALU. You already have the adder, just add a conditional inverter in front and you have a subtract, oh, don't forget to add the 1 through the carry in. Easy and no different timing than the ALU without the inverter if it's done in the same LUT in an FPGA. In fact, I've seen macros for add/sub blocks.

The same sort of trick can be used to turn an adder into a mux. In the instruction fetch unit of a CPU I designed there are multiple sources for the address, plus there is a need for an incrementer. This is combined in one LUT per bit by using an enable to disable one of the inputs which only passes the other input like a mux. I think I use that for the return address.
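A rough C model of that conditional-inverter trick (a behavioural sketch only; in the FPGA fabric the XOR folds into the same LUT as the adder input, which is where the "no different timing" comes from): the subtract control both inverts the second operand and supplies the +1 through the carry-in, so one adder serves ADD and SUB.

    #include <stdint.h>
    #include <stdio.h>

    /* One adder serves both ADD and SUB: when sub = 1 the second operand
       is inverted (conditional inverter, modelled as XOR with a replicated
       control bit) and the missing "+1" of the two's complement negation
       arrives through the carry-in. */
    static uint16_t alu_addsub(uint16_t a, uint16_t b, unsigned sub) {
        uint16_t mask = sub ? 0xFFFFu : 0x0000u;   /* replicate the control bit */
        return (uint16_t)(a + (uint16_t)(b ^ mask) + (uint16_t)sub);
    }

    int main(void) {
        printf("7 + 3 = %u\n", (unsigned)alu_addsub(7, 3, 0));   /* 10 */
        printf("7 - 3 = %u\n", (unsigned)alu_addsub(7, 3, 1));   /* 4  */
        return 0;
    }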
> >So no real need to specify 2's complement these days although I suppose in an FPGA there could be anything. Some ADCs are still signed magnitude, so that conceivably could be carried through the chip.
>
> With linear ADCs the zero line can be freely selected. One could put
> the zero at 1/3 FSD and hence have a large numeric range on one side.
>
> However, in floating point ADCs the range just around zero is of most
> interest, thus the sign/magnitude is more appropriate than biased
> unsigned or 2's complement. The floating point sign/magnitude is used
> e.g. in digital telephony, so that weak signals are reproduced better.
You are talking about u-Law/A-Law compression. Yes, very familiar with that. Not really relevant to the native data format on CPUs though. I know in u-Law there is a bias which can muck things up if you aren't careful. So it's not even signed magnitude either.

--
Rick C.
--+ Get 1,000 miles of free Supercharging
--+ Tesla referral code - https://ts.la/richard11209
On Friday, April 3, 2020 at 3:12:52 AM UTC-4, David Brown wrote:
>
> (I think it was Knuth who said the biggest reason to stop using ones'
> complement is that so few people get the apostrophe in the right place!)
Sounds like an xkcd joke.

--
Rick C.
-+- Get 1,000 miles of free Supercharging
-+- Tesla referral code - https://ts.la/richard11209
On 03/04/2020 10:29, Rick C wrote:
> On Friday, April 3, 2020 at 2:34:42 AM UTC-4, upsid...@downunder.com
> wrote:
>> On Thu, 2 Apr 2020 12:39:55 -0700 (PDT), Rick C
>> <gnuarm.deletethisbit@gmail.com> wrote:
>>
>>> On Thursday, April 2, 2020 at 2:33:20 PM UTC-4, Tauno Voipio
>>> wrote:
>>>>
>>>> Booth's algorithm does handle signed operands in two's
>>>> complement notation.
>>>
>>> Your mention of two's complement as if there were other choices
>>> these days made me recall that in the Forth language group there
>>> is a discussion of standardizing on 2's complement... finally.
>>>
>>> Seems someone looked and there are literally no remaining
>>> hardware computers running anything other than 2's complement.
>>> There is an architecture running 1's complement, but it is no
>>> longer a hardware implementation but only simulated on other
>>> machines, lol. There must be some commercial niche for software
>>> written for that machine.
>>
>> The nice feature with 1's complement is that you can negate a
>> number by simply inverting all bits in a word. Thus inserting the
>> inverters in one data path and both inverting and some other
>> operation can be used in a single instruction.
>>
>> The problem with 2's complement is that after inverting all bits,
>> "1" must be added to the result, requiring an adder and the ripple
>> carry can propagate through all bit positions. Thus complementing a
>> number needs an ADD instruction and takes as long as an ordinary
>> ADD. Of course these days with carry look-ahead logic, the ADD is
>> nearly as fast as AND/OR instructions.
>
> Exactly, and the 2's complement doesn't have the various issues the
> 1's complement does. It only has a single value for zero. I can't
> recall how they deal with that. If you subtract 2 from 1 you would
> get an all ones word which is zero, so you have to subtract another 1
> to get -1 which is all ones with the LSB zero. So it's not really
> simpler, just messy in different ways.
Yes - ones' complement has several complications, and not many benefits. Sign-magnitude also has the "negative zero" issue, but has nice symmetry and is good for multiplication and division. These formats also avoid the asymmetry of having a larger negative range than positive range, and odd things like "abs(-128) == -128" (using 8-bit to keep the numbers small).
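A small C illustration of that asymmetry (hedged: the narrowing conversion back to int8_t is implementation-defined in C, though it wraps on the usual two's complement targets): negating the most negative 8-bit value lands back on itself.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int8_t x = INT8_MIN;        /* -128: the value with no positive partner */
        int8_t y = (int8_t)(-x);    /* -x is +128 as an int; converting back to
                                       int8_t wraps to -128 on typical targets  */
        printf("x = %d, -x (as int8_t) = %d\n", x, y);   /* -128, -128 */
        return 0;
    }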
>
> Actually the invert and add 1 is pretty easy to do in an ALU. You
> already have the adder, just add a conditional inverter in front and
> you have a subtract, oh, don't forget to add the 1 through the carry
> in. Easy and no different timing than the ALU without the inverter
> if it's done in the same LUT in an FPGA. In fact, I've seen macros
> for add/sub blocks.
Generally the "borrow" flag for subtraction is the inversion of the "carry" flag for addition. This means that a "subtract with borrow" instruction is implemented as an "add with carry" where the subtracted value is complemented first. That is, to do "A - B - borrow" you do "A + ~B + carry". That is (another) nice feature of two's complement that lets you re-use existing hardware without having extra carry propagation delays.
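A quick C check of that identity (a sketch; "carry" here is the inverted borrow, as on ARM-style flags, and the operand values are arbitrary):

    #include <stdint.h>
    #include <stdio.h>

    /* Subtract-with-borrow expressed as add-with-carry of the complement:
       A - B - borrow  ==  A + ~B + carry,  where carry = 1 - borrow. */
    int main(void) {
        uint8_t a = 0x40, b = 0x0F;
        for (unsigned borrow = 0; borrow <= 1; borrow++) {
            unsigned carry = 1u - borrow;
            uint8_t sub = (uint8_t)(a - b - borrow);
            uint8_t adc = (uint8_t)(a + (uint8_t)~b + carry);
            printf("borrow=%u: a-b-borrow=0x%02X  a+~b+carry=0x%02X\n",
                   borrow, (unsigned)sub, (unsigned)adc);
        }
        return 0;
    }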
On 03/04/2020 10:30, Rick C wrote:
> On Friday, April 3, 2020 at 3:12:52 AM UTC-4, David Brown wrote:
>>
>> (I think it was Knuth who said the biggest reason to stop using ones'
>> complement is that so few people get the apostrophe in the right place!)
>
> Sounds like an xkcd joke.
>
I don't have a reference to it, but it is certainly believable as the kind of thing Knuth would say. He had both the sense of humour ("I can't go to a restaurant and order food because I keep looking at the fonts on the menu") and the obsession about typographic details to have said something on those lines. And people regularly spell it incorrectly. It is, of course, equally believable that the quotation is a myth, or a misattribution.

He certainly /did/ write about the apostrophe. Donald Knuth, that doyen of computer science, says in The Art of Computer Programming, Vol. 2:

  Detail-oriented readers and copy-editors should notice the position of the apostrophe in terms like "two's complement" and "ones' complement": a two's complement number is complemented with respect to a single power of 2, whereas a ones' complement number is complemented with respect to a long sequence of 1s. Indeed, there is also a twos' complement notation, which has radix 3 and complementation with respect to (2...22) (base 3).
On 3/4/20 9:23 pm, David Brown wrote:
> Yes - ones' complement has several complications, and not many benefits.
> Sign-magnitude also has the "negative zero" issue, but has nice
> symmetry and is good for multiplication and division.
Negative zero can be a nice thing to have in floating point. Given that IEEE has both infinities, I'm surprised they don't have proper infinitesimals too, but +ve and -ve zero is a close substitute. Pointless in integer arithmetic, except to use up the asymmetrical extra code.

CH