Linear Feedback Shift Registers for the Uninitiated, Part V: Difficult Discrete Logarithms and Pollard's Kangaroo Method
Last time we talked about discrete logarithms, which are easy when the group in question has an order which is a smooth number, that is, a product of small prime factors. Just as a reminder, the goal here is to find \( k \) if you are given some finite multiplicative group (or a finite field, since it has a multiplicative group) with elements \( y \) and \( g \), and you know you can express \( y = g^k \) for some unknown integer \( k \). The value \( k \) is the discrete logarithm of \( y \) with respect to the generator \( g \); some references refer to \( k \) as the index of \( y \) with respect to \( g \).
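As a toy illustration of the definition (my own example, not from the article), here is the discrete logarithm in a group small enough to tabulate completely, the multiplicative group of integers mod 11:

```python
# Discrete log in a toy group: the multiplicative group mod 11 (order 10).
# g = 2 is a generator; every nonzero residue mod 11 is some power of 2.
p, g = 11, 2
index_of = {pow(g, k, p): k for k in range(p - 1)}

y = 7                 # 2^7 = 128 = 7 (mod 11)
k = index_of[y]       # the discrete logarithm (index) of y base g
# -> k == 7
```

The whole difficulty of the subject is that this tabulate-everything approach stops being possible once the group has \( 2^{127} \) elements instead of 10.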
This time we’ll talk about discrete logarithms in large groups of prime order; for the most part, the same material applies to groups whose order is a product of large primes, where you can use Silver-Pohlig-Hellman to find the discrete logarithm in the group itself, once you find the discrete logarithm in a sufficient set of its subgroups.
I’ll warn you in advance that this article doesn’t have much to do with embedded systems, and although some of the techniques are useful, you probably won’t need them for working with LFSRs. If that’s the case, just skip to the next article. Otherwise, read on!
Why Should I Care about Difficult Discrete Logarithms?
The extreme case of a discrete logarithm in \( GF(2^n) \) is when the order \( M = 2^n - 1 \) is a Mersenne prime. As I mentioned last time, if I have \( GF(2^{127}) \), this has \( 2^{127} - 1 \approx 1.7 \times 10^{38} \) elements in the multiplicative group, and even if I could check a billion of them each second, it would take \( 5.39 \times 10^{21} \) years to check them all, which is something like a trillion times longer than the age of the universe.
The application of discrete logarithms in these groups is cryptographic security. There are cryptographic protocols like Diffie-Hellman key exchange, the ElGamal encryption and signature systems, and elliptic curve cryptosystems, all of which depend on the discrete logarithm problem being intractable to solve for large values of \( n \).
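To see concretely why the hardness of discrete logs matters, here is a toy Diffie-Hellman exchange in the integers mod a small prime. The parameters are mine, not the article’s, and are wildly insecure; real systems use far larger groups. The point is that an eavesdropper sees \( g^a \) and \( g^b \), and recovering the secret exponents is exactly a discrete logarithm problem:

```python
import secrets

# Toy Diffie-Hellman key exchange mod a small prime -- illustration only.
p, g = 2087, 5        # assumed toy parameters; real primes have hundreds of bits

a = secrets.randbelow(p - 2) + 1    # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1    # Bob's secret exponent

A = pow(g, a, p)      # Alice transmits A; an eavesdropper who could take
B = pow(g, b, p)      # discrete logs would recover a from A (or b from B)

shared_alice = pow(B, a, p)         # both sides compute g^(a*b) mod p
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob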
Because these problems are supposed to be intractable, no one is actually expected to succeed in computing the discrete logarithm. So why should we care?
Well, in part, we need a metric to determine quantitatively what “intractable” means. The algorithms for computing discrete logarithms are essentially used to measure how difficult it is to attack a particular cryptosystem as a function of the group order \( M \), and this tells us how large \( M \) must be in order to keep that system reasonably safe from attack.
Just for illustration purposes, let’s look at \( GF(2^7) \), whose multiplicative group has prime order 127, and suppose we’re trying to find the discrete log of \( x^{35} = x^5 + x^4 + x + 1 \) in \( H_{83} = GF(2)[x]/(x^7+x+1) \). Here is a table showing all 127 powers \( x^k \), in order of their index \( k \); the top of each cell contains the index, and the bottom of each cell contains the polynomial in hex-encoded bit vector form, so for example the cell containing 21 over 0f stands for \( x^{21} = x^3 + x^2 + x + 1 \):
from IPython.display import display_svg

def grid(M, nx, ny, d=30, f=None, g=None):
    svgcode = []
    svgcode += ['<svg height="%d" width="%d">' % (d*(ny+0.2), d*(nx+0.2)),
    '''<style type="text/css">
    rect.gridsquare { fill:none; stroke: #777777;}
    rect.bluehighlight { fill: #aaccff; }
    rect.greenhighlight { fill: #ccffaa; }
    rect.redhighlight { fill: #ff8888; }
    rect.orangehighlight { fill: #ffcc88; }
    .hop {fill: none; stroke: black; stroke-width: 1.5; marker-end: url(#arrow); }
    .gridlabel { text-anchor: middle; dominant-baseline: middle;
                 font-family: "Open Sans", sans-serif; font-size: 83%;
                 font-weight: normal; fill: #777777; }
    </style>''',
    '''<defs>
    <marker id="arrow" viewBox="0 -5 10 10" markerWidth="4" markerHeight="4"
            refX="5" refY="0" orient="auto" markerUnits="strokeWidth">
        <path d="M0,-5 L10,0 L0,5 z" />
    </marker>
    </defs>''']
    for i in xrange(M):
        x = i % nx
        y = i // nx
        cssclass = ["gridsquare"]
        text = str(i)
        if f:
            info = f(nx, ny, x, y, i)
            cssclass += info.get('class', [])
            text = info.get('text', text)
        svgcode += ['<rect x="%d" y="%d" width="%d" height="%d" class="%s" />'
                    % (1+x*d, 1+y*d, d, d, ' '.join(cssclass))]
        textlines = [text] if isinstance(text, basestring) else text
        nlines = len(textlines)
        n1 = nlines + 1
        for k, textline in enumerate(textlines):
            svgcode.append('<text x="%d" y="%d" class="gridlabel">%s</text>'
                % (1+d*(x+0.5), 1+d*(y+0.5+((k+1.0-n1/2.0)/n1)*1.3), textline))
    def cfunc(i):
        x = i % nx
        y = i // nx
        return (1+(x+0.5)*d), (1+(y+0.5)*d)
    if g:
        svgcode += g(nx, ny, d, cfunc)
    svgcode += ['</svg>']
    display_svg('\n'.join(svgcode), raw=True)

from libgf2.gf2 import GF2Element

e83 = GF2Element(1, 0x83)
print e83 << 35

def fsquare1(nx, ny, x, y, i):
    result = {'text': ['%d' % i, '%02x' % (e83 << i).coeffs]}
    if i == 35:
        result['class'] = ['bluehighlight']
    return result

grid(128, 16, 8, f=fsquare1)
GF2Element(0b0110011,0x83)
The dumb, brute-force method would be to check each element in sequence: \( x^0, x^1, x^2, \ldots \) until we get to \( x^{35} \) and we get a match. This takes \( O(2^n) = O(M) \) operations both on average and in the worst case, and isn’t practical for large groups. Again, \( GF(2^{127}) \) would take \( 5.39 \times 10^{21} \) years even checking one element per nanosecond; I’ve got better things to do.
We could also precompute all elements \( x^k \) in the group and put them into a lookup table, with the polynomial coefficient bit vector as a key and \( k \) as the stored value. Each lookup would then take \( O(1) \) operations with a hash table, but the table would take up \( O(2^n) = O(M) \) space (and that much time to build), so it isn’t practical for large groups either.
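A quick sketch of this full-lookup-table approach for the small field \( GF(2^7) \) above, representing each element as a 7-bit integer whose bit \( i \) holds the coefficient of \( x^i \) (the helper names here are mine, not the article’s libgf2 API):

```python
# Full lookup table for discrete logs in GF(2)[x]/(x^7+x+1).
POLY, DEGREE = 0x83, 7       # 0x83 encodes x^7 + x + 1

def mulx(a):
    """Multiply a field element by x, reducing mod x^7 + x + 1."""
    a <<= 1
    if a >> DEGREE:          # a degree-7 term appeared: subtract (xor) the modulus
        a ^= POLY
    return a

LOG = {}
a = 1
for k in range((1 << DEGREE) - 1):   # all 127 powers of x
    LOG[a] = k
    a = mulx(a)

# O(1) lookup, O(M) table: LOG[0x33] == 35,
# matching x^35 = x^5 + x^4 + x + 1 in the table above
```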
Square root methods
There are, however, some fairly simple methods that can find discrete logarithms faster.
Baby-step Giant-step
The simplest one is Shanks’s baby-step giant-step algorithm. This is basically a time/space tradeoff between the dumb, brute-force method of checking each element in sequence, and the dumb, brute-force method of precomputing all group elements and putting them in a lookup table. Instead, we take the square root of the group order, round it to a convenient integer, and use that as the stride between table entries. For example, for our \( M=127 \) case, let’s take \( \sqrt{127} \) and round down to the nearest power of two, which is 8, then calculate all values \( x^{8k} \) and store them in a table (the green highlighted squares in the following diagram):
def fsquare2(nx, ny, x, y, i):
    result = {'text': ['%d' % i, '%02x' % (e83 << i).coeffs]}
    if i % 8 == 0:
        result['class'] = ['greenhighlight']
    if i == 35:
        result['class'] = ['bluehighlight']
    return result

grid(128, 16, 8, f=fsquare2)
Then we take our unknown power \( y \) (with \( y=x^{35} \) in the example) and multiply by \( x \) repeatedly, until we find a match to one of the items in the table. For the example \( y=x^{35} \), we find that \( x^5 y \) (with bit vector 0x74) matches \( x^{40} \) in the table, and we can calculate \( k = 40 - 5 = 35 \). The table in the example takes 16 elements of space, and it takes at most 8 steps to match any unknown power of \( x \). In general, the baby-step giant-step algorithm takes \( O(\sqrt{M}) \) in both time and space.
import math
import random
import time

def bstepgstep(field, table_stride=None):
    e = field.wrap(1)
    m = (1 << field.degree) - 1
    assert (e << m) == e
    if table_stride is None:
        table_stride = 1 << int(math.floor(field.degree/2))
    table = {}
    mul = e << table_stride
    y = e
    for k in xrange(0, m, table_stride):
        table[y.coeffs] = k
        y *= mul
    def f(y):
        for j in xrange(table_stride):
            k = table.get(y.coeffs)
            if k is not None:
                return (k - j) % m
            y <<= 1
    return f

def randg(n, seed=None):
    """ random generator of integers with n bits """
    r = random.Random(seed)
    while True:
        yield r.getrandbits(n)

def discrete_log_check(field, func, nsample=1000, seed=None, verbose=False):
    g = randg(field.degree, seed)
    m = (1 << field.degree) - 1
    e = field.wrap(1)
    for j in xrange(nsample):
        k = g.next() % m
        if verbose:
            print 'k=%d' % k
        y = e << k
        assert func(y) == k

# 2^31 - 1 is a Mersenne prime...
e31 = GF2Element(1, 0x80000009)
F31 = e31.field
f = bstepgstep(F31)
print "starting discrete log check...."
t0 = time.time()
discrete_log_check(F31, f, seed=123, nsample=20, verbose=True)
t1 = time.time()
print 'elapsed time: %.3f seconds' % (t1 - t0)
starting discrete log check....
k=112449971
k=574832345
k=187231959
k=1651321278
k=874545027
k=572419961
k=231284485
k=1800672586
k=1935309849
k=1878673991
k=81934378
k=814163306
k=1151485118
k=1207570246
k=713389124
k=731858619
k=1829842087
k=111457957
k=342872396
k=290033686
elapsed time: 13.953 seconds
Pollard’s Rho Algorithm
In the 1970s, John Pollard came up with two similar algorithms for attacking the difficult problems of integer factorization (in 1975) and discrete logarithms (in 1978). Both are called the rho algorithm, because they create a chain of iterated deterministic function evaluations \( u_i = f(u_{i-1}) \) that must eventually form a cycle — like the tail leading into a circle of the Greek letter \( \rho \) (rho). The cycle is inevitable because there is a finite set of possible outputs of the function, so at worst it will visit each possible output exactly once before repeating. Once the cycle is detected, the algorithm yields a potential solution. The iterated functions are deterministic but “uncorrelated” to the input data, and appear somewhat “random” in a statistical sense; this ends up yielding similar \( O(\sqrt{M}) \) time costs but only a fixed memory size, because there is no table storage required. (Worst-case performance is \( O(M) \) if we somehow choose a function that forms a cycle covering all the elements, but this is incredibly unlikely.)
Pollard’s paper suggests the following function (shown here with variable names changed for clarity) to solve \( g^k = y \) for \( k \), given some element \( y \) and a generator \( g \) with order \( p \):
$$ f(u) = \begin{cases} gu, & h(u) = 0 \\ u^2, & h(u) = 1 \\ yu, & h(u) = 2 \end{cases}$$
where \( h(u) \) is a hash function which takes the values 0, 1, and 2 with roughly equal probability.
Cycle detection can be performed with tortoise-and-hare algorithms like Floyd’s cycle-finding algorithm — we perform the iteration \( f(u) \) in two separate ways, one where \( u_i = f(u_{i-1}) \) (the tortoise) and one where \( v_i = f(f(v_{i-1})) \) (the hare, which moves twice as fast), so that a cycle is detected when we find \( u_i = v_i = u_{2i} \). Alternatively, use Brent’s cycle-finding algorithm, in which the hare iterates until there’s a match to the tortoise; but instead of the tortoise moving at half speed, whenever the hare reaches a power-of-two number of iterations, the tortoise jumps to where the hare is, and stays put until the next power of two.
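Here is a minimal standalone sketch of both cycle finders, run on an arbitrary “random-looking” integer map of my own choosing (the article’s versions, further down, are generalized to tuples and custom match functions):

```python
def floyd(f, x0):
    """Floyd's tortoise and hare: returns (value, i) with u_i == u_{2i}."""
    u, v, i = f(x0), f(f(x0)), 1
    while u != v:
        u = f(u)          # tortoise: one step per iteration
        v = f(f(v))       # hare: two steps per iteration
        i += 1
    return u, i

def brent(f, x0):
    """Brent's method: returns (value_on_cycle, cycle_length)."""
    u, v = x0, f(x0)
    power = lam = 1
    while u != v:
        if lam == power:  # teleport the tortoise at powers of two
            u, power, lam = v, power * 2, 0
        v = f(v)
        lam += 1
    return v, lam

f = lambda x: (x * x + 1) % 255   # a toy deterministic "random" map
u, i = floyd(f, 3)   # u lies on the cycle; the cycle length divides i
v, lam = brent(f, 3) # lam is the exact cycle length
```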
(Below is an illustration of both methods; for Brent’s method I’ve renamed them “snail” and “tortoise” instead of “tortoise” and “hare”.)
Pollard’s idea is that we can express the iterated element as \( u_i = g^{a_i}y^{b_i} \), and we can keep track of the exponents \( a_i \) and \( b_i \), starting at \( u_0 = 1 \) (with \( a_0 = b_0 = 0 \)), where in the three cases:
- if \( h(u) = 0 \), then \( a_i = a_{i-1} + 1 \) and \( b_i = b_{i-1} \) (we increment only \( a \) when we multiply \( u \) by \( g \))
- if \( h(u) = 2 \), then \( b_i = b_{i-1} + 1 \) and \( a_i = a_{i-1} \) (we increment only \( b \) when we multiply \( u \) by \( y \))
- if \( h(u) = 1 \), then \( a_i = 2a_{i-1} \) and \( b_i = 2b_{i-1} \) (we double both \( a \) and \( b \) when we square \( u \))
Once a cycle is detected and \( u_i = u_j \), the idea is that we then have the equation \( g^{a_i}y^{b_i} = g^{a_j}y^{b_j} \), which can be rewritten as \( y^{b_j - b_i} = g^{a_i - a_j} \). We can solve this by computing \( c \) such that \( c(b_j - b_i) \equiv 1 \pmod M \) — there’s that multiplicative inverse again! — and then raising both sides to the \( c \)th power, computing \( y^{c(b_j - b_i)} = y = g^{c(a_i - a_j)} = g^k \) where \( k = c(a_i - a_j) \bmod M \).
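That multiplicative inverse comes from the extended Euclidean algorithm; here is a quick sketch (function names are mine — the blankinship routine in the implementation below does the same job via the tableau form):

```python
def egcd(a, b):
    """Extended Euclid: returns (g, s, t) with s*a + t*b == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, s, t = egcd(b, a % b)
    return g, t, s - (a // b) * t

def mod_inverse(a, m):
    """Multiplicative inverse of a mod m; requires gcd(a, m) == 1."""
    g, s, _ = egcd(a % m, m)
    if g != 1:
        raise ValueError("gcd(a, m) != 1, no inverse exists")
    return s % m

# e.g. solving c*(bj - bi) = 1 (mod M) for M = 2**31 - 1:
M = 2**31 - 1
c = mod_inverse(12345, M)
assert c * 12345 % M == 1
```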
With that background, let’s implement Pollard’s rho method in Python:
def cycle_detect_floyd(f, x0, matchfunc=None, include_count=False):
    u = x0
    v = x0
    k = 0
    if matchfunc is None:
        matchfunc = lambda u,v: u == v
    while True:
        k += 1
        u = f(u)
        v = f(f(v))
        if matchfunc(u,v):
            result = (u, v, k, 2*k)
            return result + (3*k,) if include_count else result

def cycle_detect_brent(f, x0, matchfunc=None, include_count=False):
    u = x0
    v = x0
    ku = 0
    kv = 0
    if matchfunc is None:
        matchfunc = lambda u,v: u == v
    kpow2 = 1
    while True:
        kv += 1
        v = f(v)
        if matchfunc(u,v):
            result = u, v, ku, kv
            return result + (kv,) if include_count else result
        if kv == kpow2:
            ku = kv
            u = v
            kpow2 *= 2

def blankinship(a, b, verbose=False):
    arow = [a,1,0]
    brow = [b,0,1]
    if verbose:
        print arow
    while brow[0] != 0:
        q,r = divmod(arow[0], brow[0])
        if verbose:
            print brow, q
        if r == 0:
            break
        rrow = [r, arow[1]-q*brow[1], arow[2]-q*brow[2]]
        arow = brow
        brow = rrow
    return brow[0], brow[1], brow[2]

def log_rho(y, g, p, hashfunc, cycle_detector, verbose=False):
    """
    Executes Pollard's rho algorithm for determining discrete logarithm.
    The generator g must have order p.
    """
    assert g ** (p+1) == g
    def f(uab):
        u, a, b = uab
        h = hashfunc(u)
        if h == 0:
            u *= g
            a += 1
        elif h == 1:
            u *= u
            a *= 2
            b *= 2
        elif h == 2:
            u *= y
            b += 1
        else:
            raise ValueError("Expected hash value 0, 1, 2")
        return u, a, b
    def matchfunc(uab, vab):
        return uab[0] == vab[0]
    uab0 = (g, 1, 0)
    uab, vab, ku, kv, nf = cycle_detector(f, uab0, matchfunc, include_count=True)
    if verbose:
        if verbose is True:
            print "Found cycle in %d function evaluations" % nf
        else:
            # allow verbose to be a function
            verbose(evaluation_count=nf)
    _, ai, bi = uab
    _, aj, bj = vab
    delta_a = ai - aj
    delta_b = bj - bi
    gcd, c, _ = blankinship(delta_b, p)
    assert gcd == 1
    return (c*delta_a) % p

def simple_hash(y):
    return (y.coeffs & 255) % 3
field = F31
M = (1 << 31) - 1
print "sqrt(p) = %.3f" % math.sqrt(M)
g = field.wrap(2)  # this is x
for k in [123, 4567, 89102]:
    y = g ** k
    for detector in [cycle_detect_floyd, cycle_detect_brent]:
        print detector
        print log_rho(y, g, M, simple_hash, detector, verbose=True)
sqrt(p) = 46340.950
<function cycle_detect_floyd at 0x108bd9d70>
Found cycle in 352020 function evaluations
123
<function cycle_detect_brent at 0x103492a28>
Found cycle in 248412 function evaluations
123
<function cycle_detect_floyd at 0x108bd9d70>
Found cycle in 171060 function evaluations
4567
<function cycle_detect_brent at 0x103492a28>
Found cycle in 71238 function evaluations
4567
<function cycle_detect_floyd at 0x108bd9d70>
Found cycle in 300168 function evaluations
89102
<function cycle_detect_brent at 0x103492a28>
Found cycle in 231128 function evaluations
89102
Not bad! No table, and we’ve solved the discrete log of a 2-billion-element field in a few hundred thousand function evaluations. This can be used in conjunction with the Silver-Pohlig-Hellman algorithm we saw last time by applying it to the larger prime factors; for example, let’s look at a 64-bit LFSR, whose period \( 2^{64} - 1 \) has largest prime factor 6700417. In this case we can use brute force for the smaller prime factors and Pollard’s rho method for factors above an arbitrary threshold (rho_threshold = 64 in the example below):
import primefac

def crt(M, factors):
    """
    Chinese Remainder Theorem

    Precalculates cofactors and weights, and returns a function
    that can be used to calculate a value $x$ given its residues
    modulo each factor.
    """
    n = len(factors)
    cofactors = [M//mj for mj in factors]
    weights = [blankinship(M//mj, mj)[1] % mj for mj in factors]
    def f(residues):
        x = 0
        for aj, bj, cj in zip(residues, weights, cofactors):
            x += aj*bj*cj
        return x % M
    return f

def log_silver_pohlig_hellman(y, rho_threshold=64, verbose=False, somewhat_verbose=False):
    """
    Discrete log, using Silver-Pohlig-Hellman, and for each factor,
    uses brute force under the given threshold,
    otherwise uses Pollard's rho method
    """
    field = y.field
    e = field.wrap(1)
    n = field.degree
    p = (1 << n) - 1
    factors = primefac.factorint(p)
    factorlist = []
    if verbose:
        print "factors", factors
    residues = []
    eval_counts = []
    for j, f in enumerate(factors.keys()):
        exponent = factors[f]
        fe = f**exponent
        if verbose:
            print "Investigating factor %s" % (f if exponent == 1 else "%d^%d" % (f,exponent))
        factorlist.append(fe)
        c = p // fe
        assert c*fe == p
        g = e << c
        z = y ** c
        # generator g of an appropriate subgroup has order f^exponent
        if fe < rho_threshold:
            # brute force
            for k in xrange(fe):
                if g ** k == z:
                    residues.append(k)
                    eval_counts.append(k)
                    break
        else:
            # use Pollard rho
            def vfunc(evaluation_count):
                eval_counts.append(evaluation_count)
                if verbose:
                    print "Found cycle in %d function evaluations" % evaluation_count
            k = log_rho(z, g, fe, simple_hash, cycle_detect_brent,
                        verbose=vfunc if verbose or somewhat_verbose else False)
            residues.append(k)
        if verbose:
            print "y^%d == x^(%d*%d)" % (c, c, k)
    if verbose or somewhat_verbose:
        print "Total # of function evaluations: %d" % sum(eval_counts)
    return crt(p, factorlist)(residues)

# here we have a 64-bit primitive polynomial
e = GF2Element(1, 0x1000000000000001b)
for i, k in enumerate([1234567, 9876543210, 123456789, 31]):
    y = e << k
    verbose = i==0
    if verbose:
        print "Attempting to solve y = x^k for k=%d (y.coeffs = 0x%x)" % (k,y.coeffs)
    k_detected = log_silver_pohlig_hellman(y, verbose=verbose, somewhat_verbose=True)
    print k, k_detected
Attempting to solve y = x^k for k=1234567 (y.coeffs = 0x6d012ff078140de7)
factors {641: 1, 3: 1, 5: 1, 65537L: 1, 257: 1, 6700417L: 1, 17: 1}
Investigating factor 641
Found cycle in 116 function evaluations
y^28778071877862015 == x^(28778071877862015*1)
Investigating factor 3
y^6148914691236517205 == x^(6148914691236517205*1)
Investigating factor 5
y^3689348814741910323 == x^(3689348814741910323*2)
Investigating factor 65537
Found cycle in 791 function evaluations
y^281470681808895 == x^(281470681808895*54901)
Investigating factor 257
Found cycle in 57 function evaluations
y^71777214294589695 == x^(71777214294589695*196)
Investigating factor 6700417
Found cycle in 4081 function evaluations
y^2753074036095 == x^(2753074036095*1234567)
Investigating factor 17
y^1085102592571150095 == x^(1085102592571150095*10)
Total # of function evaluations: 5058
1234567 1234567
Total # of function evaluations: 8971
9876543210 9876543210
Total # of function evaluations: 4392
123456789 123456789
Total # of function evaluations: 6354
31 31
Again, the important thing to note is that discrete logarithm computation utilizing Silver-Pohlig-Hellman and Pollard’s rho method has an execution time of \( O(\sqrt{p_{max}}) \), where \( p_{max} \) is the largest prime factor of the group order. We were able to solve discrete logarithms in the much larger field \( GF(2^{64}) \) in only a few thousand function evaluations, compared with a few hundred thousand function evaluations for \( GF(2^{31}) \), because \( 2^{64} - 1 \) is smooth (maximum prime factor 6700417) whereas \( 2^{31} - 1 \) is prime.
Pollard’s Kangaroo Algorithm
The other method in Pollard’s 1978 paper on discrete logarithms is called the “kangaroo algorithm”. When Pollard was reading Martin Gardner’s legendary August 1977 “Mathematical Games” column on RSA encryption in Scientific American, he noticed the cover art and an article in the same issue on kangaroos, and made a number of tongue-in-cheek references to kangaroos in his algorithm, mentioning in an article published in 2000 (Kangaroos, Monopoly, and Discrete Logarithms) that
The original name [5] was the “lambda method for catching kangaroos”. This lengthy title matched that of the preceding section. We need a shorter name. Here we prefer “kangaroo method” to “lambda method”. The latter is perhaps confusing since the parallel version of the rho method is shaped like a lambda, not a rho.
The kangaroos came from a fascinating article [3] in Scientific American — “Energetic cost of locomotion, measured in terms of oxygen consumption at various speeds, was determined by placing kangaroos on a treadmill… ” (The same issue contained Martin Gardner’s exposition of the RSA method).
The kangaroo algorithm, unlike the rho method, can solve the “interval discrete logarithm problem”, where \( y = x^k \) with some known restriction \( 0 \leq k \leq L \). (It also works for any interval of length \( L \), since if we know \( k_1 \leq k \leq k_1+L \), then we can solve \( y' = x^{k'} \) with \( k' = k-k_1 \) and \( y' = yx^{-k_1} \), so that \( 0 \leq k' \leq L \).) It works roughly as follows:
- We define an iteration process \( u_{i+1} = u_i g^{f(u_i)} \) — in other words, at each step we multiply the value \( u_i \) by a known power of \( g \) that is a deterministic function of \( u_i \), in effect “hopping” ahead by this known amount.
- We start at a known power of \( g \) and run the iteration process, stopping when we reach some termination criterion after the end of the interval \( L \). The resulting value \( u_T = g^{k_T} \) is called the “trap”, and this sequence of values is called the “tame kangaroo”.
- Then we do the same thing, starting at \( y \) (our unknown power of \( g \)) until we find the same trap, or we decide to terminate. This sequence of values is called the “wild kangaroo”.
- If the wild kangaroo falls into the trap, then we succeed, because we know the exponent of the trap \( u_T = g^{k_T} \), and we know how far the wild kangaroo has “hopped”: \( u_T = g^{k_T} = g^{k_W}y \), so therefore \( y = g^k \) where \( k = k_T - k_W \).
- In the event of a failure, choose another “wild kangaroo” \( y' = yg^j \) for some value of \( j \) (the implementation below uses a fixed \( j \) called kretry), and try again.
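The steps above can be sketched end-to-end in a toy setting, the multiplicative group of integers mod a small prime, before we apply the real thing to \( GF(2^n) \) below. The parameters, hop function, and helper names here are mine, not the article’s:

```python
def kangaroo_dlog(g, y, p, L, kretry=13, max_retries=30):
    """Toy Pollard kangaroo: solve g^k == y (mod p) for k in [0, L]."""
    W = 1 << (max(L, 2).bit_length() // 2)   # hop sizes averaging ~ sqrt(L)
    hop = lambda z: 1 if z & 1 else W        # deterministic "random" hop length

    # Tame kangaroo: start at the known exponent L and set the trap.
    t, kt = pow(g, L, p), L
    for _ in range(4 * W):
        d = hop(t)
        kt += d
        t = t * pow(g, d, p) % p
    trap, ktrap = t, kt

    # Wild kangaroo: start at y (unknown exponent) and hop toward the trap.
    gj = pow(g, kretry, p)
    z0, u0 = y % p, 0
    for _ in range(max_retries):
        z, u = z0, u0
        for _ in range(4 * W + (2 * L) // ((1 + W) // 2) + 4):
            if z == trap:                    # fell in: g^(k+u) == g^ktrap
                return (ktrap - u) % (p - 1)
            d = hop(z)
            u += d
            z = z * pow(g, d, p) % p
        z0 = z0 * gj % p                     # missed the trap; retry, shifted
        u0 += kretry
    return None

# Usage: recover an exponent known to lie in [0, 400]
sol = kangaroo_dlog(2, pow(2, 123, 4093), 4093, 400)
```

Note that the answer is returned mod \( p-1 \), which is valid for any \( g \) by Fermat’s little theorem; failure (returning None) is possible but made unlikely by the retries.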
Let’s look at our degree-7 quotient ring example again, where we want to find \( k \) such that \( x^k = y = x^5 + x^4 + x + 1 \) (bit vector 0x33) in \( H_{83} = GF(2)[x]/(x^7 + x + 1) \). Suppose we know that \( 0 \leq k \leq 40 \). First we find our kangaroo trap by starting a tame kangaroo at \( x^{40} \) and hopping forward. In this simple example we’ll pick a hash function based on the least significant two bits \( y_{01} \) of the bit vector, going forward 1 step if the least significant bits are 10 or 00, 6 steps if they are 01, and 16 steps if they are 11:
$$f(y) = \begin{cases} 1 & y_{01} = 00, \cr 1 & y_{01} = 10, \cr 6 & y_{01} = 01, \cr 16 & y_{01} = 11 \end{cases}$$
For a termination criterion, let’s just decide on taking 7 iterations through the process. I picked these numbers because the average hop length is 6, which is somewhat close to \( \sqrt{40} \approx 6.32 \), so that 7 iterations will advance us by 42 spaces on average.
Let’s see what happens:
def kangaroo_hop_example(start_index, field, nmax=7, trap=None,
                         hop_on_0=1, hop_on_01=6, hop_on_11=16):
    y = field.wrap(1) << start_index
    k = start_index
    locations = [start_index]
    for _ in xrange(nmax):
        if (y.coeffs & 1) == 0:
            hop = hop_on_0
        elif (y.coeffs & 2) == 0:
            hop = hop_on_01
        else:
            hop = hop_on_11
        k += hop
        y <<= hop
        locations.append(k)
        if k == trap:
            break
    return locations

tame_kangaroo = kangaroo_hop_example(40, e83.field)
tame_kangaroo_set = set(tame_kangaroo)

def fsquare3(nx, ny, x, y, i):
    result = {'text': ['%d' % i, '%02x' % (e83 << i).coeffs]}
    if i == tame_kangaroo[-1]:
        result['class'] = ['redhighlight']
    elif i in tame_kangaroo_set:
        result['class'] = ['orangehighlight']
    return result

def arrowpath(k1, k2, d, cfunc):
    if k1 > 127 or k2 > 127:
        return ''
    x1, y1 = cfunc(k1)
    x4, y4 = cfunc(k2)
    y1 -= 0.1*d
    y4 -= 0.1*d
    if x4 > x1:
        x1 += 0.25*d
        x4 -= 0.25*d
        dx = 0
    elif x4 < x1:
        x1 -= 0.25*d
        x4 += 0.25*d
        dx = 0
    else:
        x1 -= 0.33*d
        x4 -= 0.33*d
        dx = -0.2*d
    x2 = (2*x1+x4)/3.0 + dx
    x3 = (x1+2*x4)/3.0 + dx
    if y4 == y1:
        dy = 0 if (k2-k1) == 1 else d/3
        dy2 = 0
    else:
        y4 -= 0.25*d
        y1 += 0.25*d
        dy = 0
        dy2 = -0.125*d
    y2 = y1 - dy + dy2
    y3 = y4 - dy - dy2
    return ('<path d="M %d %d C %d %d %d %d %d %d" class="hop" />'
            % (x1, y1, x2, y2, x3, y3, x4, y4))

def gsquare3(nx, ny, d, cfunc):
    result = []
    for j in xrange(len(tame_kangaroo)-1):
        k1 = tame_kangaroo[j]
        k2 = tame_kangaroo[j+1]
        result.append(arrowpath(k1, k2, d, cfunc))
    return result

grid(128, 16, 8, f=fsquare3, g=gsquare3)
There, now we’ve set our trap; let’s see which wild kangaroos in the \( [x^0, x^{40}] \) interval fall into it. We’ll allow 8 more hops (expected additional distance: 48 spaces), so that a kangaroo starting at \( x^0 \) is likely to get past \( x^{40} \) and then into the trap.
trapped = set()
paths = {}
trap = tame_kangaroo[-1]
for k in xrange(40+1):
    if k in trapped:
        continue
    wild_kangaroo = kangaroo_hop_example(k, e83.field, nmax=7+8, trap=trap)
    for k in xrange(len(wild_kangaroo) - 1):
        paths[wild_kangaroo[k]] = wild_kangaroo[k+1]
    if wild_kangaroo[-1] == trap:
        trapped.update(wild_kangaroo)

def fsquare4(nx, ny, x, y, i):
    result = {'text': ['%d' % i, '%02x' % (e83 << i).coeffs]}
    if i == trap:
        result['class'] = ['redhighlight']
    elif i in trapped:
        result['class'] = ['orangehighlight']
    return result

def gsquare4(nx, ny, d, cfunc):
    result = []
    for k1, k2 in paths.iteritems():
        result.append(arrowpath(k1, k2, d, cfunc))
    return result

grid(128, 16, 8, f=fsquare4, g=gsquare4)
Ta-da! We successfully “caught” most of the kangaroos in the \( [x^0, x^{40}] \) interval, because they eventually make it to \( x^{77} \). For the ones that didn’t reach the trap, there are two reasons why the method failed:
- We would have caught the kangaroo starting at \( x^1 \) as well, but it would have taken 16 hops and we gave up after 15.
- The kangaroos from \( x^8 \) to \( x^{13} \) and from \( x^{29} \) to \( x^{31} \) “slipped by” the trap.
In case of a failure, we have to decide what to do; one way to retry differently is to take the wild kangaroo’s starting point, move it forward or backward by a fixed amount, and try again.
There are numerous variants of the kangaroo method, including the use of so-called “distinguished points” as a stopping criterion (stop when the \( N \) least significant bits are 0), running kangaroo calculations in parallel, different ways of handling failures, and so on.
There are also tradeoffs: the more hops you allow, the more likely a wild kangaroo will fall into the “trap”, but that means more calculation work. It’s not guaranteed that the trap is reached; there are probabilistic arguments that give good heuristics for how to set some of the algorithm parameters.
Anyway, let’s use this for some real work on a degree-31 field, where \( M = 2^{31} - 1 \), which has about 2 billion values.
def kangaroo(field, L, alpha=1, kretry=83, max_retries=64):
    """
    Pollard's Kangaroo Method for discrete logarithms

    field: field to use
    L: interval length (find dlog of x^k for 0 <= k < L)
    alpha: tuning factor; larger values are slower,
           but raise probability of detection
    kretry: we move the wild kangaroo back to try to catch it again
    max_retries: after this number of retries, we give up.
    """
    e = field.wrap(1)
    l = int(math.floor(math.log(L)/math.log(2)))
    W = 1 << (l//2)
    # W is a power of two somewhat close to sqrt(L)
    trap = e << L
    ktrap = L
    def h(x):
        return 1 if (x.coeffs & 1) else W
    nhops_tame = int(alpha*W)
    expected_hop = W//2
    for j in xrange(nhops_tame):
        jumplength = h(trap)
        trap <<= jumplength
        ktrap += jumplength
    print "Trap: %s = x^%d, W=%d" % (trap, ktrap, W)
    assert (e << ktrap) == trap
    def f(y):
        nhops_wild = nhops_tame + L//expected_hop
        # The tame kangaroo starts from the end of the interval,
        # so we need to allow extra hops to allow the wild kangaroo
        # to reach the trap
        z0 = y
        total_nhops = 0
        u0 = 0
        for ntries in xrange(max_retries):
            z = z0
            u = u0
            nhops_wild += 1.0*(kretry)/expected_hop
            for j in xrange(int(nhops_wild)):
                if z == trap:
                    return ktrap - u, ntries, total_nhops
                jumplength = h(z)
                z <<= jumplength
                u += jumplength
                total_nhops += 1
            z0 <<= kretry
            u0 += kretry
    return f

r = random.Random(123)
for L in [5000, 50000, 500000]:
    print "Interval [0,%d]" % L
    f = kangaroo(F31, L, alpha=2.0)
    H = 0
    nsamples = 25
    for _ in xrange(nsamples):
        k = r.randint(0,L)
        solution = f(e31<<k)
        if not solution:
            print "k=%d: solution not found" % k
        else:
            kfound, nretries, total_nhops = solution
            H += total_nhops
            print "k=%d: found k=%d in %d retries (%d hops total)" % (k,kfound,nretries,total_nhops)
    print "avg number of hops: ", H/nsamples
    print "ratio of avg hops to sqrt(L):", (H/math.sqrt(L)/nsamples)
    print ""
Interval [0,5000]
Trap: GF2Element(0b0001000001101000000110100101000,0x80000009) = x^9538, W=64
k=261: found k=261 in 2 retries (820 hops total)
k=436: found k=436 in 1 retries (525 hops total)
k=2036: found k=2036 in 0 retries (194 hops total)
k=538: found k=538 in 3 retries (1106 hops total)
k=4506: found k=4506 in 1 retries (424 hops total)
k=190: found k=190 in 1 retries (519 hops total)
k=2681: found k=2681 in 0 retries (179 hops total)
k=1661: found k=1661 in 1 retries (497 hops total)
k=4261: found k=4261 in 5 retries (1604 hops total)
k=798: found k=798 in 0 retries (235 hops total)
k=1686: found k=1686 in 2 retries (781 hops total)
k=1669: found k=1669 in 1 retries (489 hops total)
k=1226: found k=1226 in 2 retries (800 hops total)
k=8: found k=8 in 2 retries (821 hops total)
k=2181: found k=2181 in 1 retries (481 hops total)
k=438: found k=438 in 1 retries (523 hops total)
k=2988: found k=2988 in 3 retries (1050 hops total)
k=349: found k=349 in 0 retries (243 hops total)
k=1577: found k=1577 in 0 retries (212 hops total)
k=2242: found k=2242 in 1 retries (483 hops total)
k=4527: found k=4527 in 2 retries (712 hops total)
k=463: found k=463 in 2 retries (807 hops total)
k=711: found k=711 in 2 retries (811 hops total)
k=3951: found k=3951 in 2 retries (721 hops total)
k=106: found k=106 in 0 retries (234 hops total)
avg number of hops:  610
ratio of avg hops to sqrt(L): 8.6385821244

Interval [0,50000]
Trap: GF2Element(0b0111101100001111011111111110111,0x80000009) = x^67274, W=128
k=45558: found k=45558 in 1 retries (1374 hops total)
k=28670: found k=28670 in 7 retries (7869 hops total)
k=13266: found k=13266 in 14 retries (15461 hops total)
k=41892: found k=41892 in 5 retries (5599 hops total)
k=38440: found k=38440 in 13 retries (14041 hops total)
k=17173: found k=17173 in 2 retries (2814 hops total)
k=40075: found k=40075 in 0 retries (402 hops total)
k=10343: found k=10343 in 13 retries (14452 hops total)
k=30445: found k=30445 in 2 retries (2623 hops total)
k=26354: found k=26354 in 4 retries (4771 hops total)
k=40467: found k=40467 in 8 retries (8759 hops total)
k=15763: found k=15763 in 10 retries (11215 hops total)
k=19912: found k=19912 in 1 retries (1747 hops total)
k=38464: found k=38464 in 4 retries (4599 hops total)
k=25638: found k=25638 in 0 retries (615 hops total)
k=34242: found k=34242 in 9 retries (9892 hops total)
k=31918: found k=31918 in 12 retries (13077 hops total)
k=33333: found k=33333 in 1 retries (1534 hops total)
k=44505: found k=44505 in 4 retries (4527 hops total)
k=24307: found k=24307 in 1 retries (1670 hops total)
k=33424: found k=33424 in 13 retries (14104 hops total)
k=18726: found k=18726 in 10 retries (11173 hops total)
k=1592: solution not found
k=37709: found k=37709 in 1 retries (1476 hops total)
k=9073: found k=9073 in 7 retries (8162 hops total)
avg number of hops:  6478
ratio of avg hops to sqrt(L): 28.9715700291

Interval [0,500000]
Trap: GF2Element(0b0111100101101101100110111000011,0x80000009) = x^761634, W=512
k=410095: found k=410095 in 4 retries (13234 hops total)
k=456823: found k=456823 in 5 retries (16068 hops total)
k=329037: found k=329037 in 5 retries (16615 hops total)
k=85063: found k=85063 in 1 retries (5622 hops total)
k=168458: found k=168458 in 4 retries (14190 hops total)
k=195597: found k=195597 in 4 retries (14134 hops total)
k=36645: found k=36645 in 4 retries (14676 hops total)
k=228383: found k=228383 in 3 retries (10991 hops total)
k=286584: found k=286584 in 0 retries (1864 hops total)
k=168053: found k=168053 in 0 retries (2354 hops total)
k=94531: found k=94531 in 3 retries (11472 hops total)
k=44078: found k=44078 in 1 retries (5727 hops total)
k=314159: found k=314159 in 5 retries (16674 hops total)
k=6414: found k=6414 in 5 retries (17819 hops total)
k=175957: found k=175957 in 2 retries (8235 hops total)
k=114565: found k=114565 in 4 retries (14428 hops total)
k=265999: found k=265999 in 0 retries (2009 hops total)
k=361832: found k=361832 in 4 retries (13463 hops total)
k=454577: found k=454577 in 2 retries (7088 hops total)
k=319401: found k=319401 in 0 retries (1751 hops total)
k=495897: found k=495897 in 0 retries (1039 hops total)
k=482030: found k=482030 in 0 retries (1109 hops total)
k=314995: found k=314995 in 2 retries (7678 hops total)
k=116686: found k=116686 in 5 retries (17412 hops total)
k=241906: found k=241906 in 0 retries (2085 hops total)
avg number of hops:  9509
ratio of avg hops to sqrt(L): 13.4484355871
The kangaroo method can also be used with \( L=M \) to search the entire group rather than a subinterval; Pollard’s paper “Kangaroos, Monopoly and Discrete Logarithms” states that the kangaroo method was about 1.6 times slower than the rho method, but the two seem to compute a result in about the same average time, and the kangaroo method seems conceptually simpler.
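The tame-and-wild hop structure can be sketched in a few lines of self-contained Python. This is my own minimal version, not the implementation that produced the output above: plain integers stand in for GF(2) polynomial bitmasks, and the 16-entry hop table and hop count are arbitrary choices.

```python
def gf2_mulmod(a, b, m):
    # multiply GF(2)[x] polynomials (ints as coefficient bitmasks) mod m
    n = m.bit_length() - 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> n:
            a ^= m
    return r

def gf2_powmod(base, e, m):
    # square-and-multiply exponentiation in GF(2)[x]/(m)
    r = 1
    while e:
        if e & 1:
            r = gf2_mulmod(r, base, m)
        base = gf2_mulmod(base, base, m)
        e >>= 1
    return r

def kangaroo(y, b, p, nhops=400):
    # find k in [0, b] with x^k = y in GF(2)[x]/(p), or None if the
    # wild kangaroo escapes (a caller would then retry with new hops)
    hops = [1 << i for i in range(16)]       # hop distances 1, 2, 4, ..., 32768
    dist = lambda z: hops[z & 15]            # pseudorandom hop size from low bits
    xt, dt = gf2_powmod(2, b, p), 0          # tame kangaroo starts at x^b
    for _ in range(nhops):
        s = dist(xt)
        xt = gf2_mulmod(xt, gf2_powmod(2, s, p), p)
        dt += s
    trap = xt                                # trap at the tame kangaroo's last stop
    xw, dw = y, 0                            # wild kangaroo starts at y = x^k
    while dw <= b + dt:
        if xw == trap:                       # x^(k+dw) = x^(b+dt), so k = b+dt-dw
            return b + dt - dw
        s = dist(xw)
        xw = gf2_mulmod(xw, gf2_powmod(2, s, p), p)
        dw += s
    return None

p31 = (1 << 31) | (1 << 3) | 1               # x^31 + x^3 + 1
```

When \( k=b \), the wild kangaroo starts exactly on the tame path and is trapped deterministically; for general \( k \) each pass succeeds only with some probability, which is where the retries in the output above come from.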
Index Calculus Methods
Can we do better than \( O(\sqrt{M}) \)? In some representations of finite fields, the answer is yes, using index calculus methods. These rest on the fact that you can turn products of powers into linear combinations of logarithms: if \( a^r=b^sc^t \) for polynomials \( a= x^{k_a}, b=x^{k_b}, c=x^{k_c} \) and unknown integers \( r, s, t \), then you can take logs and turn it into \( r k_a = s k_b + t k_c \bmod M \) (again, where \( M = 2^n-1 \) is the order of the multiplicative group of the finite field). Index calculus methods identify a number of equations like that, and then use linear algebra to solve for the exponents. In particular:
- we choose a “factor base” \( B \) of irreducible polynomials up to some degree \( d \)
- we determine the discrete logarithm (or “index”) of each polynomial in the factor base, and record it in a table
- if we can factor \( yg^j \) for some value of \( j \) into a product of powers of the irreducible polynomials in \( B \), then we can use the table to determine the discrete logarithm of \( y \) by linear algebra in the exponents.
This may sound kind of abstract, so let’s look at an example using the primitive polynomial \( p(x) = x^{31} + x^3 + 1 \), which has a period that is the Mersenne prime \( 2^{31} - 1 \), and suppose we want to find the index \( k \) such that \( x^k = y = x^{26} + x^{22} + x^5 + x^3 \).
First we need a factor base, and let’s just say for argument’s sake that we choose \( d=8 \), which means our factor base includes all the irreducible polynomials \( q(x) \) up to 8th degree, so that’s \( x \), \( x+1 \), \( x^2+x+1 \), \( x^3+x+1 \), \( x^3+x^2+1 \), …, and for each of those polynomials we need to determine its index \( k_q \). With suitable computation (and we’ll say a bit more about this in a minute) we can come up with a table like this:
from IPython.core.display import display, HTML

factor_base_table = (
    (2,1),(3,262143),(7,2147221535),(11,2047),
    (13,536606721),(19,8388576),(25,698328479),(31,1440767584),
    (37,1647576957),(41,1377673127),
    (0,0),(239,436453215),
    (0,0),(333,777394914),
    (0,0))

def polylatex(q):
    terms = []
    y = q.coeffs
    for j in xrange(q.field.degree):
        if y & (1 << j):
            terms.append('x^{%d}' % j if j > 1 else 'x' if j == 1 else '1')
    return '\(' + '+'.join(terms[::-1]) + '\)'

def make_html_table(ftable, field):
    rows = []
    e = field.wrap(1)
    for q, kq in ftable:
        if q == 0:
            rows.append('<td>...</td><td>...</td><td>...</td>')
        else:
            qcalc = e << kq      # x^kq should reproduce the polynomial q
            assert qcalc.coeffs == q
            rows.append('<td>%s</td><td><code>%08x</code></td><td>%d</td>'
                        % (polylatex(qcalc), q, kq))
    return HTML('<table><tr><th>\(q(x)\)</th><th><code>hex(</code>\(q(x)\)<code>)</code></th><th>\(k_q\)</th></tr>'
                + '\n'.join('<tr>%s</tr>' % row for row in rows)
                + '</table>')

display(make_html_table(factor_base_table, F31))
| \(q(x)\) | `hex(`\(q(x)\)`)` | \(k_q\) |
|---|---|---|
| \(x\) | `00000002` | 1 |
| \(x+1\) | `00000003` | 262143 |
| \(x^{2}+x+1\) | `00000007` | 2147221535 |
| \(x^{3}+x+1\) | `0000000b` | 2047 |
| \(x^{3}+x^{2}+1\) | `0000000d` | 536606721 |
| \(x^{4}+x+1\) | `00000013` | 8388576 |
| \(x^{4}+x^{3}+1\) | `00000019` | 698328479 |
| \(x^{4}+x^{3}+x^{2}+x+1\) | `0000001f` | 1440767584 |
| \(x^{5}+x^{2}+1\) | `00000025` | 1647576957 |
| \(x^{5}+x^{3}+1\) | `00000029` | 1377673127 |
| ... | ... | ... |
| \(x^{7}+x^{6}+x^{5}+x^{3}+x^{2}+x+1\) | `000000ef` | 436453215 |
| ... | ... | ... |
| \(x^{8}+x^{6}+x^{3}+x^{2}+1\) | `0000014d` | 777394914 |
| ... | ... | ... |
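As a sanity check, a couple of these table entries can be verified without the GF2Element class, treating plain integers as GF(2) polynomial bitmasks. This is a sketch; the helper names are mine, not the article's.

```python
def gf2_mulmod(a, b, m):
    # multiply GF(2)[x] polynomials (ints as coefficient bitmasks) mod m
    n = m.bit_length() - 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> n:
            a ^= m
    return r

def gf2_powmod(base, e, m):
    # square-and-multiply exponentiation in GF(2)[x]/(m)
    r = 1
    while e:
        if e & 1:
            r = gf2_mulmod(r, base, m)
        base = gf2_mulmod(base, base, m)
        e >>= 1
    return r

p31 = (1 << 31) | (1 << 3) | 1      # x^31 + x^3 + 1

# x^262143 should equal x+1 (0x3), and x^2047 should equal x^3+x+1 (0xb)
assert gf2_powmod(2, 262143, p31) == 0x3
assert gf2_powmod(2, 2047, p31) == 0xb

# products of powers become sums of logs:
# (x+1)(x^3+x+1) = x^4+x^3+x^2+1, so its log is 262143 + 2047 = 264190
assert gf2_powmod(2, 262143 + 2047, p31) == 0b11101
```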
Great. We have this nice table of small irreducible factors and their logarithms. Now what?
Well, we take \( y = x^{26} + x^{22} + x^5 + x^3 \), and we see if we can factor it into the set of irreducible factors of degree 8 or smaller. No dice. Then we multiply by \( x \) to get \( yx = x^{27} + x^{23} + x^6 + x^4 \) and try again. Not factorable into our base factors. We keep going. Eventually we get to \( j=398 \) and we find out that \( x^{398}y = (x^3 + x + 1)(x^5 + x^3 + 1)^2(x^7 + x^6 + x^5 + x^3 + x^2 + x + 1)(x^8 + x^6 + x^3 + x^2 + 1) \):
factors = {0xb: 1, 0x29: 2, 0xef: 1, 0x14d: 1}
y = F31.wrap((1 << 26) + (1 << 22) + (1 << 5) + (1 << 3))
print "y         = ", y
print "x^398 * y = ", (y << 398)
y2 = F31.wrap(1)
for factor, multiplicity in factors.iteritems():
    y2 *= F31.wrap(factor) ** multiplicity
print "product   = ", y2

y         =  GF2Element(0b0000100010000000000000000101000,0x80000009)
x^398 * y =  GF2Element(0b0011110001000111111101100100101,0x80000009)
product   =  GF2Element(0b0011110001000111111101100100101,0x80000009)
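For the curious, the whole search can be reproduced without the GF2Element class, using plain integers as GF(2) polynomial bitmasks. This is a from-scratch sketch (all helper names are mine), which regenerates the degree-8-or-less factor base by trial division rather than reading it from the table above.

```python
def gf2_divmod(a, b):
    # divide GF(2)[x] polynomial a by b, returning (quotient, remainder)
    q, db = 0, b.bit_length()
    while a.bit_length() >= db:
        shift = a.bit_length() - db
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def gf2_mulmod(a, b, m):
    # multiply GF(2)[x] polynomials mod m (both inputs already reduced)
    n = m.bit_length() - 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> n:
            a ^= m
    return r

def irreducibles_up_to(d):
    # sieve: a reducible polynomial of degree deg has an irreducible
    # factor of degree at most deg // 2
    irr = []
    for deg in range(1, d + 1):
        for f in range(1 << deg, 1 << (deg + 1)):
            if not any(gf2_divmod(f, q)[1] == 0
                       for q in irr if q.bit_length() - 1 <= deg // 2):
                irr.append(f)
    return irr

def factor_over(f, base):
    # factor f over the factor base; None if f is not smooth over it
    factors = {}
    for q in base:
        while True:
            quo, rem = gf2_divmod(f, q)
            if rem:
                break
            f = quo
            factors[q] = factors.get(q, 0) + 1
    return factors if f == 1 else None

p31 = (1 << 31) | (1 << 3) | 1                   # x^31 + x^3 + 1
y = (1 << 26) | (1 << 22) | (1 << 5) | (1 << 3)  # x^26 + x^22 + x^5 + x^3
base = irreducibles_up_to(8)
z = y
for j in range(1000):
    factors = factor_over(z, base)
    if factors is not None:
        break
    z = gf2_mulmod(z, 2, p31)                    # multiply by x and retry
# the article finds j=398, with factors {0xb: 1, 0x29: 2, 0xef: 1, 0x14d: 1}
```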
Finally, for each of the factors, we look up its logarithm, multiply by the multiplicity, sum them up, subtract \( j=398 \), and reduce modulo \( M \):
$$\begin{align}k &= -398 + \log q_{b} + 2\log q_{29} + \log q_{ef} + \log q_{14d} \cr &= -398 + 2047 + 2\times 1377673127 + 436453215 + 777394914 \cr &= 3969196032 \cr &\equiv 1821712385 \pmod {2^{31}-1} \end{align}$$
mylogtable = dict(factor_base_table)
k = -398 + sum(m*mylogtable[q] for q,m in factors.iteritems())
k = k % ((1 << 31) - 1)
print " y = ", y
print "calculated log y = %d" % k
print "double check: ", (F31.wrap(1) << k)
 y =  GF2Element(0b0000100010000000000000000101000,0x80000009)
calculated log y = 1821712385
double check:  GF2Element(0b0000100010000000000000000101000,0x80000009)
Okay, now what did we skip over to get here?
- We have to figure out what degree \( d \) to use as the maximum degree of irreducible polynomials, which determines how many elements there are in the factor base
- We have to generate a factor base of all the irreducible polynomials of degree \( d \) or less
- We have to figure out how to get their logarithms (which is itself a discrete logarithm problem!)
- We have to figure out how to factor a polynomial in the polynomial ring \( GF(2)[x] \)
- We have to determine whether a polynomial can be factored into irreducible polynomials of degree at most \( d \) (that is, whether the polynomial is \( d \)-smooth); this is easier than actually factoring it, another case where existence is easier than construction.
- What kind of time complexity does all this have? (Given a finite field representable by a quotient ring of degree \( n \), with multiplicative group order \( M=2^n-1 \), how long does it take to calculate a discrete logarithm?)
These are unfortunately beyond the scope of my explanatory capabilities, but I will give some references for further reading, along with some hand-waving arguments about how they work:
The points dealing with factoring can be handled by specialized factoring tricks like distinct-degree factorization of polynomials over finite fields.
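A smoothness test along these lines can be sketched with integers as GF(2) polynomial bitmasks. This is my own minimal version (the helper names are mine), relying on the fact that \( x^{2^i} - x \) is the product of all irreducible polynomials whose degree divides \( i \).

```python
def gf2_divmod(a, b):
    # divide GF(2)[x] polynomial a by b, returning (quotient, remainder)
    q, db = 0, b.bit_length()
    while a.bit_length() >= db:
        shift = a.bit_length() - db
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def gf2_mulmod(a, b, m):
    # multiply GF(2)[x] polynomials mod m; reduce inputs first so the
    # routine works for a modulus of any degree
    a = gf2_divmod(a, m)[1]
    b = gf2_divmod(b, m)[1]
    n = m.bit_length() - 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> n:
            a ^= m
    return r

def gf2_powmod(base, e, m):
    r = 1
    while e:
        if e & 1:
            r = gf2_mulmod(r, base, m)
        base = gf2_mulmod(base, base, m)
        e >>= 1
    return r

def gf2_gcd(a, b):
    while b:
        a, b = b, gf2_divmod(a, b)[1]
    return a

def gf2_smooth(f, d):
    # True iff every irreducible factor of f has degree <= d, using the
    # fact that x^(2^i) + x is the product of all irreducible GF(2)[x]
    # polynomials whose degree divides i (characteristic 2: minus is plus)
    for i in range(1, d + 1):
        if f == 1:
            break
        h = gf2_powmod(2, 2 ** i, f) ^ 2     # x^(2^i) + x
        h = gf2_divmod(h, f)[1]              # reduce (matters when deg f <= 1)
        while f != 1:
            g = gf2_gcd(f, h)
            if g == 1:
                break
            while gf2_divmod(f, g)[1] == 0:  # divide out, handling multiplicity
                f = gf2_divmod(f, g)[0]
    return f == 1
```

For example, `gf2_smooth(0xF, 1)` is true because \( x^3+x^2+x+1 = (x+1)^3 \), while `gf2_smooth(0x7, 1)` is false because \( x^2+x+1 \) is irreducible of degree 2.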
Generating the factor base of irreducible polynomials can be done by sieve methods or by using the distinct-degree factorization algorithm to test for smoothness. For example, start with \( x \) and \( x+1 \) as the known irreducible polynomials of degree 1. Then assume we have all the irreducible polynomials up to degree \( N \) and try to get the polynomials of the next degree \( N+1 \); this is an induction argument. For each polynomial of degree \( N+1 \) with a nonzero constant term (polynomials with zero constant term are divisible by \( x \)), we test whether it has any irreducible factors of degree up to \( \lfloor \frac{N+1}{2} \rfloor \) (divide by two and round down: degree-2 and degree-3 candidates only need testing against factors of degree \( d=1 \), degree-4 candidates against factors up to \( d=2 \), and so on), and if not, then the polynomial is irreducible.
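That induction can be sketched in a few lines (all names here are mine), and cross-checked against the standard counting formula \( \frac{1}{n}\sum_{d\mid n} \mu(n/d)\, 2^d \) for the number of irreducible polynomials of degree \( n \) over \( GF(2) \):

```python
def gf2_divmod(a, b):
    # divide GF(2)[x] polynomial a (int bitmask) by b: (quotient, remainder)
    q, db = 0, b.bit_length()
    while a.bit_length() >= db:
        shift = a.bit_length() - db
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def irreducibles_up_to(d):
    # induction/sieve: a reducible polynomial of degree deg must have an
    # irreducible factor of degree at most deg // 2
    irr = []
    for deg in range(1, d + 1):
        for f in range(1 << deg, 1 << (deg + 1)):
            if not any(gf2_divmod(f, q)[1] == 0
                       for q in irr if q.bit_length() - 1 <= deg // 2):
                irr.append(f)
    return irr

def mobius(n):
    # Moebius function, by trial division
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def count_irreducible(n):
    # necklace-counting formula: (1/n) * sum over d | n of mu(n/d) * 2^d
    return sum(mobius(n // d) * 2 ** d for d in range(1, n + 1) if n % d == 0) // n

irr = irreducibles_up_to(8)
for n in range(1, 9):
    assert sum(1 for q in irr if q.bit_length() - 1 == n) == count_irreducible(n)
```

The counts come out to 2, 1, 2, 3, 6, 9, 18, and 30 irreducible polynomials of degrees 1 through 8, a 71-element factor base for \( d=8 \).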
If we’re repeatedly working with the same finite field representation (e.g. \( GF(2)[x]/(x^{31} + x^3 + 1) \)) to find logarithms of multiple elements in that field, we can precompute the factor base and its logarithm table, and reuse it each time. Finding logarithms of the factor base is still a time-consuming effort, and the idea is to generate many relations of the form \( 1 = q_0(x)^{k_{q_0}}q_1(x)^{k_{q_1}}q_2(x)^{k_{q_2}} \ldots = \prod\limits_j q_j(x)^{k_{q_j}} \) where each \( q_j(x) \) is an irreducible polynomial in the factor base; this is equivalent to \( \sum\limits_j k_{q_j} \log q_j(x) = 0 \). Methods of generating these relations are described in the various index calculus methods; see for example Coppersmith’s algorithm in the list of references below. Then after you have this list of relations, you have a big matrix equation that can be solved for the logarithms by Gaussian elimination.
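The linear-algebra step works because \( M = 2^{31}-1 \) is prime, so every nonzero coefficient is invertible mod \( M \). Here is a minimal sketch of Gaussian elimination mod \( M \); the two "relations" below are made up for illustration (they are not the output of a real relation search), chosen so that the solution comes out to the two factor-base logarithms we already know from the table.

```python
M = (1 << 31) - 1   # order of the multiplicative group; a Mersenne prime

def solve_mod(A, b, m):
    # solve the square system A x = b over Z/m (m prime) by Gauss-Jordan
    n = len(A)
    aug = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] % m != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        inv = pow(aug[col][col], m - 2, m)   # Fermat inverse, valid since m is prime
        aug[col] = [a * inv % m for a in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] % m != 0:
                f = aug[r][col]
                aug[r] = [(a - f * p) % m for a, p in zip(aug[r], aug[col])]
    return [row[n] for row in aug]

# two made-up relations in the unknowns u = log(x+1), v = log(x^3+x+1):
#   u + 2v = 266237  and  3u + v = 788476  (mod M)
logs = solve_mod([[1, 2], [3, 1]], [266237, 788476], M)
# recovers [262143, 2047], the table's logarithms of x+1 and x^3+x+1
```

A real relation matrix is much larger and very sparse, so practical implementations use sparse solvers rather than dense elimination, but the principle is the same.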
Computational complexity is also described in the references and is quasipolynomial in \( n \), meaning the computation time increases faster than \( O(n^2) \) or \( O(n^3) \) or any other polynomial order of growth, but slower than exponential \( O(e^n) \).
I’m sorry if that explanation seems vague; I wasn’t able to construct an implementation of Coppersmith’s algorithm in Python. I’m not familiar enough with how and why it works to explain it, but some of the references seem to do a decent job in doing so.
Kangaroo Fences
Let’s say that you live near wild kangaroos and are deathly afraid of them. You probably want to surround your property with a fence that is higher than any kangaroo can jump.
Now suppose the kangaroos discover a new method of jumping higher. All of a sudden, you’re forced to build the fence higher. Here’s the interesting point: the kangaroos don’t even have to jump that high — they just have to jump once to demonstrate their ability, and then remember how to do so. Furthermore you have to extrapolate and guess their rate of technological advance, so that your fence is expected to be sufficient for some reasonable length of time.
Okay, maybe we’re not really talking about kangaroos; that was just a metaphor for cryptography and discrete logarithms. I find it interesting that the threat of a discrete logarithm algorithm is enough to raise the bit size requirement of cryptographic protocols that rely on discrete logarithms as an intractable oneway computation.
This idea is really important for embedded systems that rely on cryptography. If you choose a fixed bit size for your cryptographic implementations, you have to be ready to field-upgrade your firmware later, or be prepared to throw those devices away when people can start cracking your security.
For certain representations of finite fields, namely elliptic curves over finite fields, the index calculus methods can’t be used, and the square-root algorithms are the only known methods for computing discrete logarithms… at least that’s the state of the art for now. Keep an eye on those kangaroos, though.
Since LFSRs are based on polynomial quotient rings, and the index calculus methods apply, LFSR-based techniques really shouldn’t be used for cryptographic implementations where security relies on the difficulty of calculating discrete logarithms. Alternatively, be prepared to use very high bit counts.
I wish I had a happier message here, but cryptography is inherently a sort of arms race between the makers and the breakers.
Wrapup
- We looked at Pollard’s rho and kangaroo methods, which are square-root methods for determining the discrete logarithm in a finite field.
- Pollard’s rho method uses hash functions to discover cycles, and once a cycle is discovered it can be used to compute a discrete logarithm.
- Pollard’s kangaroo method uses iterated hash functions to discover a known power of \( x \) that, when multiplied by the unknown input, equals another known power of \( x \).
- We touched upon index calculus methods, which are faster (quasipolynomial time) and apply to certain representations of finite fields, including LFSRs.
The next article will be on the Berlekamp-Massey algorithm for determining finite field representations from LFSR outputs.
References:

- J. M. Pollard, Monte Carlo Methods for Index Computation (mod p). Mathematics of Computation, vol 32, issue 143, pp. 918-924, July 1978.
- Richard P. Brent, An improved Monte Carlo factorization algorithm. BIT Numerical Mathematics, vol 20, issue 2, pp. 176-184, June 1980.
- Don Coppersmith, Fast evaluation of logarithms in fields of characteristic two. IEEE Transactions on Information Theory, vol 30, issue 4, pp. 587-594, July 1984.
- Daniel J. Bernstein and Tanja Lange, Computing Small Discrete Logarithms Faster. Progress in Cryptology, INDOCRYPT 2012, pp. 317-338.
- Jason S. Howell, The Index Calculus Algorithm for Discrete Logarithms. Master’s thesis, Clemson University Department of Mathematical Sciences, March 1998.
- Kevin S. McCurley, The Discrete Logarithm Problem. Cryptology and Computational Number Theory (Proceedings of Symposia in Applied Mathematics), vol 42, pp. 49-74, 1990.
I also just got my copy of Lidl & Niederreiter’s Introduction to Finite Fields and their Applications (revised edition, 1994) which is one of the definitive textbooks on the subject, and has about a dozen pages on discrete logarithms, touching on both Silver-Pohlig-Hellman and the index calculus methods — no kangaroos though! It’s very heavy on the theory and light on illustrative examples. A lot of it goes over my head, but then so does much of Knuth’s The Art of Computer Programming. If you have a strong mathematical mind and don’t mind hearing about homomorphisms or multiplicative characters, this is probably the book for you.
© 2017 Jason M. Sachs, all rights reserved.