Linear Feedback Shift Registers for the Uninitiated, Part IV: Easy Discrete Logarithms and the Silver-Pohlig-Hellman Algorithm

Jason Sachs, September 16, 2017

Last time we talked about the multiplicative inverse in finite fields, which is rather boring and mundane, and has an easy solution with Blankinship’s algorithm.

Discrete logarithms, on the other hand, are much more interesting, and this article covers only the tip of the iceberg.

What is a Discrete Logarithm, Anyway?

Regular logarithms are something that you’re probably familiar with: let’s say you have some number \( y = b^x \) and you know \( y \) and \( b \) but not \( x \). To solve for \( x \), you calculate \( x = \log_b y = \ln y / \ln b \) where \( \ln y \) is the natural logarithm function. For example, consider \( 1048576 = 16^x \). We can compute \( x = \ln 1048576 / \ln 16 \approx 13.8629 / 2.7726 = 5.0000 \), and we’re as happy as a clam.

This works because \( \ln y \) is easy to calculate numerically for any value of \( y > 0 \), and it’s easy because our representation of floating-point numbers is “friendly” to logarithms: IEEE-754 binary representation is \( (1-2s) (1+2^{-k}m) \cdot 2^{e+b} \) where the sign bit \( s \), the mantissa bits \( m \), and the exponent bits \( e \) are integers comprising the representation of the number, and for each type of representation, \( k \) is the fixed bit size of the mantissa and \( b \) is the fixed exponent bias. Ignoring negative numbers, if we take \( \log_2 \) of this, it’s just \( (e+b) + \log_2 (1+2^{-k}m) \), and the argument of the \( \log_2 \) term ranges from 1.0 to just under 2.0, so computing the log of the mantissa is just an exercise in function approximation over a constrained interval. (And \( \ln y = \log_2 y / \log_2 e \) where \( \log_2 e \approx 1.4426950408889634 \), so natural logarithms aren’t any more difficult to calculate than \( \log_2 \); you just multiply by a fixed constant \( 1/\log_2 e \).)
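
As a quick illustration, here’s a sketch of that idea in Python: math.frexp exposes the mantissa/exponent split directly, so a \( \log_2 \) routine only has to approximate a logarithm over a narrow, fixed interval.

import math

def log2_via_frexp(y):
    # math.frexp splits y into m * 2**e with 0.5 <= m < 1.0 (for y > 0)
    m, e = math.frexp(y)
    # only the mantissa needs a "real" logarithm, over a constrained interval
    return e + math.log(m) / math.log(2.0)

print log2_via_frexp(1048576.0)   # 1048576 = 2**20, so this is 20 (up to rounding)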

For groups and finite fields, there is an analogous definition of a logarithm: if \( y = b^k \) and you know the base element \( b \) and some other element \( y \), then the exponent \( k \) can be calculated as the discrete logarithm: \( k = \log_b y \). In this case \( k \) is always an integer. The problem is that this operation is either very easy to calculate or very difficult, depending on the element representation. We could represent all elements \( y \) of a cyclic group as \( y = g^{k_y} \) by storing the exponent \( k_y \) of some generator \( g \), in which case \( k = \log_b y = k_y / k_b \bmod M \) where \( b = g^{k_b} \) is also represented by its exponent \( k_b \) of the generator \( g \), and \( M \) is the order of the generator (\( g^M = 1 \)). Very easy.
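
Here’s a rough sketch of that easy case, where modinv stands in for any modular-inverse routine (Blankinship’s algorithm from last time works), and we need \( \gcd(k_b, M) = 1 \):

def exponent_log(k_y, k_b, M, modinv):
    # y = g**k_y and b = g**k_b; solving y = b**k means k*k_b = k_y (mod M),
    # so k is just k_y times the modular inverse of k_b:
    return (k_y * modinv(k_b, M)) % M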

In reality, we have other methods of representing elements of groups and finite fields, and discrete logarithms are not so easy to compute, because they’re kind of a scrambling of bits. The naive brute-force way is to try exponents one by one: see if \( y = g^0 \), then see if \( y = g^1 \), then see if \( y = g^2 \), and so on. This takes half of the group order on average, which is kind of slow, especially for large groups. If I have \( GF(2^{127}) \), this has \( 2^{127} - 1 \approx 1.7 \times 10^{38} \) elements in the multiplicative group, and even if I could check a billion of them each second, it would take \( 5.39 \times 10^{21} \) years to check them all, which is something like a trillion times longer than the age of the universe.
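
As a sketch, here’s what brute force looks like using the GF2Element class from libgf2 that we’ll see more of below (only the << and == operations are needed):

from libgf2.gf2 import GF2Element

def brute_force_log(y, poly=0x12345, order=65535):
    t = GF2Element(1, poly)
    for k in xrange(order):
        if t == y:
            return k
        t = t << 1       # multiply by x: one step of the LFSR
    raise ValueError("y is not in the multiplicative group")

# brute_force_log(GF2Element(1, 0x12345) << 3955) returns 3955, after 3955 iterations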

Is this a major obstacle for us?

Well, discrete logarithm problems are one of those things that generally don’t just pop up in front of you each day. You want to paint your house, and want to figure out how much paint you need? Then geometry gets involved and you can calculate the area to be painted times the thickness of the coat to get a rough estimate of total paint volume. You want to get a car loan and figure out the payment? This is just an example of amortization and it’s simple algebra and arithmetic. If you want to model springs and pulleys and things, or circuits, sometimes there are differential equations or matrices that show up. But I guarantee you that no matter what real-world problem you face, it won’t involve discrete logarithms unless they’re part of a human-designed system, and in that case, a discrete logarithm calculation is either going to be intentionally easy, or it’s going to be completely intractable. Let’s talk about these separately.

Intentionally-easy Discrete Logs

The easy case pops up when you want to use LFSRs as a counter. Why do this? Because the propagation delays of an n-bit LFSR are smaller than the equivalent delays of an n-bit counter, so you can clock an LFSR more quickly. Incrementing the count for a Galois representation of the LFSR is just a matter of a single shift left and a couple of independent 2-input XORs for the taps. Recall \( H_{12345} \) from the earlier articles in this series: the finite field based around \( GF(2)[x]/p(x) \), where \( p(x) \) is an appropriate 16-bit primitive polynomial with nonzero coefficients located at the 1 bits of 0x12345.
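
In software, the update step looks something like this little sketch, where the conditional XOR plays the role of the tap gates:

def lfsr_step(state, poly=0x12345, nbits=16):
    # p(x) = x^16 + x^13 + x^9 + x^8 + x^6 + x^2 + 1: the 1 bits of 0x12345
    state <<= 1
    if state >> nbits:     # the x^16 term fell off the end...
        state ^= poly      # ...so reduce mod p(x), which XORs in the taps
    return state

# starting from state = 1, k steps produce the bit pattern of x^k mod p(x)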

Anyway, it’s an easy operation to update to the next element of the multiplicative group, but it can be a very difficult operation to compute the discrete logarithm. If we’re implementing a counter, we want to make this easy, so we can recover the count value; the way to do it is to ensure that the group order is a composite number that factors into a lot of small values, and none of the prime factors are large. Mersenne primes \( M_p = 2^p-1 \) don’t fall into this category, but if we take composite values of \( n \) like \( n=16 \) or \( n=48 \) then the group order \( M_n = 2^n-1 \) tends to be a smooth number with small prime factors.

import primefac

def find_duration(t0, m):
    # format a duration of m clock ticks, each lasting t0 seconds, in readable units
    t = t0*m
    if t < 1:
        u,desc = 1e-3, "milliseconds"
    elif t < 60:
        u,desc = 1, "seconds"
    elif t < 3600:
        u,desc = 60, "minutes"
    elif t < 86400:
        u,desc = 3600, "hours"
    elif t < 86400*365:
        u,desc = 86400, "days"
    else:
        u,desc = 86400*365, "years"
    return '%.2f %s' % (t*1.0/u, desc)    

for n in [16,32,48,60,64,72,84]:
    period = 1.0*((1<<n)-1)
    print "n=%d, period=%.4g (%s),\n      factors=%s" % (n, period, find_duration(1e-9, period), 
                                             ','.join('%d^%d' % (p,e) if e > 1 
                                              else '%d' % p 
                                              for p,e in primefac.factorint((1<<n) - 1).items()))
n=16, period=6.554e+04 (0.07 milliseconds),
      factors=17,3,5,257
n=32, period=4.295e+09 (4.29 seconds),
      factors=17,3,65537,5,257
n=48, period=2.815e+14 (3.26 days),
      factors=97,3^2,5,7,241,257,13,17,673
n=60, period=1.153e+18 (36.56 years),
      factors=3^2,5^2,7,41,11,13,1321,151,331,61,31
n=64, period=1.845e+19 (584.94 years),
      factors=641,3,5,65537,257,6700417,17
n=72, period=4.722e+21 (149745.26 years),
      factors=3^3,5,7,73,241,13,109,17,19,433,38737,37
n=84, period=1.934e+25 (613356580.22 years),
      factors=3^2,5,7^2,43,13,337,113,14449,1429,5419,29,127

(By the way, Clark and Weng’s paper “Maximal and Near-Maximal Shift Register Sequences: Efficient Event Counters and Easy Discrete Logarithms” is a good reference, and I’ve included a link to it at the end of this article.)

There’s a little bit of a conflict here, however. For a counter that is large enough not to repeat quickly, we need higher bit counts; let’s say we have a clock tick frequency of 1 GHz, in which case a 32-bit counter would repeat after about 4.3 seconds, a 48-bit counter would repeat after 3.25 days, and a 64-bit counter would repeat after 585 years. Great, let’s throw a 64-bit counter at it. But then we have a prime factor of 6700417, which is a little bit of an impediment, as we’ll see later. A 60-bit counter gets us a largest prime factor of 1321 and repeats after 36 years; a 72-bit counter gets us a largest prime factor of 38737 and repeats after almost 150000 years; an 84-bit counter gets us a largest prime factor of 14449 and takes more than 600 million years to repeat. (Good luck getting an electronic circuit to work for that long, however.)

It turns out the more bits we add, the larger the prime factors get — there are two theorems that apply here, Bang’s Theorem (no, this has nothing to do with fireworks; there was some person named A. S. Bang who published an article, “Taltheoretiske Undersøgelser”, in 1886 in the Danish journal Tidsskrift for Mathematik), and its later generalization, Zsigmondy’s Theorem. What these mean in practice is that for each successive \( M_n = 2^n - 1 \) (with the single exception of \( n = 6 \)), there is a prime factor that doesn’t appear in any previous value of \( M_n \), and therefore these new prime factors can’t avoid getting larger. The examples of \( n= \) 60, 72, and 84 are “anomalous” because they are composite and manage to find unused prime factors like 1321, 38737, and 14449, but that just means those values aren’t available as new prime factors to later values of \( n \).
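
We can watch this happen with the same primefac module as above; this loop prints, for each \( n \), the prime factors of \( 2^n-1 \) that haven’t shown up for any smaller \( n \):

seen = set()
for n in xrange(2, 17):
    factors = set(primefac.factorint((1 << n) - 1))
    new_primes = factors - seen
    seen |= factors
    print "n=%2d: new prime factors %s" % (n, sorted(new_primes))
# n=6 comes up empty (63 = 3^2 * 7, and 3 and 7 were already used): the lone exception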

Anyway, let’s look at an example. For illustration purposes, let’s use the 16-bit case where \( 2^{16}-1 = 3 \cdot 5 \cdot 17 \cdot 257. \)

from libgf2.gf2 import GF2Element

e12345 = GF2Element(1, 0x12345)
y_unknown = e12345 << 3955
y_unknown
GF2Element(0b0000000001000001,0x12345)

Here we have a case where \( x^{3955} = x^6 + 1 \bmod {p(x)} \).

Suppose we only knew \( x^k = x^6 + 1 \bmod {p(x)} \), and we didn’t know \( k=3955 \). How could we figure it out?

Silver-Pohlig-Hellman to the Rescue

The algorithm to use here is the Silver-Pohlig-Hellman algorithm which essentially works as follows:

  • break down the multiplicative group of the finite field in question into subgroups, one for each prime factor
  • compute the discrete logarithm for each subgroup
  • combine them together using the Chinese Remainder Theorem.

In our example, what we need to do is create subgroups of order 3, 5, 17, and 257; the way to do this is to use the cofactors 65535/3, 65535/5, 65535/17, and 65535/257. Let’s look at 65535/5 = 13107:

for k in xrange(15):
    y = e12345 << k
    print "y=%s, y^13107=%s" % (y, y ** 13107)
y=GF2Element(0b0000000000000001,0x12345), y^13107=GF2Element(0b0000000000000001,0x12345)
y=GF2Element(0b0000000000000010,0x12345), y^13107=GF2Element(0b1111010011010111,0x12345)
y=GF2Element(0b0000000000000100,0x12345), y^13107=GF2Element(0b1001111110110101,0x12345)
y=GF2Element(0b0000000000001000,0x12345), y^13107=GF2Element(0b0011010101000100,0x12345)
y=GF2Element(0b0000000000010000,0x12345), y^13107=GF2Element(0b0101111000100111,0x12345)
y=GF2Element(0b0000000000100000,0x12345), y^13107=GF2Element(0b0000000000000001,0x12345)
y=GF2Element(0b0000000001000000,0x12345), y^13107=GF2Element(0b1111010011010111,0x12345)
y=GF2Element(0b0000000010000000,0x12345), y^13107=GF2Element(0b1001111110110101,0x12345)
y=GF2Element(0b0000000100000000,0x12345), y^13107=GF2Element(0b0011010101000100,0x12345)
y=GF2Element(0b0000001000000000,0x12345), y^13107=GF2Element(0b0101111000100111,0x12345)
y=GF2Element(0b0000010000000000,0x12345), y^13107=GF2Element(0b0000000000000001,0x12345)
y=GF2Element(0b0000100000000000,0x12345), y^13107=GF2Element(0b1111010011010111,0x12345)
y=GF2Element(0b0001000000000000,0x12345), y^13107=GF2Element(0b1001111110110101,0x12345)
y=GF2Element(0b0010000000000000,0x12345), y^13107=GF2Element(0b0011010101000100,0x12345)
y=GF2Element(0b0100000000000000,0x12345), y^13107=GF2Element(0b0101111000100111,0x12345)

The values of \( y^{13107} \) repeat after every 5 values. We can prove this is the case by expressing the exponent \( k = 5q+r \) for some integer \( q \) and \( r \) with \( 0 \le r < 5 \):

$$(x^k)^{13107} = (x^{5q+r})^{13107} = x^{65535q}x^{13107r} = 1^q x^{13107r} = x^{13107r}$$

So \( (x^k)^{13107} = (x^r)^{13107} \) where \( r = k \bmod{5} \), and therefore we only have 5 values to check to determine \( k \bmod 5 \). And we can do the same thing for each of the other factors:

results = []
for factor in [3,5,17,257]:
    cofactor = 65535/factor
    z = y_unknown ** cofactor
    b = e12345 << cofactor
    for k in xrange(factor):
        if b ** k == z:
            results.append((factor, k))
            break
results
[(3, 1), (5, 0), (17, 11), (257, 100)]

This says that \( k \bmod 3 = 1, k \bmod 5 = 0, k \bmod 17 = 11, k \bmod 257 = 100 \), and we just need to combine them to figure out \( k \bmod 65535 \). Now we need to explain the Chinese Remainder Theorem for solving simultaneous congruences.

The Chinese Remainder Theorem

Let’s say we have some number \( M = m_1 \cdot m_2 \cdot \ldots \cdot m_r \) where the \( m_j \) are pairwise relatively prime. In our example, \( m_1 = 3, m_2 = 5, m_3 = 17, m_4 = 257 \). Furthermore, we know \( a_j = x \bmod m_j \) for each of the \( m_j \).

A method of calculating \( x \bmod M \) is as follows:

  • Precalculate weights \( b_j \) (assuming we know \( M \) in advance but not the \( a_j \) values) — for each of the factors \( m_j \), calculate \( b_j \) such that \( b_jc_j \bmod m_j = 1 \), where \( c_j = M / m_j \) is the corresponding cofactor. In other words, \( b_j \) is the multiplicative inverse of the cofactor \( c_j \) for each factor \( m_j \). We learned how to do this previously with Blankinship’s algorithm. The fact that you can do this, even if \( m_j \) is a composite number, is part of the theorem: \( c_j \) is the product of all the other factors \( m_i \) for \( i \neq j \), and since each of them has no common factor with \( m_j \), we get \( \gcd(c_j, m_j) = 1 \).
def blankinship(a,b, verbose=False):
    arow = [a,1,0]
    brow = [b,0,1]
    if verbose:
        print arow
    while brow[0] != 0:
        q,r = divmod(arow[0], brow[0])
        if verbose:
            print brow, q
        if r == 0:
            break
        rrow =[r, arow[1]-q*brow[1], arow[2]-q*brow[2]]
        arow = brow
        brow = rrow
    return brow[0], brow[1], brow[2]

M = 65535
mlist = [3,5,17,257]
blist = [blankinship(M//mj, mj)[1] % mj for mj in mlist]
for bj,mj in zip(blist,mlist):
    assert (bj*(M//mj)) % mj == 1
blist
[2, 3, 4, 128]
  • Then calculate \( x = \sum a_j b_j c_j \bmod M \).
def crt(M, factors):
    """ 
    Chinese Remainder Theorem 
    
    Precalculates cofactors and weights, 
    and returns a function that can be used to calculate a value $x$ 
    given its residues modulo each factor.
    """
    n = len(factors)
    cofactors = [M//mj for mj in factors]
    weights = [blankinship(M//mj, mj)[1] % mj for mj in factors]
    def f(residues):
        x = 0
        for aj, bj, cj in zip(residues, weights, cofactors):
            x += aj*bj*cj
        return x % M
    return f

f = crt(65535, [3,5,17,257])
f([1,0,11,100])
3955

That’s really all you need to take discrete logarithms of “easy” systems. As long as the factors of the LFSR period are small, we can use a hash table for \( O(1) \) lookup to calculate the discrete logarithm modulo each of the factors. The rest of the Silver-Pohlig-Hellman algorithm — raising to a power of each of the prime cofactors, and using the Chinese Remainder Theorem to combine the discrete logarithm within each of the subgroups — is very straightforward.
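
If you want to see how the pieces fit together, here’s a sketch of the whole procedure, reusing the crt() helper defined above; it keys the lookup tables on repr() so we don’t have to worry about whether GF2Element is hashable:

def make_discrete_log(poly=0x12345, M=65535, factors=(3, 5, 17, 257)):
    e1 = GF2Element(1, poly)
    combine = crt(M, factors)
    tables = []
    for mj in factors:
        cj = M // mj
        b = e1 << cj      # a generator of the order-mj subgroup
        tables.append((cj, dict((repr(b ** k), k) for k in xrange(mj))))
    def log(y):
        # y**cj lands in the order-mj subgroup; the table lookup gives k mod mj
        residues = [table[repr(y ** cj)] for cj, table in tables]
        return combine(residues)
    return log

dlog = make_discrete_log()
print dlog(GF2Element(1, 0x12345) << 3955)    # 3955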

The libgf2 module has the libgf2.lfsr.GF2DiscreteLog class to facilitate this calculation:

import libgf2.lfsr
Hlog = libgf2.lfsr.GF2DiscreteLog(0x12345)
Hlog.log(0b1000001)
3955

Wrapup

  • We defined what a discrete logarithm is: calculation of \( k \) from knowing \( y = g^k \) and the values \( g \) and \( y \).

  • We discussed that there are easy and difficult discrete logarithms within a group, depending on the largest prime power factor of the group size.

  • We mentioned that the easy discrete logarithm has applications using an LFSR as a fast counter.

  • We described the Silver-Pohlig-Hellman algorithm and the Chinese Remainder Theorem to find \( k \) such that \( y = g^k \):

    • Factor the group order \( M \) into its prime factors \( m_j \)
    • For each prime factor \( m_j \):
      • Calculate \( y_j = y^{c_j} \) where \( c_j = M/m_j \) is the corresponding cofactor
      • Calculate the discrete logarithm \( k_j \) such that \( y_j = y^{c_j} = (g^{c_j})^{k_j} \) — there are only \( m_j \) distinct values for \( y_j \) here, so if the prime factors are small, this is fast, and it means that \( k_j = k \bmod m_j \). (Once we have all the \( k_j \), we are ready to apply the Chinese Remainder Theorem.)
      • Calculate \( b_j \) such that \( b_jc_j \bmod m_j = 1 \)
    • Calculate \( k = \sum\limits_j k_j b_j c_j \bmod M \)
  • We pointed out the libgf2.lfsr.GF2DiscreteLog class for implementing discrete logarithms using Silver-Pohlig-Hellman in \( GF(2)[x]/p(x) \).

The next article will be on the difficult discrete logarithm.

References:

  • Douglas W. Clark and Lih-Jyh Weng, “Maximal and Near-Maximal Shift Register Sequences: Efficient Event Counters and Easy Discrete Logarithms,” IEEE Transactions on Computers, 1994.

© 2017 Jason M. Sachs, all rights reserved.



Comment by texsalim, October 4, 2019

Sir,

This is a very good article.

I'm a student learning about the elliptic curve discrete log problem (ECDLP) on a small example.

My modulus is 199; I generate the secp256k1 curve and multiply by my k:

k    x    y
1    2    24
2    108    49
3    72    166
4    18    80
5    42    35
6    195    107
7    22    120
8    7    151
9    71    94
10    131    122
11    60    97
12    66    67
13    32    74
14    13    24
15    184    175
16    158    74
17    137    170

If I have only (108, 49), how do I find the k?

I don't understand what your power x^? in the LFSR is; what do my x and y at k = 2, i.e. (108, 49), mean?

How do I serialize my modulus and values according to the above table using an LFSR, and how do I get

https://www.embeddedrelated.com/showarticle/1093.p...

at the end with the kangaroo method, and generate the table of my 199 and get the k?

Please reply to me.


Comment by jms_nh, October 4, 2019

Sorry, I don't have much experience with elliptic curves or elliptic curve cryptography. There are analogues between EC elements and group elements; you just have to consider that each element in the group can be written as \( g^k \) instead of \( x^k \) as in my article, since ECs use x and y for other things and it's confusing.

You're not going to be able to break ECDSA with the kangaroo method, though. The group size is too large (which is the point!).

Suggest reading https://www.johndcook.com/blog/2018/08/14/bitcoin-elliptic-curves/

Comment by texsalim, October 7, 2019

Thank you for the reply.

Sir, I don't have a large algorithm, only my m = 199.

I only want to understand the method.

Can I generate an LFSR of secp256k1, so the sequence is made and I can find the k? Is it possible?
