
Ten Little Algorithms, Part 7: Continued Fraction Approximation

Jason Sachs • December 24, 2023

It’s been a long time since my last Ten Little Algorithms post, and I need a break from long articles, so here is Part Seven, in which we show the use of continued fractions for the approximation of real numbers.

Every once in a while, mathematicians come up with some weird thing that is both interesting in its own right and oddly practical. Like quaternions, which can be used for expressing rotations, or generating functions, which are… well… generating functions are like the spiral slicers of the math world — you never really need to use them, but I’ve been told they can be satisfying to use sometimes.


Anyway — here is a continued fraction:

$$\begin{aligned} a &= 3 + \cfrac{1}{1+\cfrac{1}{4+\cfrac{1}{1+\cfrac{1}{5+\cfrac{1}{9}}}}} \\ &= 3 + \cfrac{1}{1+\cfrac{1}{4+\cfrac{1}{1+\cfrac{9}{46}}}} \\ &= 3 + \cfrac{1}{1+\cfrac{1}{4+\cfrac{46}{55}}} \\ &= 3 + \cfrac{1}{1+\cfrac{55}{266}} \\ &= 3 + \cfrac{266}{321} \\ &= \cfrac{1229}{321} \\ \end{aligned}$$

It’s a stack of alternating additions and divisions, with the convention that all the numerators are 1 (this is called a simple continued fraction). Kind of gimmicky, don’t you think?

There’s a shorthand notation for continued fractions. For example, we could write \( a = [3; 1, 4, 1, 5, 9] = 1229/321 \) — that is, the coefficients are written as a list, with a semicolon delimiting the first coefficient (the integer part) and commas delimiting the rest.
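If you want to check that arithmetic, Python’s fractions module makes quick work of it. Here’s a throwaway sketch that evaluates the stack from the inside out:

from fractions import Fraction

# evaluate [3; 1, 4, 1, 5, 9] from the innermost coefficient outward
coeffs = [3, 1, 4, 1, 5, 9]
value = Fraction(coeffs[-1])
for c in reversed(coeffs[:-1]):
    value = c + 1/value
print(value)   # 1229/321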

Why would you ever use such a thing? Well, because there is some theory behind it. You can even construct infinite continued fractions, such as

$$\begin{aligned} \phi &= [1;1,1,1,1,\ldots] \\ &=1+\cfrac{1}{1+\cfrac{1}{1+\cfrac{1}{\ldots}}} \\ &= \frac{1+\sqrt{5}}{2} \\ &= 1.61803398874\ldots. \end{aligned}$$

Wow, look at that; we got an irrational number from an infinitely repeating continued fraction. And this is where the practicality of continued fractions comes in: any real number can be expressed as a continued fraction (finite for rationals, infinite for irrationals), and approximated by a truncated version of that continued fraction.

For example, we could approximate \( \phi \) by \( \phi_6 = [1;1,1,1,1,1] = 13/8 \):

def cfrac(coeffs):
    # evaluate the continued fraction from the innermost coefficient outward,
    # accumulating the numerator h and denominator k
    h = 1
    k = 0
    for c in reversed(coeffs):
        h,k = c*h+k, h
    return h,k

def show_cfrac(coeffs):
    h,k = cfrac(coeffs)
    print('%d/%d = %.8f' % (h,k,h/k))

show_cfrac([1,1,1,1,1,1])
13/8 = 1.62500000

Or \( \phi \approx \phi_{10} = [1;1,1,1,1,1,1,1,1,1] = 89/55 \):

show_cfrac([1]*10)
89/55 = 1.61818182

If you are mathematically inclined, perhaps you will recognize the numerator and denominator as successive elements of the Fibonacci sequence, whose ratio of successive terms converges to \( \phi \).
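Just to watch that convergence happen, here’s a quick throwaway loop printing successive Fibonacci ratios:

# successive Fibonacci ratios closing in on phi = 1.61803...
a, b = 1, 1
for _ in range(8):
    a, b = b, a + b
    print('%d/%d = %.8f' % (b, a, b/a))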

This lets us go from the representation of a continued fraction as a list of coefficients, to a rational number \( h/k \). What about the other way?

One clue here is from Blankinship’s algorithm for computing the greatest common divisor, which I mentioned in one of the articles on linear feedback shift registers.

def blankinship(a,b, verbose=False):
    """
    Extended Euclidean algorithm: returns (g, u, v) such that
    u*a + v*b == g == gcd(a,b). Each row holds [value, u, v],
    maintaining the invariant u*a + v*b == value.
    """
    arow = [a,1,0]
    brow = [b,0,1]
    if verbose:
        print(arow)
    while brow[0] != 0:
        q,r = divmod(arow[0], brow[0])
        if verbose:
            print(brow, q)
        if r == 0:
            break
        rrow = [r, arow[1]-q*brow[1], arow[2]-q*brow[2]]
        arow = brow
        brow = rrow
    return brow[0], brow[1], brow[2]
blankinship(89,55,verbose=True)
[89, 1, 0]
[55, 0, 1] 1
[34, 1, -1] 1
[21, -1, 2] 1
[13, 2, -3] 1
[8, -3, 5] 1
[5, 5, -8] 1
[3, -8, 13] 1
[2, 13, -21] 1
[1, -21, 34] 2
(1, -21, 34)
blankinship(1229,321,verbose=True)
[1229, 1, 0]
[321, 0, 1] 3
[266, 1, -3] 1
[55, -1, 4] 4
[46, 5, -19] 1
[9, -6, 23] 5
[1, 35, -134] 9
(1, 35, -134)
cfrac([1,1,1,1,1,1,1,1,2])
(89, 55)

It turns out that the quotient values \( q \) in Blankinship’s algorithm are exactly the coefficients of the continued fraction representation of the fraction \( a/b \). But we got a nine-term continued fraction ending in 2 rather than ten 1’s! That’s okay; the two are equal, since \( 2 = 1+\frac{1}{1} \).
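In fact, if all we want are the coefficients, Blankinship’s algorithm strips down to repeated divmod. Here’s a minimal sketch (cfrac_coeffs is my name for it, not something defined earlier):

def cfrac_coeffs(a, b):
    # continued fraction coefficients of a/b, via the Euclidean algorithm
    coeffs = []
    while b != 0:
        q, r = divmod(a, b)
        coeffs.append(q)
        a, b = b, r
    return coeffs

cfrac_coeffs(1229, 321)   # [3, 1, 4, 1, 5, 9]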

Okay, well, what if we want to approximate some arbitrary real number \( x \)? We can modify Blankinship’s algorithm a bit to get the coefficients \( q_n \):

$$\begin{aligned} x_{-2},\, h_{-2},\, k_{-2} &= 1, 0, 1 \\ x_{-1},\, h_{-1},\, k_{-1} &= x, 1, 0 \\ q_n &= \lfloor x_{n-1} \rfloor \\ r_n &= x_{n-1} - q_n \\ x_n &= \frac{1}{r_n} = \frac{1}{x_{n-1} - q_n} \\ h_n &= q_nh_{n-1} + h_{n-2} \\ k_n &= q_nk_{n-1} + k_{n-2} \\ \end{aligned}$$

and then continue until we have accumulated enough coefficients, or \( r_n \) is close to zero.

The half-brackets \( \lfloor \cdot \rfloor \) denote the floor function (round down to the nearest integer); what we’re basically doing at each step is:

  • find the greatest integer \( q \le x \) (for positive numbers this is the integer part; for negative numbers, round toward \( -\infty \), so for example \( \lfloor -1.5 \rfloor = -2 \)) and save \( q \) in a list of coefficients
  • get the remainder \( r = x - q \) (for positive numbers, this is the fractional part)
  • update \( x \) with the reciprocal of \( r \)
  • do some other math (see equations above) to update \( h \) and \( k \) for the numerator and denominator

This gives both the continued fraction coefficients, and the numerator and denominator.

Let’s see today’s algorithm in action:

import math

def cfrac_approx(x,kmax=None,nmax=None,include_next=False,verbose=False):
    """
    Compute continued fraction coefficients of x, along with the convergent h/k.
    Stops after nmax coefficients, or when the denominator would exceed kmax
    (include_next=True permits one convergent whose denominator exceeds kmax).
    Returns (coeffs, h, k).
    """
    if kmax is None and nmax is None:
        raise ValueError('Need to specify either max denominator kmax or max number of coefficients nmax')
    if kmax is None:
        kmax = float('inf')
    if nmax is None:
        nmax = float('inf')
    arow = [1,0,1]    # x, h, k from two steps back
    brow = [x,1,0]    # x, h, k from one step back
    coeffs = []
    n = 0
    while n < nmax:
        q = math.floor(x)
        r = x - q
        n += 1
        hnext = arow[1]+q*brow[1]
        knext = arow[2]+q*brow[2]
        if (knext > kmax and not include_next) or brow[2] > kmax:
            break
        if r != 0:
            x = 1/r
        arow = brow
        brow = [x, hnext, knext]
        coeffs.append(q)
        if verbose:
            print(brow, q)
        if r == 0:
            # x was exactly an integer at this step; the expansion terminates
            break
    if len(coeffs) > 1 and coeffs[-1] == 1:
        # canonicalize a trailing 1: [..., b, 1] == [..., b+1]
        coeffs = coeffs[:-1]
        coeffs[-1] += 1
    return coeffs, brow[1], brow[2]

e = math.exp(1)
coeffs, h, k = cfrac_approx(e,nmax=12,verbose=True)
coeffs, h, k
[1.3922111911773332, 2, 1] 2
[2.549646778303843, 3, 1] 1
[1.8193502435980868, 8, 3] 2
[1.2204792856454274, 11, 4] 1
[4.535573476086956, 19, 7] 1
[1.867157438987213, 87, 32] 4
[1.1531931285372343, 106, 39] 1
[6.527707930169631, 193, 71] 1
[1.8949876301433488, 1264, 465] 6
[1.1173338785026912, 1457, 536] 1
[8.52268767350995, 2721, 1001] 1
[1.9131884118957796, 23225, 8544] 8
([2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8], 23225, 8544)
h/k, cfrac(coeffs)
(2.7182818352059925, (23225, 8544))

It turns out that the rational approximations \( h_n/k_n \) from this method, known as the convergents, are the best approximations to \( x \) for a given denominator. There are ways of defining “best” here; one is the following theorem:

Let \( h/k \) be a convergent for the nonnegative real \( x \). If \( h'/k' \) is a closer rational approximation, then \( k' > k \).

In other words, if you want a better approximation than the convergent \( h_n/k_n \), you have to increase the denominator.

(For consistent notation, I’ve used \( h/k \) and \( x \) rather than the \( p/q \) and \( a \) used in these original notes from Stanford.)

The error bound of convergents is quadratic in the denominator:

$$\frac{1}{2k_n k_{n+1}} \lt \frac{1}{k_n(k_n + k_{n+1})} \le \left| x - \frac{h_n}{k_n} \right| \le \frac{1}{k_n k_{n+1}} \lt \frac{1}{{k_n}^2}$$

If we want a metric of how good a particular approximation is, we can state it as \( \left| x - \frac{h_n}{k_n} \right| = \frac{\alpha}{{k_n}^2} \) for some \( \alpha < 1 \).

class Convergent(object):
    def __init__(self, coeffs, x, fmt='%.6f'):
        self.coeffs = coeffs[:]
        self.x = x
        self.fmt = fmt
    def describe_error(self):
        h,k = cfrac(self.coeffs)
        # alpha = k^2 (x - h/k), so that h/k = x - alpha/k^2
        alpha = k*k*self.x - k*h
        return '  = %s %s %s/k^2' % (
            self.fmt % self.x,
            '+' if alpha <= 0 else '-',
            self.fmt % abs(alpha)
        )
    def __str__(self):
        h,k = cfrac(self.coeffs)
        def helper(coeffs):
            # render the [q0; q1, q2, ...] shorthand
            yield '%d' % coeffs[0]
            for i,c in enumerate(coeffs[1:]):
                yield '; ' if i == 0 else ', '
                yield '%d' % c
        return '%d/%d = [%s]' % (h,k,''.join(helper(self.coeffs)))
    
def print_convergents(coeffs,x):
    for i in range(1,len(coeffs)+1):
        c = Convergent(coeffs[:i], x, fmt='%.9f')
        print(c)
        print(c.describe_error())
        
x = math.exp(1)
coeffs, h, k = cfrac_approx(x,nmax=12)
print_convergents(coeffs, x)
2/1 = [2]
  = 2.718281828 - 0.718281828/k^2
3/1 = [2; 1]
  = 2.718281828 + 0.281718172/k^2
8/3 = [2; 1, 2]
  = 2.718281828 - 0.464536456/k^2
11/4 = [2; 1, 2, 1]
  = 2.718281828 + 0.507490745/k^2
19/7 = [2; 1, 2, 1, 1]
  = 2.718281828 - 0.195809594/k^2
87/32 = [2; 1, 2, 1, 1, 4]
  = 2.718281828 + 0.479407658/k^2
106/39 = [2; 1, 2, 1, 1, 4, 1]
  = 2.718281828 - 0.506661086/k^2
193/71 = [2; 1, 2, 1, 1, 4, 1, 1]
  = 2.718281828 + 0.141302738/k^2
1264/465 = [2; 1, 2, 1, 1, 4, 1, 1, 6]
  = 2.718281828 - 0.488358557/k^2
1457/536 = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1]
  = 2.718281828 + 0.503811030/k^2
2721/1001 = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1]
  = 2.718281828 - 0.110397791/k^2
23225/8544 = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8]
  = 2.718281828 + 0.492526740/k^2

There are many other properties of continued fractions, which you can find in most textbooks on number theory — I like Ivan Niven’s An Introduction to the Theory of Numbers — but I’ll stop there, and get into a few interesting applications.

Using a PWM output as a DAC

Way back in 1998 or 1999, when I was working at a medical electronics company, I had a test fixture using a Microchip PIC16C73A, and I needed an adjustable analog output. I had a PC connected to the PIC16C73A via its UART and a MAX232, and the PIC would respond to UART commands from the PC.

I don’t remember the exact purpose of this analog output, other than that it was a pulse-width modulation (PWM) output, and I probably put this signal through a two-pole RC filter and an op-amp buffer.

But I do remember how I used the PWM.

Remember that pulse-width modulation generally is implemented as follows:

  • a “period register” containing some value \( P \)
  • a “duty cycle register” containing some value \( D \)
  • a digital counter that increments each time a specified clock signal is received, from 0 to \( P \)
  • a digital output
  • when the counter value matches \( P \), the counter resets to zero at the next clock, and the digital output is set
  • when the counter value matches \( D \), the digital output is cleared (overriding the “set” point if \( D = 0 \))

This produces a pulse waveform with average voltage \( \frac{D}{P+1}\cdot V_{DD} \) where \( V_{DD} \) is the supply voltage. (We’re neglecting voltage drops in the output transistors here.)
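If you want to convince yourself of that formula, here’s a little sketch that simulates the counter-and-compare logic over one full period and counts the clocks spent high:

def pwm_average_fraction(P, D):
    # counter sweeps 0..P; the output is set at rollover (counter == 0)
    # and cleared when the counter matches D, so it is high while counter < D
    high = sum(1 for counter in range(P + 1) if counter < D)
    return high / (P + 1)

pwm_average_fraction(255, 64)   # 0.25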

The key aspect here is that \( P \) can be an arbitrary unsigned integer of some bit width, as long as timing constraints are met. So you can basically create an average voltage that is the product of the supply voltage \( V_{DD} \) and some rational number \( h/k \) between 0 and 1, where \( h=D \) and \( k=P+1 \). (The PIC16C73A has a PWM generator with an 8-bit period register PR2 and a 10-bit duty cycle register made up of the 8-bit register CCPR1L and two bits of another register, with some chicanery to get a 10-bit PWM even though the period register is 8 bits wide — but for the purposes of this discussion, let’s just assume we’re using \( D \) and \( P \) which are both between 0 and 255.)

What I ended up doing was (roughly) taking a desired real number between 0 and 1, and using the PC to calculate the best approximation \( h/k \) where \( k \le 256 \), using continued fractions.
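In code, that boils down to something like the following sketch, reusing cfrac_approx from earlier (pwm_settings is a hypothetical helper name; it returns the last convergent within range, and we’ll deal with a subtlety about non-convergent fractions shortly):

def pwm_settings(x, kmax=256):
    # best convergent h/k to x with k <= kmax; then D = h and P = k - 1,
    # so that D/(P+1) == h/k
    coeffs, h, k = cfrac_approx(x, kmax=kmax)
    return h, k - 1   # (D, P)

pwm_settings(1/math.pi)   # (7, 21): the convergent 7/22 ~= 0.3182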

The appropriate theoretical bound here from \( \left| x - \frac{h_n}{k_n} \right| \le \frac{1}{k_n k_{n+1}} \lt \frac{1}{{k_n}^2} \) is

$$\left| x - \frac{h_n}{k_n} \right| \le \frac{1}{k_n k_{n+1}}.$$

That is, if \( k_n \le 256 < k_{n+1} \), then \( \left| x - \frac{h_n}{k_n} \right| \le \frac{1}{257k_n} \), so we get an accuracy “multiplier” of at least 257, no matter what the denominator \( k_n \) actually is.
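Here’s a quick empirical spot-check of that multiplier: scaling the error by \( 257k_n \) should always give a value of at most 1.

import random

random.seed(12345)
for _ in range(5):
    x = random.random()
    coeffs, h, k = cfrac_approx(x, kmax=256)
    print('%.6f ~= %3d/%3d   error*257*k = %.4f' % (x, h, k, abs(x - h/k)*257*k))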

This theoretical bound may be a bit conservative, though; I’m not really sure how to construct a definite proof of worst-case accuracy for \( h/k \) where \( k \le k_{\max} \). With \( k_{\max} = 256 \) we can just construct all the rational numbers exhaustively and see where the largest gap is:

import numpy as np

klist = np.arange(1,257)
h,k = np.meshgrid(klist,klist)
h = h.reshape(256*256)
k = k.reshape(256*256)
# throw out h/k > 1
# throw out h/k where h and k have common factors
accept = (h <= k) & (np.gcd(h,k) == 1)
h = h[accept]
k = k[accept]
iorder = np.argsort(h/k)
imaxgap = np.diff((h/k)[iorder]).argmax()
h1,k1 = h[iorder[imaxgap]], k[iorder[imaxgap]]
h2,k2 = h[iorder[imaxgap+1]], k[iorder[imaxgap+1]]
((h1,k1),(h2,k2))
((255, 256), (1, 1))

Oh. Right. There’s a \( 1/k_{\max} \) gap at the ends. What if we exclude that end range? Where is the 2nd-largest gap?

accept = (h < k) & (h > 0)  # exclude the endpoint 1/1 by re-filtering the already-accepted h, k
h = h[accept]
k = k[accept]
iorder = np.argsort(h/k)
imaxgap = np.diff((h/k)[iorder]).argmax()
h1,k1 = h[iorder[imaxgap]], k[iorder[imaxgap]]
h2,k2 = h[iorder[imaxgap+1]], k[iorder[imaxgap+1]]
((h1,k1),(h2,k2))
((127, 255), (1, 2))

OK, the next largest gaps are between \( 1/2 \) and either \( 127/255 \) or \( 128/255 \), so the values that are hardest to approximate are in the middle of those gaps, namely \( x = 509/1020 \) and \( x=511/1020 \), which have errors of \( 1/1020. \)

for p in [509,511]:
    print("%d/1020" % p)
    x = p/1020
    coeffs, h, k = cfrac_approx(x, kmax=1020)
    print_convergents(coeffs, x)
509/1020
0/1 = [0]
  = 0.499019608 - 0.499019608/k^2
1/2 = [0; 2]
  = 0.499019608 + 0.003921569/k^2
254/509 = [0; 2, 254]
  = 0.499019608 - 0.499019608/k^2
509/1020 = [0; 2, 254, 2]
  = 0.499019608 + 0.000000000/k^2
511/1020
0/1 = [0]
  = 0.500980392 - 0.500980392/k^2
1/1 = [0; 1]
  = 0.500980392 + 0.499019608/k^2
1/2 = [0; 1, 1]
  = 0.500980392 - 0.003921569/k^2
255/509 = [0; 1, 1, 254]
  = 0.500980392 + 0.499019608/k^2
511/1020 = [0; 1, 1, 254, 2]
  = 0.500980392 + 0.000000000/k^2

There is one problem, though, which is that if you stop between two convergents \( h_1/k_1 \) and \( h_2/k_2 \) because the denominator \( k_2 \) is too large, there could still be another fraction \( h'/k' \) where \( k_1 < k' < k_2 \) and \( h'/k' \) is closer to the desired number \( x \) than \( h_1/k_1 \).

Take 12701/25500, for example, which is really close to 127/255, but the basic continued fraction algorithm skips 127/255 in favor of 129/259:

p0,q0=(12701,25500)

coeffs, h, k = cfrac_approx(p0/q0, kmax=256, include_next=True)
print_convergents(coeffs, p0/q0)
print("")
for p,q in [(1,2),(127,255),(h,k)]:
    print("%d/%d = %d/%d %+.7f" % (p0,q0,p,q,p0/q0-p/q))
0/1 = [0]
  = 0.498078431 - 0.498078431/k^2
1/2 = [0; 2]
  = 0.498078431 + 0.007686275/k^2
129/259 = [0; 2, 129]
  = 0.498078431 - 0.599254902/k^2

12701/25500 = 1/2 -0.0019216
12701/25500 = 127/255 +0.0000392
12701/25500 = 129/259 +0.0000089

Yes, the fraction 129/259 is even closer, but it has a denominator that is too large.

We can find the closest rational number with bounded denominator, even if it is not one of the convergents, by reducing the last coefficient of the continued fraction until the denominator is within bounds. (Fractions formed this way are sometimes called semiconvergents.) The result won’t meet the \( 1/k^2 \) error bound, but it’s still better than if we just stop at the previous convergent.

def cfrac_forward(coeffs, predicate=None):
    """
    Like cfrac(), but evaluated front-to-back, without the coefficient reversal.
    We also allow a termination criterion: the predicate is called at each step,
    and as soon as it returns anything other than True, we stop and return that value.
    """
    hprev, kprev = 0,1
    h,k = 1,0
    for i,c in enumerate(coeffs):
        hnext = h*c + hprev
        knext = k*c + kprev
        if predicate is not None:
            retval = predicate(i,h,k,hnext,knext)
            if retval is not True:
                return retval
        hprev = h
        kprev = k
        h = hnext
        k = knext
    return h,k

def cfrac_bound_denominator(coeffs, x, kmax):
    """
    Find the closest fraction to x with bounded denominator <= kmax, by
    reducing (or deleting) the last continued fraction coefficient that
    pushes the denominator over the limit.
    """
    trunccoeffs = coeffs[:]
    def predicate(i,h,k,hnext,knext):
        if knext <= kmax:
            return True

        # Uh oh. knext is too large.
        # Lower the last coefficient by just enough to get back within bounds
        backtrack = -((kmax - knext) // k)   # = ceil((knext-kmax)/k)
        c = coeffs[i]
        trunccoeffs[:] = coeffs[:i+1]
        if c == backtrack:
            # the lowered coefficient would be zero; delete it and
            # fall back to the previous convergent h/k
            del trunccoeffs[i]
            return h,k
        hcorr = hnext - backtrack*h
        kcorr = knext - backtrack*k
        trunccoeffs[i] = c-backtrack
        # the lowered-coefficient fraction (a semiconvergent) is not always
        # closer to x than the previous convergent; keep whichever is closer
        if abs(h/k - x) < abs(hcorr/kcorr - x):
            del trunccoeffs[i]
            return h,k
        return hcorr, kcorr
    h,k = cfrac_forward(coeffs, predicate)
    if len(trunccoeffs) > 1 and trunccoeffs[-1] == 1:
        # canonicalize a trailing 1: [..., b, 1] == [..., b+1]
        del trunccoeffs[-1]
        trunccoeffs[-1] += 1
    return trunccoeffs, h, k

x = 12701/25500
coeffs, h, k = cfrac_approx(x, kmax=256, include_next=True)
print(coeffs,h,k,x-h/k)
coeffs, h, k = cfrac_bound_denominator(coeffs, x, 256)
print(coeffs,h,k,x-h/k)
[0, 2, 129] 129 259 8.93330305096196e-06
[0, 2, 127] 127 255 3.9215686274518546e-05

Can we say anything else about the full series of approximations between 0 and 1, with denominator less than or equal to a maximum?

The sequence of such rational numbers is called a Farey sequence, and it is possible to generate its elements, in order, using a particularly strange recurrence, mentioned as an exercise for the reader to prove in Graham, Knuth, and Patashnik’s Concrete Mathematics (2nd edition, chapter 4, exercise 61, page 150):

61. Prove that if \( m/n \), \( m'/n' \), and \( m''/n'' \) are consecutive elements of \( \mathcal{F}_N \), then

$$\begin{aligned} m'' &= \lfloor(n+N)/n'\rfloor m' - m \\ n'' &= \lfloor(n+N)/n'\rfloor n' - n \\ \end{aligned}$$

(This recurrence allows us to compute the elements of \( \mathcal{F}_N \) in order, starting with \( \frac{0}{1} \) and \( \frac{1}{N} \).)

Let’s try it:

def farey_sequence(N):
    """
    Generate all pairs (p,q) in lowest terms such that 0 <= p/q <= 1 and q <= N,
    in increasing order of p/q.

    Implementation based on the Graham, Knuth, and Patashnik exercise above.
    """
    mprev = 0
    nprev = 1
    m = 1
    n = N
    yield mprev,nprev
    yield m,n
    while m < n:
        r = math.floor((nprev+N)/n)
        mnext = r*m - mprev
        nnext = r*n - nprev
        mprev = m
        nprev = n
        m = mnext
        n = nnext
        yield m,n
    
list(farey_sequence(7))
[(0, 1),
 (1, 7),
 (1, 6),
 (1, 5),
 (1, 4),
 (2, 7),
 (1, 3),
 (2, 5),
 (3, 7),
 (1, 2),
 (4, 7),
 (3, 5),
 (2, 3),
 (5, 7),
 (3, 4),
 (4, 5),
 (5, 6),
 (6, 7),
 (1, 1)]

So let’s graph the denominator and worst-case error for the intervals surrounding the closest members of the Farey sequence:

import matplotlib.pyplot as plt
%matplotlib inline

def segments(x,y):
    """
    Given x,y as N x k matrices, 
    with each row of k values representing individual continuous segments of a line,
    return xs, ys as vectors of length (Nk + N - 1) representing a plottable series of points,
    with NaN values as spacers.
    """
    N,k = x.shape
    nans = np.zeros((N,1))
    nans[:] = np.nan
    xaug = np.hstack([x,nans])
    yaug = np.hstack([y,nans])
    return (xaug.reshape(N*k+N,1)[:-1,0],
            yaug.reshape(N*k+N,1)[:-1,0])

def graph_farey_sequence(N, plot_points=False):
    fig,axes = plt.subplots(figsize=(8,6), nrows=2, facecolor='white',
                            gridspec_kw=dict(hspace=0.05))
    fN = np.array(list(farey_sequence(N)))
    h = fN[:,0]
    k = fN[:,1]
    x = h/k
    mid = (x[:-1] + x[1:])/2
    # define intervals [x0,x1] around x
    x0 = np.hstack([0, mid])
    x1 = np.hstack([mid, 1])
    
    ax = axes[0]
    denx, deny = segments(np.vstack([x0,x1]).T,
                          np.vstack([k,k]).T)
    ax.plot(denx,deny)
    if plot_points:
        ax.plot(x,k,'.')
    ax.set_ylabel('Denominator')
    
    ax = axes[1]
    xstack = np.vstack([x0,x,x1]).T
    errstack = np.abs((xstack.T-x).T)
    errx, erry = segments(xstack, errstack)
    ax.plot(errx,erry)
    ax.set_ylabel('Error')
    
    for ax in axes:
        ax.set_xlim(0,1)
        if ax != axes[-1]:
            ax.set_xticklabels([])
    axes[0].set_title('Farey sequence $\\mathcal{F}_N$, with $N=%d$' % N)
    
    
graph_farey_sequence(7, plot_points=True)

OK, let’s look at the graph for \( \mathcal{F}_{256} \):

graph_farey_sequence(256)

Well, that’s quite the trippy pair of fractal graphs. The error graph shows us that there are two peaks at the ends which have error up to 1/512; in the middle, there’s a maximum error of 1/1020 (approximately 1/1024), and near each rational value \( p/q \) with \( q \le 256 \), the maximum error looks approximately \( \frac{1}{512q} \). (I’ll leave confirmation and a proof as an exercise for the reader.)

Anyway, that’s continued fraction PWM for you.

Finding numerical patterns

I recently had an unknown function of \( N \) that was the result of a complicated calculation, but I knew — for various reasons that would take too long to describe here — that it was a rational function with a low-degree numerator and denominator.

And I could calculate it empirically for some values of \( N \); here are a few sample values, listed out to the 6th decimal place:

 f(4) = 1.400000
 f(6) = 0.942857
 f(8) = 0.714286
f(10) = 0.575758
f(12) = 0.482517
f(14) = 0.415385

I knew \( f(4) \) was probably 7/5, but wasn’t sure about the others. The easiest way to get a sense of these patterns was to use continued fraction approximation to help me:

for N,f in [( 4,1.400000),
            ( 6,0.942857),
            ( 8,0.714286),
            (10,0.575758),
            (12,0.482517),
            (14,0.415385)]:
    coeffs, h, k = cfrac_approx(f, kmax=10000)
    print("N=%2d -> %f ~= %d/%d" % (N,f,h,k))
N= 4 -> 1.400000 ~= 7/5
N= 6 -> 0.942857 ~= 33/35
N= 8 -> 0.714286 ~= 5/7
N=10 -> 0.575758 ~= 19/33
N=12 -> 0.482517 ~= 69/143
N=14 -> 0.415385 ~= 27/65

No immediate pattern there, but for \( N=12 \) you might notice that the denominator \( 143 = N^2 - 1 \); if I use \( N^2-1 \) as a denominator for the other values of \( N \), I get

$$\begin{aligned} N=4:\quad& f(N) = 7/5 &&= 21/15 \\ N=6:\quad& f(N) &&= 33/35 \\ N=8:\quad& f(N) = 5/7 &&= 45/63 \\ N=10:\quad& f(N) = 19/33 &&= 57/99 \\ N=12:\quad& f(N) &&= 69/143 \\ N=14:\quad& f(N) = 27/65 &&= 81/195 \\ \end{aligned}$$

and it’s a short jump from there to notice that the numerator fits \( 6N-3 \), giving me

$$f(N) = \frac{6N-3}{N^2-1}.$$
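And a quick check that the conjectured formula reproduces the empirical values:

for N in [4, 6, 8, 10, 12, 14]:
    print('f(%2d) = %f' % (N, (6*N - 3)/(N*N - 1)))   # matches the table above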

I couldn’t prove this was correct, but I had reasons to believe it, and it matched my empirical calculations. And that’s the difference that sets apart applied mathematicians from their theoretical counterparts....

Wrapup

We looked at the use of continued fractions for approximating any real number. An algorithm that determines the continued fraction coefficients approximating a particular value \( x \), along with the rational number \( h/k \) equal to the truncated continued fraction, is as follows:

  • Initialization: $$\begin{aligned} x_{-2},\, h_{-2},\, k_{-2} &= 1, 0, 1 \\ x_{-1},\, h_{-1},\, k_{-1} &= x, 1, 0 \\ \end{aligned}$$
  • Iteration: Repeat for each successive value of \( n=0,1,2,\ldots \) until the remainder \( r_n \) reaches zero, or the denominator \( k_n \) becomes larger than you’d like: $$\begin{aligned} q_n &= \lfloor x_{n-1} \rfloor \\ r_n &= x_{n-1} - q_n \\ x_n &= \frac{1}{r_n} = \frac{1}{x_{n-1} - q_n} \\ h_n &= q_nh_{n-1} + h_{n-2} \\ k_n &= q_nk_{n-1} + k_{n-2} \\ \end{aligned}$$
  • End:
    • The values \( q_0, q_1, \ldots , q_n \) are the coefficients of the continued fraction \( [q_0; q_1, \ldots, q_n] \)
    • This continued fraction is equal to the convergent \( h_n/k_n \)
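Here’s that recipe once more, as a compact generator (convergents is a new name, not part of the earlier code; it reuses the math import from before):

def convergents(x, nmax):
    # yield (q_n, h_n, k_n) per the recurrence above
    hprev, kprev = 0, 1   # h_{-2}, k_{-2}
    h, k = 1, 0           # h_{-1}, k_{-1}
    for _ in range(nmax):
        q = math.floor(x)
        hprev, h = h, q*h + hprev
        kprev, k = k, q*k + kprev
        yield q, h, k
        r = x - q
        if r == 0:
            break
        x = 1/r

list(convergents(math.exp(1), 5))   # [(2, 2, 1), (1, 3, 1), (2, 8, 3), (1, 11, 4), (1, 19, 7)]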

The successive rational numbers \( h_0/k_0, h_1/k_1, \ldots \) are called the convergents of \( x \), because they converge to \( x \) as \( n \to \infty. \) (Rational values of \( x \) are equal to finite continued fractions; irrational values of \( x \) are the limit of infinite continued fractions.) The convergents are the best rational approximations, in the sense that to find any closer approximation to \( x \), you have to increase the denominator.

The error of any convergent satisfies \( \left| x - h_n/k_n\right| < \frac{1}{{k_n}^2} \) — that is, it shrinks quadratically with the denominator.

We looked at two applications of continued fractions:

  • Determining PWM period and duty cycle to produce the nearest average voltage to any desired fraction of the PWM supply voltage, with this fraction representable as the rational number \( \frac{D}{P+1} \) where \( D \) and \( P \) are the duty cycle and period registers in typical microcontroller PWM modules. This required a minor correction to the continued fraction algorithm, to find the fraction \( h/k \) that best approximates \( x \) but has a denominator \( k \le k_{\max} \).

  • Determining rational numbers from empirical calculations that are suspected to be rational numbers.

Thanks for reading; here’s wishing you all season’s greetings and a Happy New Year!


© 2023 Jason M. Sachs, all rights reserved.


