Linear Feedback Shift Registers for the Uninitiated, Part XII: Spread-Spectrum Fundamentals

Jason Sachs December 29, 2017

Last time we looked at the use of LFSRs for pseudorandom number generation, or PRNG, and saw two things:

  • the use of LFSR state for PRNG has undesirable serial correlation and frequency-domain properties
  • the use of single bits of LFSR output has good frequency-domain properties, and its autocorrelation values are so close to zero that they are actually better than a statistically random bit stream (see the sketch below)

The unusually good correlation properties...
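As a quick demonstration of that claim (a sketch of mine, not code from the article): take a 5-bit Galois LFSR built on the primitive polynomial \( x^5 + x^3 + 1 \), map its output bits to ±1, and compute the circular autocorrelation of the resulting 31-bit sequence. Every nonzero shift comes out to exactly −1, the well-known m-sequence property.

    #include <stdio.h>
    #include <stdint.h>

    #define N 31  /* period of a 5-bit maximal-length LFSR */

    int main(void) {
        int s[N];
        uint8_t lfsr = 1;
        for (int i = 0; i < N; ++i) {
            int bit = lfsr & 1;        /* output = least significant bit */
            s[i] = bit ? 1 : -1;       /* map {0,1} to {+1,-1} */
            lfsr >>= 1;
            if (bit) lfsr ^= 0x14;     /* feedback taps for x^5 + x^3 + 1 */
        }
        /* circular autocorrelation: +31 at shift 0, -1 at every other shift */
        for (int tau = 0; tau < N; ++tau) {
            int sum = 0;
            for (int i = 0; i < N; ++i)
                sum += s[i] * s[(i + tau) % N];
            printf("shift %2d: %3d\n", tau, sum);
        }
        return 0;
    }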


Linear Feedback Shift Registers for the Uninitiated, Part XI: Pseudorandom Number Generation

Jason Sachs December 20, 2017

Last time we looked at the use of LFSRs in counters and position encoders.

This time we’re going to look at pseudorandom number generation, and why you may — or may not — want to use LFSRs for this purpose.

But first — an aside:

Science Fair 1983

When I was in fourth grade, my father bought a Timex/Sinclair 1000. This was one of several personal computers introduced in 1982, along with the Commodore 64. The...


Linear Feedback Shift Registers for the Uninitiated, Part X: Counters and Encoders

Jason Sachs December 10, 2017

Last time we looked at LFSR output decimation and the computation of trace parity.

Today we are starting to look in detail at some applications of LFSRs, namely counters and encoders.

Counters

I mentioned counters briefly in the article on easy discrete logarithms. The idea here is that the propagation delay in an LFSR is smaller than in a counter, since the logic to compute the next LFSR state is simpler than in an ordinary counter. All you need to construct an LFSR is
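To make that comparison concrete, here's a minimal sketch (mine, not the article's implementation) of both next-state computations. The binary counter's increment has to propagate a carry through the whole word; the Galois LFSR needs only a shift and a conditional XOR. The 0xB400 mask corresponds to \( x^{16} + x^{14} + x^{13} + x^{11} + 1 \), a standard maximal-length choice.

    #include <stdint.h>

    /* Ordinary counter: the carry may ripple through all 16 bits. */
    static uint16_t counter_next(uint16_t c) {
        return c + 1;
    }

    /* 16-bit Galois LFSR: one shift plus a conditional XOR, with no
       carry propagation. Cycles through all 65535 nonzero states. */
    static uint16_t lfsr_next(uint16_t s) {
        uint16_t lsb = s & 1u;
        s >>= 1;
        return lsb ? (uint16_t)(s ^ 0xB400u) : s;
    }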


Linear Feedback Shift Registers for the Uninitiated, Part IX: Decimation, Trace Parity, and Cyclotomic Cosets

Jason Sachs December 3, 2017

Last time we looked at matrix methods and how they can be used to analyze two important aspects of LFSRs:

  • time shifts
  • state recovery from LFSR output

In both cases we were able to use a finite field or bitwise approach to arrive at the same result as a matrix-based approach. The matrix approach is more expensive in terms of execution time and memory storage, but in some cases is conceptually simpler.
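To make the matrix view concrete, here's one possible encoding (a sketch with its own conventions, not necessarily the article's): pack the rows of the 16×16 update matrix over \( GF(2) \) into 16-bit words, so that one LFSR step is a matrix-vector product in which each output bit is the parity of a row ANDed with the state. Squaring the matrix then gives two steps at once, which is what makes matrix methods attractive for large time shifts.

    #include <stdint.h>

    /* XOR-reduce: parity of a 16-bit word, i.e. a GF(2) dot product
       after an AND. */
    static uint16_t gf2_parity(uint16_t v) {
        v ^= v >> 8; v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
        return v & 1u;
    }

    /* y = A*x over GF(2); A[i] is row i packed into a word. */
    static uint16_t gf2_matvec(const uint16_t A[16], uint16_t x) {
        uint16_t y = 0;
        for (int i = 0; i < 16; ++i)
            y |= (uint16_t)(gf2_parity(A[i] & x) << i);
        return y;
    }

    /* Update matrix for the Galois step next = (s >> 1) ^ (s & 1 ? M : 0):
       row j reads bit j+1 of the state, plus bit 0 whenever the feedback
       mask M has bit j set. */
    static void build_update_matrix(uint16_t A[16], uint16_t M) {
        for (int j = 0; j < 16; ++j) {
            A[j] = (j < 15) ? (uint16_t)(1u << (j + 1)) : 0u;
            if (M & (1u << j)) A[j] |= 1u;
        }
    }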

This article will be covering some concepts that are useful for studying the...


Linear Feedback Shift Registers for the Uninitiated, Part VIII: Matrix Methods and State Recovery

Jason Sachs November 21, 2017

Last time we looked at a dsPIC implementation of LFSR updates. Now we’re going to go back to basics and look at some matrix methods, which is the third approach to represent LFSRs that I mentioned in Part I. And we’re going to explore the problem of converting from LFSR output to LFSR state.

Matrices: Beloved Historical Dregs

Elwyn Berlekamp’s 1966 paper Non-Binary BCH Decoding covers some work on


Linear Feedback Shift Registers for the Uninitiated, Part VII: LFSR Implementations, Idiomatic C, and Compiler Explorer

Jason Sachs November 13, 2017

The last four articles were on algorithms used to compute with finite fields and shift registers:

Today we’re going to come back down to earth and show how to implement LFSR updates on a microcontroller. We’ll also talk a little bit about something called “idiomatic C” and a neat online tool for experimenting with the C compiler.


Linear Feedback Shift Registers for the Uninitiated, Part VI: Sing Along with the Berlekamp-Massey Algorithm

Jason Sachs October 18, 2017

The last two articles were on discrete logarithms in finite fields — in practical terms, how to take the state \( S \) of an LFSR and its characteristic polynomial \( p(x) \) and figure out how many shift steps are required to go from the state 000...001 to \( S \). If we consider \( S \) as a polynomial bit vector such that \( S = x^k \bmod p(x) \), then this is equivalent to the task of figuring out \( k \) from \( S \) and \( p(x) \).
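For small fields there's a brute-force baseline (my sketch; the articles in this series are about doing far better): start from the element 1, keep multiplying by \( x \) modulo \( p(x) \), and count steps until you reach \( S \). The degree-16 polynomial below, \( x^{16} + x^{14} + x^{13} + x^{11} + 1 \), is a standard maximal-length choice, assumed here just for illustration.

    #include <stdint.h>

    /* Multiply by x in GF(2)[x]/p(x): shift left, and if an x^16 term
       was produced, reduce it using x^16 = x^14 + x^13 + x^11 + 1. */
    static uint16_t mulx(uint16_t a) {
        uint16_t hi = a & 0x8000u;
        a <<= 1;
        if (hi) a ^= 0x6801u;
        return a;
    }

    /* Find k in [0, 2^16 - 2] with x^k = S mod p(x), by exhaustion.
       O(2^N) time: fine at N = 16, hopeless at N = 64. */
    static long discrete_log(uint16_t S) {
        uint16_t v = 1;
        for (long k = 0; k < 65535; ++k) {
            if (v == S) return k;
            v = mulx(v);
        }
        return -1; /* S == 0 has no logarithm */
    }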

This time we’re tackling something...


Linear Feedback Shift Registers for the Uninitiated, Part V: Difficult Discrete Logarithms and Pollard's Kangaroo Method

Jason Sachs October 1, 2017

Last time we talked about discrete logarithms which are easy when the group in question has an order which is a smooth number, namely the product of small prime factors. Just as a reminder, the goal here is to find \( k \) if you are given some finite multiplicative group (or a finite field, since it has a multiplicative group) with elements \( y \) and \( g \), and you know you can express \( y = g^k \) for some unknown integer \( k \). The value \( k \) is the discrete logarithm of \( y \)...
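The same brute-force baseline in a plain integer group (a toy sketch of mine): try successive powers of \( g \) until one equals \( y \). Its cost is linear in the group order, which is exactly why the kangaroo method, with its square-root cost over a known range of \( k \), is worth the trouble.

    #include <stdio.h>

    /* Brute-force discrete log: find k with y = g^k (mod p). */
    static long dlog(long g, long y, long p) {
        long t = 1;
        for (long k = 0; k < p - 1; ++k) {
            if (t == y) return k;
            t = (t * g) % p;
        }
        return -1; /* y is not a power of g */
    }

    int main(void) {
        /* 3 generates the multiplicative group mod 17; 3^13 = 12 (mod 17) */
        printf("k = %ld\n", dlog(3, 12, 17)); /* prints k = 13 */
        return 0;
    }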


Linear Feedback Shift Registers for the Uninitiated, Part IV: Easy Discrete Logarithms and the Silver-Pohlig-Hellman Algorithm

Jason Sachs September 16, 2017

Last time we talked about the multiplicative inverse in finite fields, which is rather boring and mundane, and has an easy solution with Blankinship’s algorithm.

Discrete logarithms, on the other hand, are much more interesting, and this article covers only the tip of the iceberg.

What is a Discrete Logarithm, Anyway?

Regular logarithms are something that you’re probably familiar with: let’s say you have some number \( y = b^x \) and you know \( y \) and \( b \) but...


Linear Feedback Shift Registers for the Uninitiated, Part III: Multiplicative Inverse, and Blankinship's Algorithm

Jason Sachs September 9, 2017

Last time we talked about basic arithmetic operations in the finite field \( GF(2)[x]/p(x) \) — addition, multiplication, raising to a power, shift-left and shift-right — as well as how to determine whether a polynomial \( p(x) \) is primitive. If a polynomial \( p(x) \) is primitive, it can be used to define an LFSR whose coefficients correspond to the 1 terms in \( p(x) \), and which has the maximal period of \( 2^N-1 \), covering all bit patterns except the all-zero...
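A quick way to see the primitive/maximal-length connection (a sketch, using the common 8-bit feedback mask 0xB8 for \( x^8 + x^6 + x^5 + x^4 + 1 \)) is to count states until the LFSR returns to where it started; a primitive polynomial of degree \( N \) yields exactly \( 2^N - 1 \), while a non-primitive one falls short.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint8_t lfsr = 1;
        const uint8_t start = 1;
        unsigned period = 0;
        do {
            uint8_t lsb = lfsr & 1u;
            lfsr >>= 1;
            if (lsb) lfsr ^= 0xB8u;   /* primitive: period is maximal */
            ++period;
        } while (lfsr != start);
        printf("period = %u\n", period); /* prints 255 = 2^8 - 1 */
        return 0;
    }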


Chebyshev Approximation and How It Can Help You Save Money, Win Friends, and Influence People

Jason Sachs September 30, 2012

Well... maybe that's a stretch. I don't think I can recommend anything to help you win friends. Not my forte.

But I am going to try to convince you why you should know about Chebyshev approximation, which is a technique for figuring out how you can come as close as possible to computing the result of a mathematical function, with a minimal amount of design effort and CPU power. Let's explore two use cases (a short code sketch follows the list):

  • Amy has a low-power 8-bit microcontroller and needs to compute \( \sqrt{x} \)...
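Here's that sketch (mine, not from the article; it's the standard recipe, with the function, interval, and degree chosen arbitrarily): sample the function at the Chebyshev nodes to get the coefficients, then evaluate the series with Clenshaw's recurrence.

    #include <stdio.h>
    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define DEG 5          /* polynomial degree */
    #define M  (DEG + 1)   /* number of coefficients */

    /* Chebyshev coefficients of f on [a,b], from samples at the
       Chebyshev nodes. c[0] carries twice the mean, per the usual
       convention of halving c[0] during evaluation. */
    static void cheb_fit(double (*f)(double), double a, double b,
                         double c[M]) {
        double fx[M];
        for (int k = 0; k < M; ++k) {
            double t = cos(M_PI * (k + 0.5) / M);          /* node in [-1,1] */
            fx[k] = f(0.5 * (b - a) * t + 0.5 * (b + a));  /* map to [a,b] */
        }
        for (int j = 0; j < M; ++j) {
            double s = 0.0;
            for (int k = 0; k < M; ++k)
                s += fx[k] * cos(M_PI * j * (k + 0.5) / M);
            c[j] = 2.0 * s / M;
        }
    }

    /* Evaluate c[0]/2 + sum_{j>=1} c[j] T_j(t) via Clenshaw's recurrence. */
    static double cheb_eval(const double c[M], double a, double b, double x) {
        double t = (2.0 * x - a - b) / (b - a);   /* map [a,b] -> [-1,1] */
        double b1 = 0.0, b2 = 0.0;
        for (int j = M - 1; j >= 1; --j) {
            double tmp = 2.0 * t * b1 - b2 + c[j];
            b2 = b1; b1 = tmp;
        }
        return t * b1 - b2 + 0.5 * c[0];
    }

    int main(void) {
        double c[M];
        cheb_fit(sqrt, 1.0, 4.0, c);   /* approximate sqrt(x) on [1,4] */
        printf("%f vs %f\n", cheb_eval(c, 1.0, 4.0, 2.0), sqrt(2.0));
        return 0;
    }

Amy's \( \sqrt{x} \) on a reduced interval is exactly the kind of job the evaluation loop handles with a handful of multiply-adds.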

Ten Little Algorithms, Part 1: Russian Peasant Multiplication

Jason Sachs March 22, 2015

This blog needs some short posts to balance out the long ones, so I thought I’d cover some of the algorithms I’ve used over the years. Like the Euclidean algorithm and Extended Euclidean algorithm and Newton’s method — except those you should know already, and if not, you should be locked in a room until you do. Someday one of them may save your life. Well, you never know.
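And here's the algorithm of the title (a sketch; the article walks through it properly): halve one operand, double the other, and accumulate the doubled value whenever the halved one is odd. On a machine with no hardware multiplier, this is just shift-and-add.

    #include <stdint.h>

    /* Russian peasant multiplication. */
    static uint32_t peasant_mul(uint32_t a, uint32_t b) {
        uint32_t product = 0;
        while (a != 0) {
            if (a & 1u)
                product += b;  /* odd: accumulate the current double */
            a >>= 1;           /* halve */
            b <<= 1;           /* double */
        }
        return product;
    }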

Other articles in this series:

  • Part 1:

How to Estimate Encoder Velocity Without Making Stupid Mistakes: Part II (Tracking Loops and PLLs)

Jason Sachs November 18, 2013

Yeeehah! Finally we're ready to tackle some more clever ways to figure out the velocity of a position encoder. In part I, we looked at the basics of velocity estimation. Then in my last article, I talked a little about what's necessary to evaluate different kinds of algorithms. Now it's time to start describing them. We'll cover tracking loops and phase-locked loops in this article, and Luenberger observers in part III.
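As a preview, here's the skeleton of a tracking loop (a sketch of mine, not the article's code): estimated velocity is integrated into estimated position, and the position error steers both, exactly like a PLL's phase detector and loop filter. The closed-loop poles satisfy \( s^2 + k_p s + k_i = 0 \), so the gains set bandwidth and damping directly.

    typedef struct {
        float pos_est;  /* estimated position */
        float vel_est;  /* estimated velocity */
        float kp, ki;   /* loop gains */
    } tracking_loop_t;

    /* One update at sample interval dt; returns the velocity estimate. */
    static float tracking_loop_step(tracking_loop_t *tl,
                                    float pos_meas, float dt) {
        float err = pos_meas - tl->pos_est;               /* "phase detector" */
        tl->vel_est += tl->ki * err * dt;                 /* integral path */
        tl->pos_est += (tl->vel_est + tl->kp * err) * dt; /* integrate velocity */
        return tl->vel_est;
    }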

But first we need a moderately simple, but interesting, example...


Elliptic Curve Cryptography

Mike November 16, 2015

Secure online communications require encryption. One standard is AES (Advanced Encryption Standard) from NIST. But for this to work, both sides need the same key for encryption and decryption. This is called Private Key encryption. Public Key encryption is used to create a private key between two sides that have not previously communicated. Compared to the history of encryption, Public Key methods are very recent, having been started in the 1970s. Elliptic...
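A toy sketch of that key-agreement idea in plain modular arithmetic (not the elliptic-curve version this article develops, and with numbers far too small to be secure): each side keeps a secret exponent, exchanges only \( g^a \) and \( g^b \), and both arrive at the same shared key \( g^{ab} \bmod p \).

    #include <stdio.h>
    #include <stdint.h>

    /* Square-and-multiply: g^k mod p, for p < 2^31 so products fit. */
    static uint64_t powmod(uint64_t g, uint64_t k, uint64_t p) {
        uint64_t r = 1;
        g %= p;
        while (k > 0) {
            if (k & 1u) r = (r * g) % p;
            g = (g * g) % p;
            k >>= 1;
        }
        return r;
    }

    int main(void) {
        const uint64_t p = 2147483647u, g = 7;   /* public parameters */
        const uint64_t a = 1234567, b = 7654321; /* each side's secret */
        uint64_t A = powmod(g, a, p);            /* Alice publishes A */
        uint64_t B = powmod(g, b, p);            /* Bob publishes B */
        /* both compute g^(a*b) mod p without ever revealing a or b */
        printf("%llu == %llu\n",
               (unsigned long long)powmod(B, a, p),
               (unsigned long long)powmod(A, b, p));
        return 0;
    }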


Bad Hash Functions and Other Stories: Trapped in a Cage of Irresponsibility and Garden Rakes

Jason Sachs January 28, 2014

I was recently using the publish() function in MATLAB to develop some documentation, and I ran into a problem caused by a bad hash function.

In a resource-limited embedded system, you aren't likely to run into hash functions. They have three major applications: cryptography, data integrity, and data structures. In all these cases, hash functions are used to take some type of data, and deterministically boil it down to a fixed-size "fingerprint" or "hash" of the original data, such that...
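A toy illustration of the fingerprint idea, and of how a hash goes bad (my sketch, unrelated to MATLAB's internals): a hash that merely sums bytes collides on any anagram, while FNV-1a mixes each byte's position into the result.

    #include <stdio.h>
    #include <stdint.h>

    /* Deliberately bad: order-insensitive, so anagrams collide. */
    static uint32_t bad_hash(const char *s) {
        uint32_t h = 0;
        while (*s) h += (uint8_t)*s++;
        return h;
    }

    /* FNV-1a: XOR each byte in, then multiply by the FNV prime. */
    static uint32_t fnv1a(const char *s) {
        uint32_t h = 2166136261u;
        while (*s) { h ^= (uint8_t)*s++; h *= 16777619u; }
        return h;
    }

    int main(void) {
        printf("bad: %08x %08x\n", bad_hash("listen"), bad_hash("silent"));
        printf("fnv: %08x %08x\n", fnv1a("listen"),   fnv1a("silent"));
        return 0;
    }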


Slew Rate Limiters: Nonlinear and Proud of It!

Jason Sachs October 6, 2014

I first learned about slew rate limits when I was in college. Usually the subject comes up when talking about the nonideal behavior of op-amps. In order for the op-amp output to swing up and down quickly, it has to charge up an internal capacitor with a transistor circuit that’s limited in its current capability. So the slew rate limit is \( \frac{dV}{dt} = \frac{I_{\rm max}}{C} \). And as long as the amplitude and frequency aren’t too high, you won’t notice it. But try to...
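The discrete-time version of the same nonlinearity fits in a few lines (a sketch): the output follows the input, but by no more than max_rate · dt per sample. For small, slow signals it is invisible; for big, fast ones it turns sine waves into triangles.

    /* Slew rate limiter: step the output toward the input, clamped
       to at most max_rate units per second. */
    static float slew_limit(float out_prev, float in,
                            float max_rate, float dt) {
        float max_step = max_rate * dt;
        float step = in - out_prev;
        if (step >  max_step) step =  max_step;
        if (step < -max_step) step = -max_step;
        return out_prev + step;
    }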


Ten Little Algorithms, Part 5: Quadratic Extremum Interpolation and Chandrupatla's Method

Jason Sachs November 11, 2015

Other articles in this series:

Today we will be drifting back into the topic of numerical methods, and look at an algorithm that takes in a series of discretely-sampled data points, and estimates the maximum value of...
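The core of it fits in one formula (a sketch, assuming three equally spaced samples with the middle one largest): fit a parabola through \( (x_1 - h, y_0) \), \( (x_1, y_1) \), \( (x_1 + h, y_2) \) and take the abscissa of its vertex.

    /* Vertex of the parabola through three equally spaced samples:
           x* = x1 + (h/2) * (y0 - y2) / (y0 - 2*y1 + y2)
       The denominator is negative near a maximum; a guard against
       (near-)zero denominators is omitted for brevity. */
    static double quad_extremum(double x1, double h,
                                double y0, double y1, double y2) {
        return x1 + 0.5 * h * (y0 - y2) / (y0 - 2.0 * y1 + y2);
    }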


Round Round Get Around: Why Fixed-Point Right-Shifts Are Just Fine

Jason Sachs November 22, 2016

Today’s topic is rounding in embedded systems, or more specifically, why you don’t need to worry about it in many cases.

One of the issues faced in computer arithmetic is that exact arithmetic requires an ever-increasing bit length to avoid overflow. Adding or subtracting two 16-bit integers produces a 17-bit result; multiplying two 16-bit integers produces a 32-bit result. In fixed-point arithmetic we typically multiply and shift right; for example, if we wanted to multiply some...
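The rounding question in one example (a sketch): a Q15 × Q15 product has 30 fractional bits, so we shift right by 15 to get back to Q15. A bare shift truncates toward minus infinity; adding half an LSB first rounds to nearest.

    #include <stdint.h>

    /* Q15 x Q15 -> Q15, truncating (rounds toward -infinity). */
    static int16_t q15_mul_trunc(int16_t a, int16_t b) {
        int32_t p = (int32_t)a * b;
        return (int16_t)(p >> 15);
    }

    /* Q15 x Q15 -> Q15, round-to-nearest: add half an LSB first.
       (The lone overflow case a = b = -32768 is ignored here.) */
    static int16_t q15_mul_round(int16_t a, int16_t b) {
        int32_t p = (int32_t)a * b;
        return (int16_t)((p + (1 << 14)) >> 15);
    }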


Second-Order Systems, Part I: Boing!!

Jason Sachs October 29, 2014

I’ve already written about the unexciting (but useful) 1st-order system, and about slew-rate limiting. So now it’s time to cover second-order systems.

The most common second-order systems are RLC circuits and spring-mass-damper systems.

Spring-mass-damper systems are fairly common; you’ve seen these before, whether you realize it or not. One household example of these is the spring doorstop (BOING!!):

(For what it’s worth: the spring...


Fluxions for Fun and Profit: Euler, Trapezoidal, Verlet, or Runge-Kutta?

Jason Sachs September 30, 2013

Today we're going to take another diversion from embedded systems, and into the world of differential equations, modeling, and computer simulation.

DON'T PANIC!

First of all, just pretend I didn't bring up anything complicated. We're exposed to the effects of differential equations every day, whether we realize it or not. Your car speedometer and odometer are related by a differential equation, and whether you like math or not, you probably have some comprehension of what's going on: you...
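That speedometer/odometer relationship makes a tidy example of the crudest method in the title (a sketch with a made-up speed profile): forward Euler just accumulates speed × dt, and its O(dt) error is already visible.

    #include <stdio.h>

    int main(void) {
        const double dt = 0.1;   /* seconds per step */
        double dist = 0.0;
        for (int i = 0; i < 100; ++i) {       /* simulate 10 seconds */
            double t = i * dt;
            double speed = 20.0 + 5.0 * t;    /* hypothetical profile, m/s */
            dist += speed * dt;               /* forward Euler: x += v*dt */
        }
        /* the exact integral is 450 m; Euler's O(dt) error gives ~447.5 */
        printf("odometer: %.1f m\n", dist);
        return 0;
    }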