Linear Feedback Shift Registers for the Uninitiated, Part XII: Spread-Spectrum Fundamentals

Jason Sachs December 29, 2017 · 1 comment

Last time we looked at the use of LFSRs for pseudorandom number generation, or PRNG, and saw two things:

  • the use of LFSR state for PRNG has undesirable serial correlation and frequency-domain properties
  • the use of single bits of LFSR output has good frequency-domain properties, and its autocorrelation values are so close to zero that they are actually better than those of a statistically random bit stream

The unusually-good correlation properties...
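
The claim in the second bullet is easy to check numerically. Here is a minimal sketch (mine, not the article's code) that generates one full period of a 10-bit Fibonacci LFSR, assuming the primitive polynomial \( x^{10} + x^7 + 1 \), and computes the circular autocorrelation of the ±1-mapped output bits; every nonzero shift should come out near \( -1/1023 \), much closer to zero than the scatter you would get from genuine coin flips:

    import numpy as np

    def lfsr_bits(nbits=10, taps=(10, 7), seed=1, count=1023):
        """Yield one output bit per shift of a Fibonacci LFSR."""
        state = seed
        for _ in range(count):
            yield state & 1                      # output = LSB of the state
            fb = 0
            for t in taps:
                fb ^= (state >> (t - 1)) & 1     # XOR of the tap bits
            state = (state >> 1) | (fb << (nbits - 1))

    bits = np.fromiter(lfsr_bits(), dtype=float)
    x = 2 * bits - 1                             # map {0, 1} -> {-1, +1}
    for k in (1, 2, 5, 100):                     # circular autocorrelation
        print(k, np.dot(x, np.roll(x, k)) / len(x))   # ~ -1/1023 for k != 0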


Linear Feedback Shift Registers for the Uninitiated, Part XI: Pseudorandom Number Generation

Jason Sachs December 20, 2017

Last time we looked at the use of LFSRs in counters and position encoders.

This time we’re going to look at pseudorandom number generation, and why you may — or may not — want to use LFSRs for this purpose.

But first — an aside:

Science Fair 1983

When I was in fourth grade, my father bought a Timex/Sinclair 1000. This was one of several personal computers introduced in 1982, along with the Commodore 64. The...


Linear Feedback Shift Registers for the Uninitiated, Part X: Counters and Encoders

Jason Sachs December 9, 2017

Last time we looked at LFSR output decimation and the computation of trace parity.

Today we are starting to look in detail at some applications of LFSRs, namely counters and encoders.

Counters

I mentioned counters briefly in the article on easy discrete logarithms. The idea here is that the propagation delay in an LFSR is smaller than in an ordinary counter, since the logic needed to compute the next LFSR state is simpler. All you need to construct an LFSR is...
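
As a very rough sketch of the idea (mine, not the article's code), here is a Galois-form LFSR update used as a counter. Each "increment" is a single shift-and-conditional-XOR, and after \( k \) increments the state equals \( x^k \bmod p(x) \), which is why reading the count back out is the discrete-logarithm problem mentioned above. The polynomial \( x^9 + x^4 + 1 \) is assumed primitive, giving a period of \( 2^9 - 1 = 511 \):

    POLY, NBITS = 0x211, 9            # x^9 + x^4 + 1

    def lfsr_step(state):
        """One multiply-by-x step in GF(2)[x] mod POLY (Galois form)."""
        state <<= 1
        if state & (1 << NBITS):      # an x^9 term appeared: reduce mod p(x)
            state ^= POLY
        return state

    state, period = 1, 0
    while True:                       # count how many steps return to 1
        state, period = lfsr_step(state), period + 1
        if state == 1:
            break
    print(period)                     # 511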


Linear Feedback Shift Registers for the Uninitiated, Part IX: Decimation, Trace Parity, and Cyclotomic Cosets

Jason Sachs December 3, 2017

Last time we looked at matrix methods and how they can be used to analyze two important aspects of LFSRs:

  • time shifts
  • state recovery from LFSR output

In both cases we were able to use a finite field or bitwise approach to arrive at the same result as a matrix-based approach. The matrix approach is more expensive in terms of execution time and memory storage, but in some cases is conceptually simpler.
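
To make that comparison concrete, here is a small sketch (my own illustration, not from the article) showing the same LFSR update done both ways over \( GF(2) \): bitwise, and as multiplication by the companion matrix of the degree-4 primitive polynomial \( x^4 + x + 1 \). The two stay in lockstep:

    import numpy as np

    POLY, N = 0b10011, 4                  # x^4 + x + 1

    def step_bitwise(state):
        state <<= 1
        if state & (1 << N):
            state ^= POLY                 # reduce mod p(x)
        return state

    # Companion matrix: column j holds x * x^j mod p(x) as a bit vector
    M = np.zeros((N, N), dtype=int)
    for j in range(N):
        col = step_bitwise(1 << j)
        for i in range(N):
            M[i, j] = (col >> i) & 1

    state = 0b0001
    vec = np.array([(state >> i) & 1 for i in range(N)])
    for _ in range(6):
        state = step_bitwise(state)
        vec = M.dot(vec) % 2              # same update, matrix form (mod 2)
        assert state == sum(b << i for i, b in enumerate(vec))
    print("bitwise and matrix updates agree")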

This article will be covering some concepts that are useful for studying the...


Linear Feedback Shift Registers for the Uninitiated, Part VIII: Matrix Methods and State Recovery

Jason Sachs November 21, 2017

Last time we looked at a dsPIC implementation of LFSR updates. Now we’re going to go back to basics and look at some matrix methods, which is the third approach to represent LFSRs that I mentioned in Part I. And we’re going to explore the problem of converting from LFSR output to LFSR state.

Matrices: Beloved Historical Dregs

Elwyn Berlekamp’s 1966 paper Non-Binary BCH Decoding covers some work on


Linear Feedback Shift Registers for the Uninitiated, Part VII: LFSR Implementations, Idiomatic C, and Compiler Explorer

Jason Sachs November 13, 2017

The last four articles were on algorithms used to compute with finite fields and shift registers.

Today we’re going to come back down to earth and show how to implement LFSR updates on a microcontroller. We’ll also talk a little bit about something called “idiomatic C” and a neat online tool for experimenting with the C compiler.


Lazy Properties in Python Using Descriptors

Jason Sachs November 7, 2017

This is a bit of a side tangent from my normal at-least-vaguely-embedded-related articles, but I wanted to share a moment of enlightenment I had recently about descriptors in Python. The easiest way to explain a descriptor is as a way to outsource attribute lookup and modification.
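
As a minimal sketch of what that looks like in practice (my illustration, not necessarily the article's implementation), here is a descriptor that computes an attribute once on first access and caches the result on the instance:

    class LazyProperty:
        def __init__(self, func):
            self.func = func
            self.name = func.__name__

        def __get__(self, obj, objtype=None):
            if obj is None:                   # accessed on the class itself
                return self
            value = self.func(obj)            # compute once...
            obj.__dict__[self.name] = value   # ...then cache on the instance
            return value                      # (instance dict wins next time)

    class Widget:
        @LazyProperty
        def answer(self):
            print("computing...")
            return 42

    w = Widget()
    print(w.answer)   # prints "computing..." then 42
    print(w.answer)   # 42, no recomputation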

Python has a bunch of “magic” methods that are hooks into various object-oriented mechanisms that let you do all sorts of ridiculously clever things. Whether or not they’re a good idea is another...


Linear Feedback Shift Registers for the Uninitiated, Part VI: Sing Along with the Berlekamp-Massey Algorithm

Jason Sachs October 18, 2017

The last two articles were on discrete logarithms in finite fields — in practical terms, how to take the state \( S \) of an LFSR and its characteristic polynomial \( p(x) \) and figure out how many shift steps are required to go from the state 000...001 to \( S \). If we consider \( S \) as a polynomial bit vector such that \( S = x^k \bmod p(x) \), then this is equivalent to the task of figuring out \( k \) from \( S \) and \( p(x) \).
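
As a tiny illustration of that problem statement (brute force, emphatically not the method this article develops), here is code that recovers \( k \) from \( S \) and \( p(x) \) by repeated multiplication by \( x \), using the degree-4 primitive polynomial \( x^4 + x + 1 \):

    POLY, N = 0b10011, 4                 # x^4 + x + 1

    def mul_x(state):                    # multiply by x, reduce mod p(x)
        state <<= 1
        if state & (1 << N):
            state ^= POLY
        return state

    def discrete_log(S):
        state, k = 1, 0                  # start from the state 000...001
        while state != S:
            state, k = mul_x(state), k + 1
        return k

    print(discrete_log(0b0011))          # x^4 = x + 1 mod p(x), so prints 4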

This time we’re tackling something...


Linear Feedback Shift Registers for the Uninitiated, Part V: Difficult Discrete Logarithms and Pollard's Kangaroo Method

Jason Sachs October 1, 2017

Last time we talked about discrete logarithms which are easy when the group in question has an order which is a smooth number, namely the product of small prime factors. Just as a reminder, the goal here is to find \( k \) if you are given some finite multiplicative group (or a finite field, since it has a multiplicative group) with elements \( y \) and \( g \), and you know you can express \( y = g^k \) for some unknown integer \( k \). The value \( k \) is the discrete logarithm of \( y \)...
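
For reference, here is one standard square-root-time solver for that problem, baby-step giant-step (my own sketch for context; it is not the kangaroo method this article covers, and the toy modulus below is purely illustrative):

    from math import isqrt

    def bsgs(g, y, p):
        """Find k with pow(g, k, p) == y, assuming such a k exists."""
        m = isqrt(p) + 1
        baby = {pow(g, j, p): j for j in range(m)}    # table of g^j, j < m
        factor = pow(g, -m, p)                        # g^(-m) mod p (Python 3.8+)
        gamma = y
        for i in range(m):
            if gamma in baby:
                return i * m + baby[gamma]            # k = i*m + j
            gamma = gamma * factor % p
        raise ValueError("no discrete log found")

    p, g, k = 1000003, 2, 123457                      # p assumed prime
    print(bsgs(g, pow(g, k, p), p))                   # recovers 123457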


Linear Feedback Shift Registers for the Uninitiated, Part IV: Easy Discrete Logarithms and the Silver-Pohlig-Hellman Algorithm

Jason Sachs September 16, 2017

Last time we talked about the multiplicative inverse in finite fields, which is rather boring and mundane, and has an easy solution with Blankinship’s algorithm.
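
For reference, here is a minimal sketch of Blankinship's matrix-style formulation of the extended Euclidean algorithm (my own illustration, shown over the integers modulo a prime rather than over \( GF(2)[x] \), but the idea carries over): carry the Bezout coefficients along with the remainders, and the inverse drops out when the gcd reaches 1:

    def blankinship(a, m):
        r0, x0, y0 = a, 1, 0          # row [a  1  0]
        r1, x1, y1 = m, 0, 1          # row [m  0  1]
        while r1:
            q = r0 // r1              # subtract q times one row from the other
            r0, x0, y0, r1, x1, y1 = r1, x1, y1, r0 - q*r1, x0 - q*x1, y0 - q*y1
        return r0, x0, y0             # gcd, and x0, y0 with a*x0 + m*y0 = gcd

    def modinv(a, m):
        g, x, _ = blankinship(a, m)
        if g != 1:
            raise ValueError("no inverse exists")
        return x % m

    print(modinv(7, 31))              # 9, since 7 * 9 = 63 = 2*31 + 1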

Discrete logarithms, on the other hand, are much more interesting, and this article covers only the tip of the iceberg.

What is a Discrete Logarithm, Anyway?

Regular logarithms are something that you’re probably familiar with: let’s say you have some number \( y = b^x \) and you know \( y \) and \( b \) but...


10 Circuit Components You Should Know

Jason Sachs November 27, 2011 · 1 comment

Chefs have their miscellaneous ingredients, like condensed milk, cream of tartar, and xanthan gum. As engineers, we too have quite a variety of circuit components to pick from, and a good circuit designer should know what's out there. Not just the bread-and-butter ingredients like resistors, capacitors, op-amps, and comparators, but the miscellaneous "gadget" components as well.

Here are ten circuit components you may not have heard of, but which are occasionally quite useful.

1. Multifunction gate (


Byte and Switch (Part 1)

Jason Sachs April 26, 2011 · 12 comments

Imagine for a minute you have an electromagnet, and a microcontroller, and you want to use the microcontroller to turn the electromagnet on and off. Sounds pretty typical, right? We ask this question on our interviews of entry-level electrical engineers: what do you put between the microcontroller and the electromagnet? We used to think this kind of question was too easy, but there are a surprising number of subtleties here (and maybe a surprising number of job candidates who were missing...


Analog-to-Digital Confusion: Pitfalls of Driving an ADC

Jason Sachs November 19, 2011 · 7 comments

Imagine the following scenario: You're a successful engineer (sounds nice, doesn't it!) working on a project with three or four circuit boards. More than even you can handle, so you give one of them over to your coworker Wayne to design. Wayne graduated two years ago from college. He's smart, he's a quick learner, and he's really fast at designing schematics and laying out circuit boards. It's just that sometimes he takes some shortcuts... but in this case the circuit board is just something...


Understanding and Preventing Overflow (I Had Too Much to Add Last Night)

Jason Sachs December 4, 2013

Happy Thanksgiving! Maybe the memory of eating too much turkey is fresh in your mind. If so, this would be a good time to talk about overflow.

In the world of floating-point arithmetic, overflow is possible but not particularly common. You can get it when numbers become too large; IEEE double-precision floating-point numbers support a range of just under \( 2^{1024} \), and if you go beyond that you have problems:

    for k in [10, 100, 1000, 1020, 1023, 1023.9, 1023.9999, 1024]:
        try: ...
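
The snippet is cut off above, but presumably it evaluates something like \( 2^k \) in floating point and watches for the failure. A self-contained sketch of that experiment (my guess at the gist, not the article's exact code):

    # Each 2.0**k is fine until the result exceeds the IEEE double maximum
    # (about 1.798e308, just under 2**1024), at which point Python raises.
    for k in [10, 100, 1000, 1020, 1023, 1023.9, 1023.9999, 1024]:
        try:
            print(k, 2.0 ** k)
        except OverflowError as e:
            print(k, "-> OverflowError:", e)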

Ten Little Algorithms, Part 1: Russian Peasant Multiplication

Jason Sachs March 21, 2015 · 5 comments

This blog needs some short posts to balance out the long ones, so I thought I’d cover some of the algorithms I’ve used over the years. Like the Euclidean algorithm and Extended Euclidean algorithm and Newton’s method — except those you should know already, and if not, you should be locked in a room until you do. Someday one of them may save your life. Well, you never know.

Other articles in this series:

  • Part 1:
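
For reference, the algorithm named in the title works by halving one operand and doubling the other, adding the doubled value whenever the halved one is odd. A minimal sketch (mine, not the article's version):

    def russian_peasant(a, b):
        """Multiply a * b using only halving, doubling, and addition."""
        total = 0
        while a:
            if a & 1:          # a is odd: accumulate the current b
                total += b
            a >>= 1            # halve a (dropping the low bit)
            b <<= 1            # double b
        return total

    print(russian_peasant(18, 23))   # 414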

Help, My Serial Data Has Been Framed: How To Handle Packets When All You Have Are Streams

Jason Sachs December 11, 2011 · 10 comments

Today we're going to talk about data framing and something called COBS, which will make your life easier the next time you use serial communications on an embedded system -- but first, here's a quiz:

Quick Diversion, Part I: Which of the following is the toughest area of electrical engineering?

  • analog circuit design
  • digital circuit design
  • power electronics
  • communications
  • radiofrequency (RF) circuit design
  • electromagnetic...

Important Programming Concepts (Even on Embedded Systems) Part V: State Machines

Jason Sachs January 5, 2015 · 8 comments

Other articles in this series:

Oh, hell, this article just had to be about state machines, didn’t it? State machines! Those damned little circles and arrows and q’s.

Yeah, I know you don’t like them. They bring back bad memories from University, those Mealy and Moore machines with their state transition tables, the ones you had to write up...


How to Build a Fixed-Point PI Controller That Just Works: Part I

Jason Sachs February 26, 2012 · 7 comments

This two-part article explains five tips to make a fixed-point PI controller work well. I am not going to talk about loop tuning -- there are hundreds of articles and books about that; any control-systems course will go over loop tuning enough to help you understand the fundamentals. There will always be some differences for each system you have to control, but the goals are the same: drive the average error to zero, keep the system stable, and maximize performance (keep overshoot and delay...
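
To give a bare-bones flavor of the kind of controller the article is tuning up (my sketch, not one of the article's five tips verbatim), here is a PI update in 16-bit Q15 fixed point, with the integrator and the output clamped so neither can overflow or wind up:

    Q15 = 1 << 15                         # 1.0 in Q15 representation

    def clamp(x, lo, hi):
        return max(lo, min(hi, x))

    class FixedPointPI:
        def __init__(self, kp_q15, ki_q15):
            self.kp, self.ki = kp_q15, ki_q15
            self.integrator = 0           # Q15 accumulator

        def update(self, error_q15):
            # integrate, then clamp the integrator (crude anti-windup)
            self.integrator = clamp(
                self.integrator + (self.ki * error_q15) // Q15, -Q15, Q15 - 1)
            out = (self.kp * error_q15) // Q15 + self.integrator
            return clamp(out, -Q15, Q15 - 1)

    pi = FixedPointPI(kp_q15=int(0.5 * Q15), ki_q15=int(0.05 * Q15))
    print([pi.update(int(0.25 * Q15)) for _ in range(5)])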


Which MOSFET topology?

Jason Sachs September 1, 2011 · 9 comments

A recent electronics.StackExchange question brings up a good topic for discussion. Let's say you have a power supply and a 2-wire load you want to be able to switch on and off from the power supply using a MOSFET. How do you choose which circuit topology to use? You basically have four options, shown below:

From left to right, these are:

  • High-side switch, N-channel MOSFET
  • High-side switch, P-channel MOSFET
  • Low-side switch, N-channel...

How to Estimate Encoder Velocity Without Making Stupid Mistakes: Part II (Tracking Loops and PLLs)

Jason Sachs November 17, 2013 · 11 comments

Yeeehah! Finally we're ready to tackle some more clever ways to figure out the velocity of a position encoder. In part I, we looked at the basics of velocity estimation. Then in my last article, I talked a little about what's necessary to evaluate different kinds of algorithms. Now it's time to start describing them. We'll cover tracking loops and phase-locked loops in this article, and Luenberger observers in part III.

But first we need a moderately simple, but interesting, example...