Modulation Alternatives for the Software Engineer
Before I get to talking about modulation, here's a brief diversion.
A long time ago -- 1993, to be precise -- I took my first course on digital electronics and processors. In that class, we had to buy a copy of the TTL Data Book* from Texas Instruments.
If you have any experience in digital logic design you probably know that TTL stands for Transistor-transistor logic (thereby making the phrase "TTL Logic" an example of RAS syndrome), the first really widely successful family of digital logic components, and maybe you know that the 7400 series of logic chips -- not the first series of digital logic chips or even of TTL, but certainly the most widespread series of logic chips -- was first designed by Texas Instruments and since then has been widely second-sourced by many other companies.
What you may not know is that Texas Instruments started as a geological services company for the oil industry (look at the TI logo sometime: it's an "i" forming a drill descending a "t" shaped hole), which morphed into a producer of integrated circuits in the 1950s and 1960s, and that back when the 7400 series was released in the mid-1960s, these chips were intended not as today's "glue logic" for interfacing more complicated devices, but as the basic building blocks for making computers. Before integrated circuits, a transistor-based computer had to be built from discrete transistors. Once standard digital logic ICs were out there, instead of having to make your own AND gates or flip-flops out of transistors, you could just buy them, and then design a computer using boards with row upon row of 7400 logic chips.
Still seems kind of antiquated, what with today's single-chip microcontrollers, but times have changed.
Anyway, I bought my copy of the TTL Data Book in 1993, and started browsing through it. The book described the whole series of different digital chips in the 7400, 74S00, and 74LS00 families, everything from the garden-variety gates, flip-flops, multiplexers, counters, and shift registers, to some weird little special-purpose chips like the 4x4 register file (670) and the addressable latch (259). And in my digital logic class, we actually built a rudimentary computer out of nothing but 74LS00-series chips, a crystal oscillator, and some SRAM and DRAM chips.
But there was one little oddity that I always wanted to try out: the 7497 6-bit synchronous rate multiplier. There's no 74HC97 -- it never made it into any logic family beyond the original 7400 series, and was never produced in any package smaller than DIP -- but you can still buy it from TI. Kinda pricey, though.
What this chip does is take in a 6-bit binary number M and a clock signal, and it produces an output waveform that has M pulses for every 64 pulses of the input clock, with missing pulses to fill in the gaps.
If someone gave you this as a description, how would you implement it?
Well, if it were a 4-bit synchronous rate multiplier (reduced here for illustration purposes) I'd probably use a 4-bit counter and then compare the count with the number M, to multiplex between the input clock pulse and a constant logic low (or high) to produce the following pattern:
M=0: 000000000000000000000000000000000000000000000000
M=1: 100000000000000010000000000000001000000000000000
M=2: 110000000000000011000000000000001100000000000000
M=3: 111000000000000011100000000000001110000000000000
...
M=14: 111111111111110011111111111111001111111111111100
M=15: 111111111111111011111111111111101111111111111110
where a "1" represents a pulse and a "0" represents no pulse.
This is an example of modulation: we take a reference waveform (the input clock pulse) and turn it on and off depending on some parameter. In this case it's very similar to pulse-width modulation (PWM), except that for PWM a "1" would represent high and a "0" low (rather than pulse or no pulse). In any case, you'll note that the 1's and 0's are all clustered together.
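(If it helps to see that in code, here's a minimal sketch of the counter-compare idea in C -- hypothetical names, and of course the real chip does this in gates rather than software. Each call represents one input clock period.)

#include <stdbool.h>

/* Minimal sketch of the naive counter-compare approach: a free-running
 * 4-bit counter, and the input clock pulse is passed through only while
 * the count is less than M.  This clusters all M pulses at the start of
 * each 16-clock cycle, as in the pattern above. */
static int rate_count = 0;

bool clustered_rate_multiplier(int M, bool clock_pulse)  /* 0 <= M <= 15 */
{
    bool gate = (rate_count < M);   /* open the gate for the first M counts */
    if (++rate_count >= 16)
        rate_count = 0;             /* 4-bit counter wraps every 16 clocks */
    return gate && clock_pulse;     /* "1" = pulse passed, "0" = pulse blanked */
}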
But with the 7497, TI did something different. The 1s and 0s are more evenly distributed. (It's easier to see this from the datasheet of the now-obsolete DM7497 from National Semiconductor.)
For M=1 (note that the DM7497 datasheet shows the inverting output, so each pulse appears as a 0 in a field of 1s):
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
M=2:
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
M=3:
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
M=4:
1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1
and so on.
It's a weird way of combining the input number M and a 6-bit internal counter; take a look at the datasheet and maybe you'll understand how it works. I was always mystified by it, and wondered what kind of application would inspire such an integrated circuit.
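For what it's worth, you can get the same evenly-spread behavior in software without translating the gates: on clock cycle c (counting 1 through 64), emit a pulse if bit (5 - k) of M is set, where k is the number of trailing zero bits of c. Here's a rough sketch of that idea in C -- a behavioral model consistent with the patterns above, not a gate-for-gate copy of the 7497:

#include <stdbool.h>

/* Behavioral sketch of a 6-bit binary rate multiplier with evenly spread
 * pulses: M pulses out of every 64 clocks (0 <= M <= 63). */
static int rate_cycle = 0;

bool spread_rate_multiplier(int M)
{
    rate_cycle = (rate_cycle % 64) + 1;      /* cycle counter runs 1..64 */
    int k = 0;
    while (((rate_cycle >> k) & 1) == 0)     /* k = number of trailing zeros */
        ++k;
    return (k <= 5) && ((M >> (5 - k)) & 1); /* pulse if bit (5-k) of M is set */
}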
Let's fast forward from 1993 to today.
I'm an electrical engineer working on power and signal processing, and I use PWM all the time to allow digital logic to produce a series of on/off pulses where the input duty cycle D controls the average fraction of the time a switch is turned on. Constant frequency, variable pulse width.
In software you could implement this as follows:
#include <stdbool.h>

int modulation_state = 0;
int PERIOD = 256;                       /* PWM period, in calls */

bool PWM(int m)                         /* m = on-time count, 0..PERIOD */
{
    if (++modulation_state >= PERIOD)   /* free-running counter wraps every PERIOD calls */
        modulation_state = 0;
    return m > modulation_state;        /* true for m out of every PERIOD calls */
}
where PWM() is a function you call at a regular rate, and you get either a TRUE (on) or FALSE (off) output of the PWM.
Easy. (Usually PWM is a hardware peripheral that runs at the system clock, but sometimes you need just one more PWM output for something simple like an LED or a valve, and software gets stuck having to implement it because you run out of hardware PWM channels.)
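For instance (hypothetical names; set_heartbeat_led() stands in for whatever writes the output pin), you might call it from a fixed-rate timer routine:

#include <stdbool.h>

bool PWM(int m);                    /* the routine defined above */

static void set_heartbeat_led(bool on)
{
    (void)on;                       /* real code would write a GPIO register here */
}

/* Imagine this being called at a fixed rate, e.g. from a timer interrupt. */
void timer_tick(void)
{
    set_heartbeat_led(PWM(64));     /* 64/256 = 25% average on-time */
}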
Now suppose we change the problem. We want some way of modulating an output on and off so that its average fraction of on-time matches the input duty cycle, but we don't care about keeping the frequency constant. In fact, we'd like the switching to be as fast as possible.
Well, one way is to emulate the 7497 synchronous rate multiplier -- if you go through the effort of translating the AND and OR gates in its output, you could match its behavior without the pulse-gating, but it's kind of ugly for software to evaluate, looking something like a bunch of bits of a Gray code ANDed together with the low-order bit of M, OR'd with a subset of Gray code bits ANDed together with the next-to-lowest bit of M, OR'd with... and so on. Good for digital logic, cumbersome for software.
I ran into this problem a few years ago while talking with a colleague of mine at the time, Eric Jensen -- who has since retired, and who a long time ago was one of the hackers at MIT who invented all sorts of clever ways to manipulate bits to accomplish certain mathematical calculations. Eric introduced me to what he called "synthetic division":
#include <stdbool.h>

int modulation_state = 0;
int q = 256;                        /* doesn't have to be a power of two */

bool syntheticPWM(int p)            /* average on-time fraction = p/q */
{
    modulation_state += p;          /* accumulate the input */
    if (modulation_state < q)
        return false;
    modulation_state -= q;          /* wrap around and emit a pulse */
    return true;
}
This produces a pulse train with average on-time fraction p/q.
Let's look at a particular example: p = 3, q = 7. In the table below, the first column is the modulation state after each call and the second column is the output returned by that call.
state | output
------+-------
  3   | false
  6   | false
  2   | true
  5   | false
  1   | true
  4   | false
  0   | true
  3   | false
  6   | false
  ...
The modulation state increments by p, modulo q, and you get an output that is true every time the modulation state wraps around.
The value of p doesn't have to be fixed but can vary over time, thus varying the average output fraction.
It turns out that this method is essentially equivalent to first-order delta-sigma modulation, which is widely used. You have an integrator (equivalent to the modulation state that accumulates the value of p), and a threshold comparator (equivalent to our comparison of the modulation state with q) whose output is fed back to counteract the increase in the integrator, so that when the average value of the output balances the input, the integrator remains within bounds.
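To make the equivalence concrete, here's the same routine rearranged into the textbook integrator-plus-comparator-plus-feedback form -- just a sketch for comparison, and it behaves identically to syntheticPWM() above:

#include <stdbool.h>

static int integrator = 0;

bool deltaSigma(int p, int q)       /* average on-time fraction = p/q */
{
    integrator += p;                /* "sigma": integrate the input */
    bool out = (integrator >= q);   /* 1-bit quantizer / threshold comparator */
    if (out)
        integrator -= q;            /* "delta": subtract the fed-back output */
    return out;
}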
Delta-sigma modulation has several variants (including higher-order modulators), and unlike PWM, it spreads the quantization noise across the spectrum. If you look at the spectrum of a PWM waveform, the switching noise shows up as harmonics of the PWM carrier frequency, which are unwanted but unavoidable artifacts of modulation. (Ideally we would just output an analog signal at DC with no switching noise -- but that's not possible with on/off outputs.) Delta-sigma modulation shifts this noise upward in the spectrum, closer to the modulation frequency, where it can be more easily filtered out.
When would you want to use PWM instead of delta-sigma? When you need fixed-frequency output and more frequent switching has negative consequences (e.g. power electronics where switching losses are a function of how often you turn a switch on or off).
When would you want to use delta-sigma instead of PWM? When the modulation rate is relatively slow (e.g. a software routine that runs at 400Hz) -- slow enough that switching as often as every other modulation period has no negative consequences -- and you can't run the modulator fast enough to get the resolution you'd like from PWM without ending up with a terribly slow PWM period.
For example, if you needed 8-bit resolution from a software-implemented PWM whose update routine runs at 400Hz, that would mean a PWM period of 256 counts / 400 calls per second = 640 msec (or a frequency of 1.5625Hz). Yikes! That's slow.
In contrast, running a delta-sigma modulator at 400Hz with a duty cycle of 50%, you get an output waveform that's 200Hz (which generally produces very low switching losses in power electronics; you start to worry about switching losses in IGBTs at 10-50kHz and in MOSFETs at hundreds of kHz). At 25% and 75% duty cycle, the output waveform is at 100Hz; at 10% and 90% duty cycle, the output waveform is at 40Hz. The closer you are to 50%, the closer the output frequency is to half the modulation rate. The closer you are to 0% or 100%, the slower the output frequency.
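If you want to convince yourself of those numbers, here's a quick standalone check (it duplicates the syntheticPWM() logic with local state) that counts output rising edges over one simulated second at a 400Hz modulation rate:

#include <stdio.h>
#include <stdbool.h>

int main(void)
{
    const int modrate = 400;                /* modulator calls per second */
    const int q = 100;
    const int duty_percent[] = { 50, 25, 10 };

    for (int i = 0; i < 3; ++i) {
        int p = duty_percent[i];            /* duty cycle = p/q */
        int state = 0;
        bool prev = false;
        int rising_edges = 0;
        for (int n = 0; n < modrate; ++n) { /* one simulated second */
            state += p;
            bool out = (state >= q);
            if (out)
                state -= q;
            if (out && !prev)
                ++rising_edges;             /* count output cycles */
            prev = out;
        }
        printf("duty %2d%%: %d Hz output\n", p, rising_edges);
    }
    return 0;
}

It should print 200, 100, and 40 Hz for the 50%, 25%, and 10% cases, matching the figures above.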
Happy modulating!
*p.s. if you want a copy of the 1988 edition of the TI TTL Data Book, TI recently started selling them for 10 cents apiece on their website. Quite a few of the components are out of date, and all the series (7400, 74S00, 74LS00) are legacy series better replaced by 74HC, 74LVC, etc., but I still occasionally browse the yellowed pages of my paper databook to try to figure out if there's a glue logic chip that might solve my problem.
Comments
In the area of competing with PWM, an advantage of this approach is that you can change the "on" fraction within a cycle (even multiple times) and the filtered output will follow; PWM is slower to respond and less stable.
The experience with independently "inventing" Bresenham's algorithm is one reason I detest software patents -- even if you solve some puzzle yourself without copying or knowing of related work, you can be prohibited from using your design if somebody else has patented it. And to really cover yourself, you'd need a patent attorney who makes way more money than you do watching over your shoulder all the time. But that's an aside.