
Poor man's PWM

Started by Don Y January 27, 2020
On 1/27/2020 5:53 PM, Don Y wrote:
> With "few" intensity levels desired, I plan to drive LED > with a crude PWM signal directly from an IRQ (i.e., push > bits out a hardware port at each IRQ).
...
> What problems might I encounter with this approach?  (I'm
> mainly worrying about visual artifacts)
This turned out better than I'd expected!

I prototyped a 240 channel device (hacked a previous FPGA design so I
didn't have to BUILD anything in order to capture 'scope traces).

Latency and overruns are non-problems -- immeasurable!  (I added an
"overrun flag" to the design to detect if an ISR failed to finish
updating ALL of the outputs before the *next* ISR came along)  But,
that's largely because my foreground is slicker than snot!

The ISR could easily be replaced by a DMA but that would be a foolish
waste of resources (and an unnecessary constraint on future CPU
implementations).  As it stands, all I need is a parallel interface
and some number of discrete, consecutive addresses to be updated by
the ISR -- so, I can quickly port the hardware to a variety of
different CPUs.  No PWM controllers, I2C interfaces, programmed
hardware, etc.

[In the short term, I'll port the code to an ISA PC running a bloated
FOSS OS and see if that slows things down enough to be a problem!  I'm
also going to hack together a sloppy PIO PATA (slow!) interface to
compete with this ISR to see what sort of problems a deliberately
sloppy synthetic ISR manifests.]

Jitter and skew are also nonexistent due to the double-buffered design.
Often the easiest solutions are the best!  :>

Next, lay out a preproduction board and increase the drive to ~500mA
per channel.  (And, contemplate a 5A/channel implementation as the
approach seems applicable to a variety of different I/Os that I'd not
previously considered!)

Sadly, the guy at the local university who was going to do the testing
of the visual aspects of this approach has backed out of that
commitment -- the University is largely shut down due to COVID-19.
But, he's offered his "test plan" for me to use along with some
specialized equipment to visually capture artifacts.
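For anyone curious about the shape of the thing, a bare-bones C sketch
of the ISR side follows.  (Addresses, names and the "IRQ pending" test
are all illustrative -- this is the general idea, not the actual
implementation.)

#include <stdint.h>
#include <stdbool.h>

#define CHANNELS  240   /* outputs refreshed on each timer IRQ      */
#define INTERVALS 6     /* intervals per PWM cycle (one IRQ apiece) */

/* Illustrative memory-mapped hardware: consecutive output addresses
 * plus a timer status bit.  Real addresses will differ. */
#define OUTPUTS             ((volatile uint8_t *)0x40001000)
#define TIMER_IRQ_PENDING() ((*(volatile uint32_t *)0x40002000) & 1u)

/* Double-buffered, per-interval output images: the ISR walks one image
 * while the foreground composes the other at its leisure. */
static uint8_t images[2][INTERVALS][CHANNELS];
static volatile unsigned published; /* image requested by foreground  */
static volatile unsigned active;    /* image the ISR is walking       */
static unsigned slot;               /* current interval within cycle  */
static volatile bool overrun;       /* latched if an update runs long */

void pwm_isr(void)
{
    /* honor a publish() only at a cycle boundary, so all outputs
     * switch to the new settings in synchrony -- no jitter, no skew */
    if (slot == 0)
        active = published;

    const uint8_t *out = images[active][slot];
    for (unsigned i = 0; i < CHANNELS; i++)
        OUTPUTS[i] = out[i];        /* push bits out the parallel port */

    if (++slot == INTERVALS)
        slot = 0;

    /* if the *next* IRQ is already pending, we didn't finish in time */
    if (TIMER_IRQ_PENDING())
        overrun = true;
}

/* Foreground: compose new per-interval states into images[published^1],
 * then flip.  Takes effect at the next cycle boundary.  (A real
 * implementation would also wait for active == published before
 * reusing the stale image.) */
void pwm_publish(void)
{
    published ^= 1u;
}

The key point is that the ISR samples the buffer selector once per
cycle, so a publish() can never tear an update.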
Attached, an excerpt from an announcement email to those of my
colleagues who are presently using my codebase...

========================================================================

The BD documents cover the full API; this just gives a quick overview.
Remember, this is the equivalent of a "hardware interface"; you'll want
to build an abstraction layer atop it.  E.g., use symbolics for
different cycle patterns:  OFF/DIM/BRIGHT, COOL/WARM/HOT/SCORCHING,
STOPPED/SLOW/FAST, etc.  I'll send along the handler for the indicator
array I designed, as an example, under separate cover.

----8<----8<----

[Error handling elided -- you already know the potential issues!]

# instantiate a copy of the PWM controller on the targeted node
factory := MyContext=>resolve("ORB Factory")
pwmA := factory=>instantiate("PWM Controller", nodeA)

# drop the Handle to effectively create an invariant
factory = nil

# N.B. the PWM controller determines the number of signals supported by
# the "local" hardware -- along with their "I/O" addresses -- and
# configures its local ISR to service them.  This frees the client from
# having to know what a particular Node's hardware can do.

# *** THE ISR MUST BE HOSTED ON THE SAME NODE AS THE PWM CONTROLLER!! ***
# It's not practical to make real-time guarantees across processor nodes
# for such closely-coupled activities.  Of course, layers *atop* that can
# be hosted anywhere a Handle is available!

# No need to bind the Handle for the PWM instance to a Name as you're
# the only one referencing it.  And, holding onto the Handle is just as
# easy as retrieving that Handle from a Context!  Additionally, holding
# the Handle ensures you will be notified of pertinent events "as they
# happen" (e.g., if the object dies or faults)

# configure the PWM for the desired number of intervals-per-PWM-cycle
# as well as their relative weights and the overall cycle frequency
rate := 100.0
intervals := 13 :: 5 :: 2 :: 2 :: 2 :: 1
pwmA=>configure(rate, intervals)    # forces all outputs off; stops PWM
pwmA=>start()

# if you never export the pwmA Handle, then guaranteed exclusive use!
# otherwise, you'll need locks or rely on first-come, first-served.
# Individual methods are atomic but method sequences make no guarantees!
# As usual, no *policy* decisions imposed by the implementation.
# As you'll undoubtedly interface to this through an upper-layer handler,
# the locks at this level can be hidden.  But, sharing Handles to *that*
# handler would likewise require coordination, of some sort.

# The service is connection based so different clients could, potentially,
# compete to specify different configurations and drive states.  You decide
# how that should be handled in your respective applications -- you could
# create specific capabilities for each output to allow independent control
# of their desired states while the controller does the actual updates,
# /en masse/ ...

# check how many discrete outputs are being controlled by this instance
signals := pwmA=>drives()

# set all outputs to 20% duty cycle
for (output := 0; output < signals; output++)
    pwmA=>duty(output, 20.0)

# make new settings available to the hardware (via its ISR);
# this allows multiple outputs to be altered while ensuring
# the "update" appears in synchrony.
# As a result, if you want to expose changes to individual
# outputs, you must publish() after each change!
pwmA=>publish()

...

duty := pwmA=>ratio(0)       # s.b. 20.0 (or, as close as hardware can get!)

pwmA=>set(0, " ## #")
duty = pwmA=>ratio(0)        # s.b. 20.0

pwmA=>set(0, " X ")
duty = pwmA=>ratio(0)        # s.b. 20.0

pwmA=>set(0, " a bc")
duty = pwmA=>ratio(0)        # s.b. 20.0
wave := pwmA=>intervals(0)   # s.b. " a bc"

# Note that none of these duty cycle changes have been publish()ed!

# I'm not fond of the whitespace use; open to suggestions!!  Would be
# nice if the compiler could verify proper data type ("width") but its
# size is constrained by the hardware, not the API.  No practical
# limit on intervals-per-cycle so bit-arrays aren't a good fit!
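# [An aside on the arithmetic, to make the "s.b. 20.0" annotations
#  above concrete: the weights 13 :: 5 :: 2 :: 2 :: 2 :: 1 total 25
#  units per cycle, so a 20% duty cycle is any "on" set whose weights
#  sum to 5 units -- the lone weight-5 interval, or the 2+2+1
#  combination.]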
...

if (pwmA=>opens())
    fault("load(s) dropped!")

do_something()

if (pwmA=>overrun())
    fault("service failure!")

...

# In the event of overrun, one might reconfigure the service to support
# greater latency.  I've not been able to trip this error, yet, in
# practice, even at "rate"s of 1kHz (i.e., intervals as small as 40us,
# for this config)!  But, an application with a "sloppier" foreground
# might have problems.
# Note that there is no guarantee that the shortest interval will be the
# most likely to be overrun, in any particular application!

# Think of this in a manner similar to a UART overrun or any other comm
# channel overrun.  You may NOT be able to recover the lost data so your
# remedies are limited!  OTOH, you LIKELY don't want to just ignore the
# problem as it may be chronic -- a design flaw or some other system
# aspect.

rate := 90.0
intervals := 2 :: 2 :: 2 :: 2 :: 2 :: 2 :: 2 :: 2 :: 2 :: 2 :: 2 :: 2 :: 1
pwmA=>configure(rate, intervals)    # forces all outputs off; stops PWM
pwmA=>acknowledge()                 # clear signaled overrun
pwmA=>start()

# N.B. this configuration increases the static load on the node as the
# total number of IRQs per unit time is now 1170 per second.  That was
# just 600 in the previous configuration.  The amount of work ("cost")
# per IRQ is constant -- the number of "outputs" transferred to the
# hardware in each IRQ -- so this represents a doubling of that cost,
# despite increasing the shortest interval by ~10%.  (see the BD
# documents for specific examples and analysis regarding the advantages
# of this configuration)

pwmA=>duty(0, 20.0)
duty = pwmA=>ratio(0)           # s.b. 20.0
wave = pwmA=>intervals(0, '#')  # s.b. "## #" (or similar)

# N.B. the server can build the waveform from any appropriate combination
# of intervals!  So, if you have a preference, use the set() method
# instead of duty()!
# I'm also unhappy with that variadic implementation; s.b. one way or
# another?  But, without inventing a new data type, I think this is as
# good as it gets.
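For anyone wiring something like this to a bare timer, the bookkeeping
is simple enough.  A minimal C sketch of deriving per-interval timer
reloads from a cycle rate and relative weights (the 1MHz timebase and
the function itself are illustrative, not lifted from the actual
design):

#include <stdint.h>

#define TIMEBASE_HZ 1000000u    /* assumed 1MHz timer tick */

/*
 * Given the cycle rate (Hz) and n relative interval weights, compute
 * each interval's duration in timer ticks.  One IRQ fires per
 * interval, so the IRQ load is rate * n per second.
 */
static void interval_ticks(double rate, const unsigned weight[],
                           unsigned n, uint32_t ticks[])
{
    unsigned total = 0;

    for (unsigned i = 0; i < n; i++)
        total += weight[i];

    double cycle = TIMEBASE_HZ / rate;      /* ticks per PWM cycle */

    for (unsigned i = 0; i < n; i++)
        ticks[i] = (uint32_t)(cycle * weight[i] / total + 0.5);
}

Sanity check against the numbers above: with rate = 100.0 and weights
13::5::2::2::2::1 (25 units), that's 6 intervals x 100 cycles/sec =
600 IRQs/sec; push the rate to 1kHz and the weight-1 interval shrinks
to 1ms/25 = 40us.  The thirteen-interval configuration at 90Hz gives
13 x 90 = 1170 IRQs/sec.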