EmbeddedRelated.com
Forums

Poor man's PWM

Started by Don Y January 27, 2020
On Tuesday, January 28, 2020 at 10:11:59 PM UTC-5, Don Y wrote:
> Hi Rick,
>
> On 1/27/2020 8:29 PM, Rick C wrote:
> > On Monday, January 27, 2020 at 7:53:54 PM UTC-5, Don Y wrote:
> >> Then, at the level above the driver, build a schedule for which intervals
> >> to activate the LED.
> >
> > What controls the IRQ period? I assume each IRQ will modify a hardware
> > timer in the CPU?
>
> The timer will either be a "spare" internal timer or something like an
> SOIC8 -- whose *sole* purpose will be to output a set of edges at the
> desired interval spacings (amusing to think of "wasting" an entire
> MCU just for such a banal task!) The "main" processor can synchronize
> to this signal by detecting the longest such interval wrt its internal
> "system timebase".
If you were to use an auxiliary MCU to generate the interrupt, you might as well let that MCU control the LEDs itself. In fact, at that point it would be simpler to use an LED driver chip with dimming capability. They often have very nice features.
> > The "list" items you describe are just bits in the value you wish to specify
> > the duty cycle. An 8 bit byte will give you 255 brightness levels with 8
> > different period interrupts. A nybble gives 15 brightness levels with 4
> > periods. Each bit in the value determines if the LED is off or on during
> > the corresponding IRQ. A simple bit mask shifted with each IRQ can be anded
> > with each LED value to indicate if that LED should be on during that IRQ.
> > When the bit mask reaches zero, start over with the longest IRQ and highest
> > order bit mask.
>
> N.B. You always have the "off" intensity available so
> 8b yields 25_6_ levels, etc.
Sure, but off is not really a brightness setting, and having it doesn't actually impact the math of pulse density, since that will always be based on 2^n - 1 with weighted pulses unless you have two intervals of one period width. Not a big deal, just an observation, since it is instinctive to use 2^n as the divisor when calculating the pulse density.
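The weighted bit-mask scheme Rick describes (bit k of the intensity value gates the LED during a slot lasting 2^k base periods) can be sketched as a small simulation. This is a hypothetical illustration, not code from the thread; the names are invented:

```c
#include <assert.h>
#include <stdint.h>

#define SLOTS 4u   /* a nybble: 2^4 - 1 = 15 nonzero levels, plus off */

/* Bit k of the intensity value says whether the LED is on during
   weighted slot k (which lasts 2^k base periods). */
static int led_on_in_slot(uint8_t intensity, unsigned slot) {
    return (intensity >> slot) & 1u;
}

/* Total on-time, in base periods, over one full refresh cycle.
   It comes out equal to the intensity value itself, which is the
   point of the binary weighting. */
static unsigned on_time(uint8_t intensity) {
    unsigned total = 0;
    for (unsigned slot = 0; slot < SLOTS; slot++)
        if (led_on_in_slot(intensity, slot))
            total += 1u << slot;
    return total;
}
```

With 15 base periods per cycle, intensity 9 gives a 9/15 duty cycle, and so on, with only four interrupts per refresh.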
> However, as the number of levels increases, the period
> of the shortest inevitably decreases. This has direct
> consequences on the *peak* IRQ rate -- which, in turn,
> has implications on maximum tolerable latency (and
> complexity of the ISR itself). Double-buffering the
> outputs (see my reply to Dimiter) eases the timeliness
> constraints on the ISR (e.g., 100us at 10KHz)
If you are doing this job in software using an IRQ, I assume you have plenty of processing power available. Many MCUs have hardware to do this sort of job in the traditional linear way. But I suppose you might be trying to squeeze 10 lbs of software into a 5 lb chip without hardware features.
> [Note that there is nothing that forces the periods to
> be multiples of 2; that's just the easiest way to
> describe this approach. I could, instead, pick intervals
> of 2, 3 and 4 to yield duty cycles of {0, 22, 33, 44,
> 55, 67, 78, 100}%. Or, 2, 2 and 2 to yield {0, 33,
> a different 33, yet another 33, 67, a different 67,
> yet another 67, 100}%]
Sure, but why implement an uneven scheme or something like 2,2,2? In particular, the 2,2,2 case is just three periods with arbitrary assignments of on and off. In fact, it is literally no different from a linear run other than possibly phasing with the other LEDs, which can also be done with a linear run.
> Consider, also, you want an implementation that scales
> to more than a single LED. So, doing that work *in* the
> ISR quickly becomes the bottleneck. Hence "scheduling"
> the ISR's actions in the background (as a non-time-critical
> activity)
If you think controlling the LEDs in the ISR is too much work, you must be using a very slow processor. Even if there is the possibility of two interrupts at some minimum spacing, typically that is not a significant issue unless you have other, more time-consuming uses of interrupts. Without more info on your system there's no way to determine what will work and what won't.

If you are only looking for a small number of brightnesses like three (and off) and are willing to dedicate hardware to the task, then you would do well to switch in different current limiting resistors for each LED and avoid any issues with EMC problems or CPU usage.

--

Rick C.

-- Get 1,000 miles of free Supercharging
-- Tesla referral code - https://ts.la/richard11209
On Tuesday, January 28, 2020 at 10:34:55 PM UTC-5, Don Y wrote:
> Hi David,
>
> On 1/28/2020 3:29 AM, David Brown wrote:
> > On 28/01/2020 01:53, Don Y wrote:
> >> Then, at the level above the driver, build a schedule for which
> >> intervals to activate the LED.
> >
> > It sounds doable, but complicated. I think you'd need a good many
> > different intensity levels before this would pay off - the overhead of
> > tracking and setting the different time intervals won't make sense until
> > you are using a huge number of levels. And if you wanted a huge number
> > of levels, you'd be using a better quality method of generating the PWM
> > signal that doesn't have as much jitter.
>
> There is a high parts cost to using, e.g., a programmable current
> source/sink per indicator.
Not really. You can drive at least 8 LEDs from a single chip, maybe 16, and that is still a pretty low cost chip.
> And, more data that needs to reside
> "outside" the CPU (i.e., *in* those devices). So, any practical
> or economical solution has to be implemented in software.
Eh??? The data in the driver chips comes from the CPU, so I'm not sure what you are talking about. Bottom line is you can do it in software if you can do so without loading the system too much. Otherwise an LED driver chip can be a very practical solution, relieving the CPU of the task of handling the timer interrupt for this and the real-time processing overhead required.
> The IRQ overhead is negligible; "do NOTHING in an ISR if it can be
> done OUTSIDE the ISR!"
Being in or out of the ISR only has to do with interactions with other ISRs. The work has to be done somewhere and the cost to the CPU is no less either way.
> Hence my comment re: building a schedule
> (to feed the ISR):
>
> /* XXX PORTME -- hardware dependent (all) */
> typedef void (*isr_t)()
> typedef ulonglong LED_t    // for example
>
> LED_t schedule[4];
> isr_t LED_isr;
>
> isr_t period1() {
>     *LED_LATCH = schedule[0];
>     LED_isr = &period2;
> }
>
> isr_t period2() {
>     *LED_LATCH = schedule[1];
>     LED_isr = &period4;
> }
>
> isr_t period4() {
>     *LED_LATCH = schedule[2];
>     LED_isr = &period8;
> }
>
> isr_t period8() {
>     *LED_LATCH = schedule[3];
>     LED_isr = &period1;
>     // refresh schedule[]
> }
>
> If using an internal (programmable) timer, each of these
> ISRs would also jam the appropriate CONSTANTS into the
> timer hardware to setup the NEXT period.
>
> You can now build the schedule[] in a non-real-time context.
The schedule still has to be built in real time. It just won't be in the interrupt. Interrupt routines aren't the only real time software in the CPU.
> > As far as processor load goes, there are two key points. One is the
> > worst case rate of the IRQs - and that is determined by the lowest
> > digit, and will be the same for a regular pulse or for this complex
> > scheme. The other is the amount of processor work done in each IRQ, and
> > that will be massively more than with a simple system, outweighing it in
> > total work until you have a huge number of levels.
>
> There is no need to do any "work" in the ISR -- other
> than arranging for the PRECOMPUTED set of outputs to
> be delivered to the actual hardware. All of the
> "thinking" can be done when the application "decides"
> what intensity level it wants for a particular indicator
> (and the "schedule" is built).
>
> Latency *can* be a potential problem -- but one easily
> (and economically) solved by the addition of a second
> level of (hardware) buffering -- see my earlier replies.
If your hardware supports that. What hardware are you talking about using? I'm not familiar with hardware that will work that way on MCUs other than I/Os controlled by DMA and/or timer interrupts.
> Lean ISRs make it possible to drive the IRQ rate up (period
> down) to minimize the "consequences" of saccades. (though
> I'm not designing a *display* so the user isn't expected to
> be fixating on the presentation so much that this would
> be an annoyance)
If there are any real-time adjustments to the intensities, they still have to be made in real time and in coordination with the IRQ routines.
> > Pick a refresh frequency of 100 Hz, which is fast enough that you won't
> > see blinking even if you move your eyes quickly. We'll have 16 levels,
> > which is more than enough for normal circumstances.
>
> "16" *seems* like a good number -- note that ANSI X3.64 gave us
> just a few colors and intensities to "present information". I
> don't recall anyone complaining that they wish they had more
> colors or intensities!
I guess the guys who came up with VGA were spinning their wheels.
> I may find that 4 is more appropriate
> esp if they don't have to be geometrically related to each other
> in terms of durations -- like {0, 20, 80, 100%}.
>
> [Anything done with *hardware* "LED drivers" puts constraints
> on what you can do, going forward.]
Just as using software puts constraints on what you can practically do.
> As I said, originally, I'm trying to anticipate what *visual*
> consequences might arise from this approach before settling
> on an implementation. If it's not possible for a user to reliably
> differentiate between 16 levels, then why implement that many?
That totally depends on your application. You've told us nothing about that.
> Or, if two "level 7" indicators appear to have dramatically
> differing intensities, then the expectation of them being
> "the same" (or even "similar") is foolhardy.
Again, that totally depends on the application: whether two indicators are ever on at the same time, in locations where they can easily be compared, and so on.
> [N.B. The same could be true of a pure hardware implementation
> that sought to display similar intensities]
>
> > That puts your timer interrupt at 1600 Hz - not an onerous burden for
> > most systems. And being a regular timer interrupt, you can use the same
> > tick for other purposes, unlike your specialised one.
>
> Note that you, *still*, have a REGULAR, PERIODIC INTERRUPT
> (e.g., suitable for a system timer) by selecting ANY *one*
> of these to also invoke that service (obviously, you'd pick
> the isr_t that gives you the most latency tolerance before
> the NEXT scheduled invocation). E.g., in your example, there
> would be FOUR candidates for a "100Hz periodic interrupt";
> pick the one that is least "crowded" by the following ISRs
> activation.
It would be useful to have more info on what you wish to do. I think we have covered all the generic ground there is.

--

Rick C.

-+ Get 1,000 miles of free Supercharging
-+ Tesla referral code - https://ts.la/richard11209
On Tuesday, January 28, 2020 at 10:54:26 PM UTC-5, Don Y wrote:
> Hi Hans-Bernhard,
>
> On 1/28/2020 6:43 AM, Hans-Bernhard Bröker wrote:
> > Am 28.01.2020 um 01:53 schrieb Don Y:
> >> But, instead of having N different times at which the IRQ
> >> might be signalled (for the N+1 different duty cycles) in
> >> each refresh interval, I plan on having log2(N) times, each
> >> delimiting a period "twice" as long as the previous. This
> >> keeps the hardware cheaper than dirt
> >
> > I rather much doubt that. If there are indeed "few" intensity levels, i.e. 16
> > rather than 1024, this approach will at most reduce the overall software-PWM
> > interrupt rate by a factor of 4;
>
> Which means the timer can be four times FASTER!
No, it isn't a factor of four. 1023 levels would be 10 different periods of interrupt. 15 levels would be 4 different interrupt periods. So 2.5 times more IRQs, not four. This only matters if your IRQ is a major factor in your CPU budget.
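The period count here follows from levels = 2^n - 1; a small helper (hypothetical, just for checking the arithmetic) makes the 10-versus-4 comparison explicit:

```c
#include <assert.h>

/* Weighted interrupt periods needed per refresh: the smallest n
   such that 2^n - 1 covers the requested number of nonzero levels. */
static unsigned periods_for(unsigned levels) {
    unsigned n = 0;
    while ((1u << n) - 1 < levels)
        n++;
    return n;
}
```

periods_for(1023) is 10 and periods_for(15) is 4, giving the 10/4 = 2.5 ratio quoted above.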
> > and the hardware still has to be capable of
> > handling each of those interrupts fast enough, consistently, that there
> > will be no flickering caused by the shortest possible time slot being prolonged
> > or shortened by delays in interrupt processing. This puts a load on the
> > overall design that this scheme cannot reduce.
>
> Obviously you're not a hardware person.
I don't want to get pissy about it, but obviously you aren't a systems person. The way to deal with jitter in the timing of the pulse edges is to make the IRQ for this interrupt highest priority and not lock out interrupts for any reason. Or the actual I/O can be done by timer driven DMA which would be independent of the software as long as the DMA and timer are always programmed in time to keep running without gaps. This would still need to be done in an IRQ.
> Double buffering gives you
> the entire IRQ period to process the data for the NEXT IRQ.
That's actually not required. As long as the code has no variable execution times in it, it doesn't matter where in the interrupt code the write of the I/O port is done. It's about jitter, not delay.
> And,
> the critical path is ONLY the shortest timer interval, not all of them.
> E.g., with a 10KHz *peak* IRQ rate, you'd have 100us to respond to THAT
> IRQ and get the next data ready -- simply COPYING it from a FIXED
> LOCATION to another FIXED I/O LOCATION -- for the timer to transfer
> to the LED "(hammer) drivers" coincident with the *next* IRQ.
> That 100us period would correspond to a ~600Hz refresh rate (assuming
> a 1/2/4/8 weighting of timer periods)
600 Hz * 4 is 2400 Hz, or about 416 µs.
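The two figures come from different readings of the numbers (a quick sketch, assuming a 100 µs shortest slot): 416 µs is the average IRQ spacing if four interrupts are spread evenly over a 600 Hz refresh, while 100 µs is only the *shortest* slot of a 1/2/4/8 weighting, whose full cycle is 1.5 ms (roughly 667 Hz):

```c
#include <assert.h>

/* Refresh-cycle lengths for a given shortest slot, in microseconds. */
static unsigned weighted_cycle_us(unsigned base_us) {
    return (1 + 2 + 4 + 8) * base_us;   /* 1/2/4/8 binary weighting */
}
static unsigned equal_cycle_us(unsigned base_us) {
    return 4 * base_us;                 /* four equal slots */
}
```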
> That's something you could do 40 years ago with a 6MHz 8085!
>
> [A set of latches are cheap -- and, occupy very little board space
> in SOIC packages.
Lol!!! If you are going to add hardware just use the durn LED driver chip with more features than you will ever need.
> Note that they don't need high current capacity
> as they are just acting to buffer *data*; the hammer drivers that
> follow them do the real *work*! Furthermore, if the MCU doesn't have
> enough dedicated I/O pins for each of the indicators, you're already
> dealing with external packages to "store" the drive data]
Yep, clearly a job for an LED driver.
> > Overall, hardware that can implement this scheme will thus hardly be any
> > cheaper than hardware that can implement the usual one.
>
> Really? How much for 20 PWM channels? 200? 2000? (I have a 2000 LED
> "moving message" sign at my feet, here -- wanna bet it DOESN'T have
> 2000 PWM channels? Or, 2000 programmable current sources?)
If you get to 2000 or even 200 PWM channels it should be an FPGA chip doing all the timing in excruciating detail, any way you would like. No point in using messy sequential software and always having to deal with the hassles of interrupt problems.
> >> Then, at the level above the driver, build a schedule for which
> >> intervals to activate the LED.
> >
> > Building this schedule doesn't actually come for free, either.
>
> It's dirt cheap! Take N "intensity codes" and map them (statically)
> to N sequences of timer intervals. Pack these into an array of structures
> that can be passed to an ISR that *simply* emits the next structure
> in sequence. See pseudocode posted in other replies.
>
> >> What problems might I encounter with this approach? (I'm
> >> mainly worrying about visual artifacts)
> >
> > The most obvious one, which would earn me a stern veto from the HW department
> > at my outfit, is an increase in the overall number of LED switching edges,
> > increasing, among other things, EM emissions.
> >
> > Your plan also destroys any possibilities to distribute those EM emissions (and
> > power supply loads) more evenly by staggering the PWM cycles of several LEDs.
> > That aspect becomes more important as the loads get heavier, of course: 20 mA
> > LEDs are a piece of cake, 20 Ampere heaters not so much.
>
> They are *indicators*, not *illuminators*.
>
> I'm going to give you 20 (or 200 or 2000) DEDICATED PWM channels. How are YOU
> going to stagger their cycles? Tell me how your implementation is going to be
> any better? And *simpler*??!
>
> EMI is relatively easy to control -- watch edge transition times.
Lol! Ok, I guess you have it figured out then. :)

--

Rick C.

+- Get 1,000 miles of free Supercharging
+- Tesla referral code - https://ts.la/richard11209
On 1/27/2020 5:53 PM, Don Y wrote:
> What problems might I encounter with this approach? (I'm
> mainly worrying about visual artifacts)
(sigh) I've been considerate and responded to many comments UNRELATED TO THE QUESTION THAT I POSED (see above). But, aside from Dimiter's comments, most have completely ignored this. Gee, let's talk about processor choice, next! Or, programming language! Or, component selection! Or, vendor selection! Or...

I am fully capable of implementing the algorithm proposed -- in various different ways for various different "loads" (LEDs aren't the sole application... but, hey, let's digress into even more unrelated areas to avoid the question posed!). Where I need assistance is in anticipating how LEDs driven in this SORT OF manner could potentially have undesirable appearances.

Moving from incandescent and magnetic ballast lighting to CFLs and LEDs leaves a great many different visual environments in which a device is viewed. It would really be annoying if my scheme suffered in some particular office/lab/home/outdoor environment that I hadn't anticipated during the design phase. If you can offer some concrete comments wrt that issue, great!

I get it -- it's not a run-of-the-mill sort of knowledge domain. No shame in admitting your ignorance! I've done so with my question, here. Feel free to pursue your own implementations on your own dime. Great if you choose to share them -- in their final form, ready for critique -- here.

We can then play endless "what if" games: what if 10 indicators? 100? what if they are replaced with solenoids? or resistive heaters? what if more levels? less levels? what if we try and drive 30 servomotors while running the displays? 60? replace servo motors with stepper motors? run a full network stack? *multiple* network interfaces? live packet routing? IPSEC? etc. Gee, the fun we can have (and the time we can waste)!

The local University is supposedly renowned for its optics department. I will assume that I can find someone there who will have, at least, a theoretical knowledge of how it relates to vision.
Unfortunately, it is a VERY large organization so it will take me time to find the right folks -- I had hoped someone HERE might have direct experience in the subject :<
On 1/28/2020 10:54 PM, Rick C wrote:
>>> and the hardware still has to be capable of handling each of those
>>> interrupts fast enough, consistently, that there will be no
>>> flickering caused by the shortest possible time slot being prolonged or
>>> shortened by delays in interrupt processing. This puts a load on the
>>> overall design that this scheme cannot reduce.
>>
>> Obviously you're not a hardware person.
>
> I don't want to get pissy about it, but obviously you aren't a systems
> person. The way to deal with jitter in the timing of the pulse edges is to
> make the IRQ for this interrupt highest priority and not lock out interrupts
> for any reason. Or the actual I/O can be done by timer driven DMA which
> would be independent of the software as long as the DMA and timer are always
> programmed in time to keep running without gaps. This would still need to
> be done in an IRQ.
Bwahahahaha... says "The Systems Guy" -- with NO KNOWLEDGE OF THE REST OF THE SYSTEM!

Priceless!
On Wednesday, January 29, 2020 at 2:17:58 AM UTC-5, Don Y wrote:
> On 1/28/2020 10:54 PM, Rick C wrote:
> >>> and the hardware still has to be capable of handling each of those
> >>> interrupts fast enough, consistently, that there will be no
> >>> flickering caused by the shortest possible time slot being prolonged or
> >>> shortened by delays in interrupt processing. This puts a load on the
> >>> overall design that this scheme cannot reduce.
> >>
> >> Obviously you're not a hardware person.
> >
> > I don't want to get pissy about it, but obviously you aren't a systems
> > person. The way to deal with jitter in the timing of the pulse edges is to
> > make the IRQ for this interrupt highest priority and not lock out interrupts
> > for any reason. Or the actual I/O can be done by timer driven DMA which
> > would be independent of the software as long as the DMA and timer are always
> > programmed in time to keep running without gaps. This would still need to
> > be done in an IRQ.
>
> Bwahahahaha... says "The Systems Guy" -- with NO KNOWLEDGE OF THE REST OF
> THE SYSTEM!
>
> Priceless!
As I was starting to think, a troll.

Yes, priceless indeed.

--

Rick C.

++ Get 1,000 miles of free Supercharging
++ Tesla referral code - https://ts.la/richard11209
On 1/29/2020 12:40 AM, Rick C wrote:
> On Wednesday, January 29, 2020 at 2:17:58 AM UTC-5, Don Y wrote:
>> On 1/28/2020 10:54 PM, Rick C wrote:
>>>>> and the hardware still has to be capable of handling each of those
>>>>> interrupts fast enough, consistently, that there will be no
>>>>> flickering caused by the shortest possible time slot being prolonged or
>>>>> shortened by delays in interrupt processing. This puts a load on the
>>>>> overall design that this scheme cannot reduce.
>>>>
>>>> Obviously you're not a hardware person.
>>>
>>> I don't want to get pissy about it, but obviously you aren't a systems
>>> person. The way to deal with jitter in the timing of the pulse edges is to
>>> make the IRQ for this interrupt highest priority and not lock out interrupts
>>> for any reason. Or the actual I/O can be done by timer driven DMA which
>>> would be independent of the software as long as the DMA and timer are always
>>> programmed in time to keep running without gaps. This would still need to
>>> be done in an IRQ.
>>
>> Bwahahahaha... says "The Systems Guy" -- with NO KNOWLEDGE OF THE REST OF
>> THE SYSTEM!
>>
>> Priceless!
>
> As I was starting to think, a troll.
>
> Yes, priceless indeed.
Awwww... little ricky is upset cuz I'm not going to PLAY with him! But, it's too hard teaching toddlers advanced system concepts... Anything beyond lights and bells would be too confusing!

Please feel free to add me to your kill file. I'll take you OUT of mine if I ever think you might have something to add on an issue of importance to me (but, clearly, you're clueless on THIS one!) I guess if I ever need a FORTH coder...

We'll have a great laugh at the next offsite discussing "make(ing) the IRQ for this interrupt highest priority" -- with NO knowledge of the rest of the system's requirements. Yup. Clueless.

Bye!

*plonk*
Rick C <gnuarm.deletethisbit@gmail.com> writes:
> As I was starting to think, a troll.
>
> Yes, priceless indeed.
The whole thread seemed weird to me. I can run eForth on those 60 cent dev boards using this 21 cent STM8 cpu that appears to have three PWM channels:

https://www.aliexpress.com/item/33052674926.html

I didn't read the whole thread, but who on earth needs to mess with doing PWM in software? Also, if you PWM LEDs, they flicker. Usually not enough to notice if you light something steadily with them, but if you look at a moving one you can see the strobing. It's no big deal.

Even that notorious 3 cent Padauk processor has PWM:

https://cpldcpu.wordpress.com/2019/08/12/the-terrible-3-cent-mcu/

Sheesh.
On 1/29/2020 1:04 AM, Paul Rubin wrote:
> The whole thread seemed weird to me. I can run eForth on those 60 cent
> dev boards using this 21 cent STM8 cpu that appears to have three PWM
> channels:
>
> https://www.aliexpress.com/item/33052674926.html
A CPU isn't a "solution"; just a component.
> I didn't read the whole thread, but who on earth needs to mess with
> doing PWM in software? Also, if you PWM LED's, they flicker. Usually
> not enough to notice if you light something steadily with them, but if
> you look at a moving one you can see the strobing. It's no big deal.
>
> Even that notorious 3 cent Padauk processor has PWM:
>
> https://cpldcpu.wordpress.com/2019/08/12/the-terrible-3-cent-mcu/
>
> Sheesh.
Wow! All of *3* channels! So, how many of these boards for 10 channels? Or, do a 4x3 mux? (gee, what VISUAL issues does that raise? how hard do you have to overdrive the LEDs to get a visible indication? Or, opt for high efficiency LEDs?)

What about 100 channels? 1000? (yes, there are products with that many "indicators" -- at various different cost points from tens of dollars to thousands of dollars)

You'd think nothing of designing custom HARDWARE to do this. Why not *software*? What *else* is the 21c MCU going to do for your system? Or, do you just treat them all as "peripherals" that chat with some "master CPU"? How fat will the pipe be between the master and those slaves? (how costly to transfer data to them for display)
On 29/01/2020 04:34, Don Y wrote:
> Hi David,
>
> On 1/28/2020 3:29 AM, David Brown wrote:
>> On 28/01/2020 01:53, Don Y wrote:
>>> Then, at the level above the driver, build a schedule for which
>>> intervals to activate the LED.
>>
>> It sounds doable, but complicated. I think you'd need a good many
>> different intensity levels before this would pay off - the overhead of
>> tracking and setting the different time intervals won't make sense until
>> you are using a huge number of levels. And if you wanted a huge number
>> of levels, you'd be using a better quality method of generating the PWM
>> signal that doesn't have as much jitter.
>
> There is a high parts cost to using, e.g., a programmable current
> source/sink per indicator. And, more data that needs to reside
> "outside" the CPU (i.e., *in* those devices). So, any practical
> or economical solution has to be implemented in software.
I appreciate why one might want to drive the LEDs from a microcontroller rather than an external LED driver device. If you only need simple control with unregulated current and no matching of brightness, it will be fine.

(If you need matched brightness, stability over long use or with different voltages and temperatures, RGB leds, higher power LEDs, or any combination of these - then LED driver chips are the way to do it. You don't save costs by making something that isn't good enough.)

However, what I don't (at least, not yet) understand is why you want some complicated scheme with varying periods in the timer, rather than a simple and obvious solution.
>
> The IRQ overhead is negligible; "do NOTHING in an ISR if it can be
> done OUTSIDE the ISR!" Hence my comment re: building a schedule
> (to feed the ISR):
How negligible IRQ overhead is depends on the microcontroller (you haven't told us which you are using). And how much work it makes sense to do in an IRQ depends somewhat on the microcontroller and a lot on the structure of the code. For example, if you can make this a low-priority interrupt and ensure that higher priority interrupts can still run, you can do lots of work within the interrupt function.

However, a simple bit-banged PWM of the type I suggested takes very little work in the interrupt function (a dozen assembly instructions or so, depending on the cpu) and no work outside it.
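The simple bit-banged PWM David refers to might look like this (a minimal sketch with hypothetical names; `pin` stands in for a GPIO write, and one LED is shown):

```c
#include <assert.h>
#include <stdint.h>

#define LEVELS 16u   /* duty 0..15, i.e. 16 levels including off */

static volatile uint8_t duty = 5;  /* desired level, set by background code */
static uint8_t tick;               /* cycles 0..LEVELS-2 (15 ticks per cycle) */
static int pin;                    /* stand-in for the LED output pin */

/* Called from a fixed-rate timer interrupt: classic counter-compare PWM.
   The LED is on for `duty` of the 15 ticks in each cycle. */
static void pwm_tick(void) {
    pin = (tick < duty);
    if (++tick >= LEVELS - 1)
        tick = 0;
}
```

The interrupt body is a compare, an increment, and a wrap test, which is the "dozen assembly instructions or so" mentioned above.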
>
> /* XXX PORTME -- hardware dependent (all) */
> typedef void (*isr_t)()
(If this is C, rather than C++, you want "void (*isr_t)(void)" here.)
> typedef ulonglong LED_t    // for example
(I strongly recommend <stdint.h> sized types for this sort of thing.)
>
> LED_t schedule[4];
> isr_t LED_isr;
>
> isr_t period1() {
>     *LED_LATCH = schedule[0];
>     LED_isr = &period2;
> }
>
> isr_t period2() {
>     *LED_LATCH = schedule[1];
>     LED_isr = &period4;
> }
>
> isr_t period4() {
>     *LED_LATCH = schedule[2];
>     LED_isr = &period8;
> }
>
> isr_t period8() {
>     *LED_LATCH = schedule[3];
>     LED_isr = &period1;
>     // refresh schedule[]
> }
A good rule for writing safe, reliable, analysable, and efficient code is never to use function pointers if you can avoid it. They are unlikely to be more efficient, compared to alternatives such as lookup tables, switches, conditionals or calculations - they cripple the compiler's optimiser. And they make it nearly impossible to automatically generate call graphs and otherwise view the flow of the code.

On many microcontrollers (but not Cortex-M devices), when an ISR is entered the hardware interrupt mechanism preserves only an absolute minimum of registers (PC and flags, typically). You mark the function with compiler-specific annotation so that it knows that any volatile (aka "caller save") registers need to be preserved if they are used. When the compiler knows which registers it needs to use, it will only save the ones it needs. But if it is calling an unknown function (such as calling via a function pointer), it must assume the callee will trash these registers - it needs to preserve them all.

Any time you have a set of possible function pointers like this - as a sort of state machine - a switch statement is almost always more efficient in the code (including on a Cortex-M) and almost always clearer.
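The switch-based alternative suggested here might be sketched like this (a hypothetical rework, using stand-ins for the quoted `LED_LATCH` register and `schedule[]`):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t LED_t;

static LED_t led_latch;        /* stand-in for the memory-mapped latch */
static LED_t schedule[4];
static unsigned phase;         /* which weighted period comes next, 0..3 */

/* One timer ISR body: a switch on the phase replaces the chained
   function pointers, so the compiler can see the whole flow and save
   only the registers it actually uses. */
static void led_isr(void) {
    led_latch = schedule[phase];
    switch (phase) {
    case 0:
    case 1:
    case 2:
        phase++;               /* advance to the next, twice-as-long period */
        break;
    default:
        phase = 0;             /* wrap; background code refreshes schedule[] */
        break;
    }
    /* (a real ISR would also reprogram the timer for the next period) */
}
```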
> If using an internal (programmable) timer, each of these
> ISRs would also jam the appropriate CONSTANTS into the
> timer hardware to setup the NEXT period.
I thought you wanted to support multiple LEDs? This system won't work, at least not if you have different brightnesses or phases for the LEDs.
>
> You can now build the schedule[] in a non-real-time context.
Keep the whole thing simple, and there is no need for a schedule at all.
>
>> As far as processor load goes, there are two key points. One is the
>> worst case rate of the IRQs - and that is determined by the lowest
>> digit, and will be the same for a regular pulse or for this complex
>> scheme. The other is the amount of processor work done in each IRQ, and
>> that will be massively more than with a simple system, outweighing it in
>> total work until you have a huge number of levels.
>
> There is no need to do any "work" in the ISR -- other
> than arranging for the PRECOMPUTED set of outputs to
> be delivered to the actual hardware. All of the
> "thinking" can be done when the application "decides"
> what intensity level it wants for a particular indicator
> (and the "schedule" is built).
The only reason that there is any "thinking" necessary, is because you want to introduce a scheduler. The interrupt function I wrote is going to take a similar time to the ones you have. Details of the processor, compiler, and implementation of actually turning the output on and off will affect this, but they will not hugely affect the relative performances. And if you prefer, you can cut it down even more by pre-generating the pattern mask and rotating that, instead of having a stepper mask - then the result is going to be smaller and faster than your interrupts. Which formulation is best depends on how many leds you have, and whether you need their patterns synchronised in some way.
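The pre-generated, rotated pattern mentioned above could be sketched as follows (hypothetical names; one LED, 15 ticks per cycle so duty 0..15 gives 16 levels):

```c
#include <assert.h>
#include <stdint.h>

#define PERIOD 15u                 /* ticks per refresh cycle */

/* Background code: build the on/off pattern once per intensity change. */
static uint16_t make_pattern(unsigned duty) {
    return (uint16_t)((1u << duty) - 1u);   /* `duty` contiguous on-ticks */
}

static uint16_t pattern;           /* current pattern, rotated by the ISR */
static int pin;                    /* stand-in for the LED output pin */

/* Timer ISR: emit the low bit, then rotate right within PERIOD bits. */
static void pwm_tick(void) {
    pin = pattern & 1u;
    pattern = (uint16_t)((pattern >> 1) | ((pattern & 1u) << (PERIOD - 1)));
}
```

The ISR does no arithmetic on intensities at all; after PERIOD ticks the pattern has rotated back to its starting value.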
>
> Latency *can* be a potential problem -- but one easily
> (and economically) solved by the addition of a second
> level of (hardware) buffering -- see my earlier replies.
How can latency be an issue for an LED? What kind of system are you designing where the interrupt here might have a jitter of over 10 milliseconds? A $0.40 microcontroller is over-powered for this job by a factor of about 1000, maybe 10000. Your scheduler idea would complicate matters and mean you might need a $0.50 microcontroller to have the program space to handle the function pointers, and only be 100 times overkill.
>
> Lean ISRs make it possible to drive the IRQ rate up (period
> down) to minimize the "consequences" of saccades. (though
> I'm not designing a *display* so the user isn't expected to
> be fixating on the presentation so much that this would
> be an annoyance)
A higher blink rate can reduce the risk of odd effects, and reduce current swings (as they will be handled by nearby capacitors). But if flickering or EMI is an issue, a capacitor and extra resistor per LED will help enormously.

I fully agree that ISRs should usually be kept fast. The method I suggest has a fast ISR.
>
>> Pick a refresh frequency of 100 Hz, which is fast enough that you won't
>> see blinking even if you move your eyes quickly. We'll have 16 levels,
>> which is more than enough for normal circumstances.
>
> "16" *seems* like a good number -- note that ANSI X3.64 gave us
> just a few colors and intensities to "present information". I
> don't recall anyone complaining that they wish they had more
> colors or intensities! I may find that 4 is more appropriate
> esp if they don't have to be geometrically related to each other
> in terms of durations -- like {0, 20, 80, 100%}.
This all depends on what you are looking for. There are a few possible scenarios for wanting different brightnesses on LEDs, including:

1. Indicating different states to the user. Here you want at most 3 states, perhaps 0%, 20%, 100%.
2. Letting the user pick brightness to suit ambient lighting. Four or five levels would be fine.
3. Giving the user the impression of continuous variation of levels. Something like 16 levels will do this job.
4. Matching RGB for fine colours. Here you need a lot more levels, such as 256, and should use a dedicated driver.

If you only need 4 levels, can you use two microcontroller pins? Then it's just two pins, two resistors, and four combinations - no PWM or timings at all.
>
> [Anything done with *hardware* "LED drivers" puts constraints
> on what you can do, going forward.]
All choices put constraints on the future - there are always trade-offs.
>
> As I said, originally, I'm trying to anticipate what *visual*
> consequences might arise from this approach before settling
> on an implementation. If it's not possible for a user to reliably
> differentiate between 16 levels, then why implement that many?
> Or, if two "level 7" indicators appear to have dramatically
> differing intensities, then the expectation of them being
> "the same" (or even "similar") is foolhardy.
>
> [N.B. The same could be true of a pure hardware implementation
> that sought to display similar intensities]
This all depends on what you are planning to do. We know nothing about your system, its users, what the LEDs will do, etc. Only you can figure this out. But these are definitely important questions that need to be asked and answered before you go forward.

All we can do at the moment is look at your ideas for making PWMs from timer interrupts, and see that your solution appears to be significantly more complicated than a much simpler and more straightforward solution, while providing no apparent benefit.
>> That puts your timer interrupt at 1600 Hz - not an onerous burden for
>> most systems. And being a regular timer interrupt, you can use the same
>> tick for other purposes, unlike your specialised one.
>
> Note that you, *still*, have a REGULAR, PERIODIC INTERRUPT
> (e.g., suitable for a system timer) by selecting ANY *one*
> of these to also invoke that service (obviously, you'd pick
> the isr_t that gives you the most latency tolerance before
> the NEXT scheduled invocation). E.g., in your example, there
> would be FOUR candidates for a "100Hz periodic interrupt";
> pick the one that is least "crowded" by the following ISRs
> activation.