EmbeddedRelated.com

Video Controller :: Best Practices

Started by Vladimir Vassilevsky November 19, 2012
I have to design a video subsystem for an instrument. Nothing really fancy;
just a color TFT with a menu interface, fonts, bitmaps, graphic primitives,
plots and such. The graphics would be entirely CPU based; video memory is
part of main memory. The frame buffer is DMAed to the LCD directly, so the
DMA runs through memory continuously.
Video is not the main occupation of the system; it would probably take less
than 10% of the total CPU workload.
What are good practices today for hardware and software?

1. Could you recommend a graphics library available as platform-independent
source code?

2. The frame buffer is going to be part of the main memory of the system.
Should the frame buffer be cached or not? If the buffer is cached, should
the cache be set to write-back or write-through? (If the cache is WB, then
it has to be flushed on each frame.)

3. Is it worth synchronizing video memory updates to the frame rate or scan
rate? Or, perhaps, having two frame buffers swapped at each frame, so there
won't be any artifacts from interference between the frame rate and the
video update rate? How big an issue is synchronization, really?

4. What are the other unobvious issues, good practices, things to avoid?

Vladimir Vassilevsky
DSP and Mixed Signal Consultant
www.abvolt.com


On Mon, 19 Nov 2012 09:40:51 -0600, Vladimir Vassilevsky wrote:

> I have to design a video subsystem for instrument. Nothing really fancy;
> just color TFT with menu interface, fonts, bitmaps, graphic primitives,
> plots and such. The graphics would be entirely CPU based; video memory
> is part of main memory. The frame buffer is DMAed to LCD directly; so
> the DMA continuously runs through memory.
> Video is not main occupation of the system; perhaps it would take less
> then 10% of the total CPU workload.
> What are good today's practices for hardware and software?
>
> 1. Could you recommend graphics library available as platform
> independent source code?
Twelve years ago my answer would have been an emphatic "PEG, from Swell Software (http://www.swellsoftware.com/)." It would have been based on then-recent experience with happy coworkers quickly developing a quality product and raving about its ease of use. My recommendation today is still the same, only less emphatic, because my information is twelve years old. So take it with a grain of salt.
> 2. The frame buffer is going to be a part of main memory of the system.
> Should the frame buffer be cached or not? If the buffer is cached,
> should the cache be set to write back or write through? (If cache is WB,
> then it has to be flushed at each frame).
You should know the answer to that: it depends. I think it depends _a lot_ on your hardware: on which side of the core the DMA sees memory (a perverse system might DMA on the "processor" side of the cache), how much you'd slow the processor down writing to uncached memory, etc.

Were it me, and were the DMA normal (i.e., on the outside of the cache from the core), I'd go ahead and cache, and leave the choice between write-through and making a point of flushing the cache before each DMA is due as a tactical decision.
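As a sketch of the "flush before the DMA is due" option: clean every cache line overlapping the frame buffer before scan-out begins. The 32-byte line size and the `flush_line` hook are assumptions for illustration; on real hardware the hook would issue the core's cache-maintenance operation (e.g., clean-by-address on an ARM core).

```c
#include <stddef.h>
#include <stdint.h>

#define CACHE_LINE 32u  /* assumed line size; check your core's manual */

static unsigned g_lines_flushed;

/* Stand-in for the real clean/flush-by-address operation. */
static void flush_line(uintptr_t addr)
{
    (void)addr;
    ++g_lines_flushed;
}

/* Clean every cache line overlapping [buf, buf + len) so the scan DMA,
 * sitting outside the cache, sees the finished frame. */
static void flush_range(const void *buf, size_t len)
{
    uintptr_t a   = (uintptr_t)buf & ~(uintptr_t)(CACHE_LINE - 1u);
    uintptr_t end = (uintptr_t)buf + len;
    for (; a < end; a += CACHE_LINE)
        flush_line(a);
}
```

Note that the loop rounds the start address down to a line boundary, so a buffer that isn't line-aligned still gets its first and last partial lines cleaned.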
> 3. Is it worth to synchronize video memory updates to frame rate or scan
> rate? Or, perhaps, have two frame buffers swapped at each frame, so
> there won't be any artifacts because of interference between the frame
> rate and the video update rate? How big of the issue synchronization
> really is?
I think the answer to both is yes.

If you just synchronize video memory updates to the frame rate, but don't implement a second buffer, then you'll have to finish diddling the video memory (including any cache flushes) in the interval between the end of one video DMA and the start of the next.

If you use two buffers, then you have the overhead of copying from buffer A to buffer B when you switch, but you have the luxury of completely updating buffer B -- no matter how many frames it takes -- before you switch. That may leave your video slow, but never half-done in places.

Regardless of how you synchronize, if you ever pound the graphics engine with changes faster than the video can keep up with, you need to either figure out how to intelligently throw information away, or let the video update pace the human-interface part of your software without letting it bog down the rest of your system. You're smart enough not to do that.
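The two-buffer scheme described above might look like this in outline. The toy dimensions and the idea that `swap_buffers` runs from a frame-complete/vsync interrupt are illustrative assumptions:

```c
#include <stdint.h>
#include <string.h>

enum { FB_W = 8, FB_H = 4 };  /* toy dimensions for illustration */

static uint16_t buf_a[FB_W * FB_H], buf_b[FB_W * FB_H];
static uint16_t *front = buf_a;  /* buffer the scan DMA is reading  */
static uint16_t *back  = buf_b;  /* buffer the CPU draws into       */

/* Called once drawing is complete, from the frame/vsync interrupt:
 * swap roles, then copy the just-shown frame into the new back buffer
 * so the next partial redraw starts from a consistent image. */
static void swap_buffers(void)
{
    uint16_t *t = front;
    front = back;
    back  = t;
    memcpy(back, front, sizeof buf_a);
}
```

The `memcpy` is the copy overhead mentioned above; it can be skipped if every frame is redrawn from scratch, or avoided entirely with the DMA-pointer trick discussed later in the thread.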
> 4. What are the other unobvious issues, good practices, things to avoid?
Make sure that you still have enough processor and memory bandwidth _after_ all the details have been taken into account. In my (not terribly large) experience, it's easy to add up all the video accesses and forget about the overhead, particularly the overhead of asynchronous accesses, and particularly if you're using SDRAM.

Prototyping is never a bad idea: make some simple app (like the Windows "Mystify" screen saver) and run it concurrently with a fake (or real) app that does whatever real-time stuff you need to do, and make sure they don't stomp on each other. Prototyping is particularly important if you're using the processor's own memory management, or some shrink-wrapped FPGA interface: you have less control over what happens, so you can't necessarily schedule the accesses for the lowest overhead.

Without knowing what processor you're using I can't say, but if it works out that you need separate memory chips anyway (which I assume is unlikely), consider separating the video memory from the "regular old" memory, particularly if you're using an FPGA for your video control.

--
My liberal friends think I'm a conservative kook. My conservative friends think I'm a liberal kook. Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
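One way to do the bandwidth arithmetic recommended above, before any prototyping. The 480x272, 16 bpp, 60 Hz numbers are example assumptions for a small TFT, not anything from the thread:

```c
#include <stdint.h>

/* Scan-out bandwidth in bytes/second: every displayed byte crosses the
 * memory bus once per refresh, before any CPU drawing traffic at all. */
static uint64_t scan_bandwidth(uint32_t w, uint32_t h,
                               uint32_t bytes_per_px, uint32_t fps)
{
    return (uint64_t)w * h * bytes_per_px * fps;
}
```

For the assumed panel, `scan_bandwidth(480, 272, 2, 60)` comes to about 15.7 MB/s of refresh traffic alone; raw SDRAM bandwidth must cover that plus CPU drawing, code fetches, and the asynchronous-access overhead mentioned above.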
"Tim Wescott" <tim@seemywebsite.com> wrote:
> On Mon, 19 Nov 2012 09:40:51 -0600, Vladimir Vassilevsky wrote:
>> 2. The frame buffer is going to be a part of main memory of the system.
>> Should the frame buffer be cached or not? If the buffer is cached,
>> should the cache be set to write back or write through? (If cache is WB,
>> then it has to be flushed at each frame).
>
> You should know the answer to that: it depends. I think it depends _a
> lot_ on your hardware: which side of the core the DMA looks at memory (a
> perverse system might DMA on the "processor" side of the cache), how much
> you'd slow the processor down to write to un-cached memory, etc.
>
> Were it me, and were the DMA normal (i.e., on the outside of the cache
> from the core), I'd go ahead and cache, and leave the decision on
> write-through vs. making a point to flush the cache before a DMA is due
> as a tactical decision.
This is what I think: the video buffer should be covered by WB cache; otherwise there will be a lot of overhead for CPU access. For best results, cache lines should be mapped to square areas on the screen (like tiles). Which means the video address mapping should be different for the CPU and for the scan DMA.

Vladimir Vassilevsky
DSP and Mixed Signal Consultant
www.abvolt.com
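The tile idea, sketched with toy numbers: assume a 32-byte cache line, so a 4x4 square of 16-bit pixels fills exactly one line. The CPU and the scan DMA then address the same memory through two different mappings (the screen width and tile size here are illustrative):

```c
#include <stdint.h>

enum { SCREEN_W = 64, TILE = 4 };  /* toy width; 4x4 px of 16-bit
                                      pixels = 32 bytes = one assumed
                                      cache line */

/* Pixel offset the scan DMA uses: plain raster order. */
static uint32_t linear_offset(uint32_t x, uint32_t y)
{
    return y * SCREEN_W + x;
}

/* Pixel offset the CPU uses: each TILE x TILE square occupies one
 * contiguous cache line, so drawing a small rectangle dirties (and
 * later flushes) only a few lines. */
static uint32_t tiled_offset(uint32_t x, uint32_t y)
{
    uint32_t tile  = (y / TILE) * (SCREEN_W / TILE) + (x / TILE);
    uint32_t inner = (y % TILE) * TILE + (x % TILE);
    return tile * (TILE * TILE) + inner;
}
```

The two mappings are both bijections over the frame buffer, which is what lets the scan hardware (or a smarter DMA descriptor list) read out in raster order while the CPU writes in tile order.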
On Tue, 20 Nov 2012 08:44:31 -0600, Vladimir Vassilevsky wrote:

> "Tim Wescott" <tim@seemywebsite.com> wrote:
>> On Mon, 19 Nov 2012 09:40:51 -0600, Vladimir Vassilevsky wrote:
>>
>>> 2. The frame buffer is going to be a part of main memory of the
>>> system. Should the frame buffer be cached or not? If the buffer is
>>> cached, should the cache be set to write back or write through? (If
>>> cache is WB, then it has to be flushed at each frame).
>>
>> You should know the answer to that: it depends. I think it depends _a
>> lot_ on your hardware: which side of the core the DMA looks at memory
>> (a perverse system might DMA on the "processor" side of the cache), how
>> much you'd slow the processor down to write to un-cached memory, etc.
>>
>> Were it me, and were the DMA normal (i.e., on the outside of the cache
>> from the core), I'd go ahead and cache, and leave the decision on
>> write-through vs. making a point to flush the cache before a DMA is
>> due as a tactical decision.
>
> This is what I think:
> Video buffer should be covered by wb cache; otherwise there will be a
> lot of overhead for CPU access. For best results, cache lines should be
> mapped to square areas on the screen (like tiles). Which means the
> video address mapping should be different for the CPU and for scan DMA.
Hmm. It seems like the oddball mapping is only going to do you good if you know from the get-go that you'll only be updating portions of the screen at a time, and then only if it doesn't slow you down otherwise.

I do know that PEG will support you in your quest: you either have to, or you can, write the low-level functions that actually set pixels. This means that you can do any necessary address swizzling in those functions. Consider, though, that you may lose more time rearranging the pixels in software than you gain in cache efficiency.

--
My liberal friends think I'm a conservative kook. My conservative friends think I'm a liberal kook. Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
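If the tile dimensions are powers of two, the swizzle reduces to shifts and masks, which keeps the per-pixel cost worried about above down to a few cycles inside the set-pixel hook. The hook name and signature here are illustrative, not PEG's actual API, and the fixed 4x4 tile and toy screen width are assumptions:

```c
#include <stdint.h>

enum { SCREEN_W = 64 };  /* toy width, a multiple of the tile size */

/* Shift-and-mask swizzle, tile size fixed at 4x4: cheap enough to live
 * inside a per-pixel drawing hook. */
static inline uint32_t swizzle(uint32_t x, uint32_t y)
{
    return (((y >> 2) * (SCREEN_W >> 2) + (x >> 2)) << 4)
         | ((y & 3u) << 2) | (x & 3u);
}

/* The kind of low-level hook a library like PEG lets you supply: all
 * drawing funnels through here, so this is the one place where the
 * tiled layout is visible. */
static void set_pixel(uint16_t *fb, uint32_t x, uint32_t y, uint16_t c)
{
    fb[swizzle(x, y)] = c;
}
```

Even with shifts, a per-pixel function call is what you are paying for the cache locality, which is the trade-off named above.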
 Tim Wescott  wrote:
> If you use two buffers, then you have the overhead of copying from
> buffer A to buffer B when you switch,
A better approach, if possible, is to reconfigure the DMA controller on each buffer swap to transfer the data from the until-now inactive buffer - no copy needed.
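The pointer-swap scheme in outline; the one-field register block is a made-up stand-in for whatever your DMA controller actually provides, and the safe moment to rewrite it (usually the frame-complete interrupt) is controller-specific:

```c
#include <stdint.h>

/* Hypothetical scan-DMA register block. */
typedef struct {
    uintptr_t frame_base;  /* address reloaded at the start of each frame */
} scan_dma_t;

static uint16_t buf_a[64], buf_b[64];

/* At frame-complete: point the DMA at the buffer the CPU just finished
 * drawing, and hand the previously shown one back as the new drawing
 * target. No memcpy anywhere. */
static uint16_t *flip(scan_dma_t *dma, uint16_t *finished)
{
    uint16_t *shown = (uint16_t *)dma->frame_base;
    dma->frame_base = (uintptr_t)finished;
    return shown;  /* becomes the new back buffer */
}
```

The cost that remains is that each buffer is now two frames stale when you start drawing into it, so either redraw fully each frame or track dirty regions per buffer.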
> but you have the luxury of completely updating buffer B ...
--
Roberto Waltman
[ Please reply to the group, return address is invalid ]
On Mon, 19 Nov 2012 09:40:51 -0600, "Vladimir Vassilevsky"
<nospam@nowhere.com> wrote:

> I have to design a video subsystem for instrument....
>
> 1. Could you recommend graphics library available as platform
> independent source code?
Commercial: Heard good comments on Segger's emWin. No personal experience.
http://segger.com/emwin.html

Open source: DEPUI - ditto.
http://www.deleveld.dds.nl/depui33/depui.htm

--
Roberto Waltman
[ Please reply to the group, return address is invalid ]
