
Camera interfaces

Started by Don Y December 29, 2022
On 12/29/2022 20:48, Richard Damon wrote:
> On 12/29/22 12:45 PM, Dimiter_Popoff wrote:
>> On 12/29/2022 19:21, Richard Damon wrote:
>>> On 12/29/22 8:33 AM, Dimiter_Popoff wrote:
>>>> On 12/29/2022 15:16, Don Y wrote:
>>>>> ISTR playing with de-encapsulated DRAMs as image sensors
>>>>> back in school (DRAM being relatively new technology, then).
>>>>>
>>>>> But, most cameras seem to have (bit- or word-) serial interfaces
>>>>> nowadays.  Are there any (mainstream/high volume) devices that
>>>>> "look" like a chunk of memory, in their native form?
>>>>
>>>> Hah, Don, consider yourself lucky if you find a camera you have
>>>> enough documentation to use at all, serial or whatever.
>>>>
>>>> The MIPI standards are only for politburo members (last time I looked
>>>> you need to make several millions annually to be able to *apply*
>>>> for membership, which of course costs thousands, annually again).
>>>
>>> If you are looking for the very latest standards, yes. Enough data is
>>> out there to handle a lot of basic MIPI operations. Since the small
>>> player isn't going to be trying to implement the low level interface
>>> themselves (or at least shouldn't be trying to),
>>
>> So how does one use a MIPI camera without using the low level interface?
>
> You use a chip that has a MIPI interface, either a CPU or FPGA with a
> built-in MIPI interface or a MIPI converter chip that converts the MIPI
> interface into something you can deal with.
An FPGA with MIPI would do; I have not looked for one yet.
>>> unless you are trying to work with a bleeding edge camera (which you
>>> probably can't actually buy if you are a small player) you can tend
>>> to find enough information to use the camera.
>>
>> That is fair enough, as long as we are talking about some internal
>> sensor specifics of the "bleeding edge" cameras.
>
> Bleeding edge cameras/displays may need newer versions of MIPI than may
> be easy to find in the consumer market. They may need bleeding edge
> processors.
Well, a 64-bit, GHz-range, 4- or 8-core Power Architecture part should
be plenty. But I am not after bleeding edge cameras; a decent one I can
control will do.
> As I mention below, more important are the configuration registers,
> which might be harder to get for bleeding edge parts. This is often
> proprietary, as knowing what is adjustable is often part of the secret
> sauce for those cameras.
Do you get that sort of data for decent cameras? Sort of like how
to focus it, etc.? Or do you have to rely on black box "converters",
like with WiFi modules which won't let you get around their TCP/IP
stack?
>>> My experience is if you can actually buy the camera normally, there
>>> will be the data available to use it.
>>
>> That's really reassuring. I am more interested in talking to MIPI
>> display modules than to cameras (at least the sequence is this) but
>> still.
>
> So you want a chip with MIPI DSI capability built in, or a converter chip.
Not really, no. I want to be able to put the framebuffer data into
the display like I have been doing with RGB, hsync, vsync, etc., via
a parallel or LVDS interface. Is there enough info out there on how
to do this with an FPGA? I think I have enough info to do HDMI this
way, but not MIPI. Well, my guess is that pixel data will still be
pixel data etc.; it can't be that hard.
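A rough illustration of that "pixel data is still pixel data" point: on
the DSI side, a line of RGB888 framebuffer data travels essentially
unchanged, wrapped in a 4-byte long-packet header. Below is a minimal C
sketch of the packing (standing in for what the FPGA logic would do
serially); the 0x3E data type and header layout come from publicly
available DSI summaries, while the header ECC and payload CRC are
spec-defined details stubbed out here.

/* Wrap one line of RGB888 pixels in a MIPI DSI long packet.
 * Header: Data ID (virtual channel + data type), 16-bit word
 * count (LSB first), ECC.  Payload is the raw pixel bytes,
 * followed by a 16-bit checksum. */
#include <stdint.h>
#include <string.h>

#define DSI_DT_RGB888 0x3E  /* packed pixel stream, 24-bit RGB */

/* Stub: the real ECC is a Hamming-style code over the first
 * 24 header bits, defined by the (members-only) DSI spec. */
static uint8_t dsi_header_ecc(const uint8_t hdr[3]) { (void)hdr; return 0; }

size_t dsi_pack_line(const uint8_t *rgb, int width, uint8_t vc, uint8_t *out)
{
    uint16_t wc = (uint16_t)(3 * width);            /* payload bytes */
    out[0] = (uint8_t)((vc << 6) | DSI_DT_RGB888);  /* Data ID       */
    out[1] = (uint8_t)(wc & 0xFF);                  /* word count    */
    out[2] = (uint8_t)(wc >> 8);
    out[3] = dsi_header_ecc(out);
    memcpy(&out[4], rgb, wc);                       /* pixels as-is  */
    out[4 + wc] = 0;                                /* CRC stub      */
    out[5 + wc] = 0;
    return (size_t)(4 + wc + 2);
}

The hard part is not this packing but the D-PHY serialization and lane
management underneath it, which is what the FPGA (or a converter chip)
has to provide.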
On 12/29/22 2:26 PM, Don Y wrote:
> On 12/29/2022 10:06 AM, Richard Damon wrote:
>> On 12/29/22 8:16 AM, Don Y wrote:
>>> [...]
>>
>> Using a DRAM in that manner would only give you a single bit value for
>> each pixel (maybe some more modern memories store multiple bits in a
>> cell so you get a few grey levels).
>
> I mentioned the DRAM reference only as an exemplar of how a "true"
> parallel, random access interface could exist.
Right, and cameras based on parallel random access do exist, but tend to be on the smaller and slower end of the spectrum.
>> There are some CMOS sensors that let you address pixels individually
>> and in a random order (like you got with the DRAM) but by its nature,
>> such a readout method tends to be slow, and space inefficient, so
>> these interfaces tend to be only available on smaller camera arrays.
>
> But, if you are processing the image, such an approach can lead to
> higher throughput than having to transfer a serial data stream into
> memory (thus consuming memory bandwidth).
My guess is that in almost all cases, the need to send the address to
the camera and then get back the pixel value is going to use up more
total bandwidth than getting the image in a stream. The one exception
would be if you need just a very small percentage of the array data,
and it is scattered over the array so a Region of Interest operation
can't be used.
>> That is why most sensors read out via row/column shift registers to a
>> pixel serial (maybe multiple pixels per clock) output, and if the
>> camera includes its own A/D conversion, might serialize the results to
>> minimize interconnect.
>
> Yes, but then you have to store it in memory in order to examine it.
> I.e., if your goal isn't just to pass the image out to a display,
> then having to unpack the serial stream into RAM is an added cost.
Unless you make sure you get a camera with the same image format and timing as your display.
On 12/29/2022 2:09 PM, Richard Damon wrote:
> On 12/29/22 2:26 PM, Don Y wrote:
>> [...]
>>
>> But, if you are processing the image, such an approach can lead to
>> higher throughput than having to transfer a serial data stream into
>> memory (thus consuming memory bandwidth).
>
> My guess is that in almost all cases, the need to send the address to
> the camera and then get back the pixel value is going to use up more
> total bandwidth than getting the image in a stream. The one exception
> would be if you need just a very small percentage of the array data,
> and it is scattered over the array so a Region of Interest operation
> can't be used.
No, you're missing the nature of the DRAM example.

You don't "send" the address of the memory cell desired *to* the DRAM.
You simply *address* the memory cell, directly.  I.e., if there are N
locations in the DRAM, then N addresses in your address space are
consumed by it; one for each location in the array.

I'm looking for *that* sort of "direct access" in a camera.

I could *emulate* it by building a module that implements <whatever>
interface to <whichever> camera and deserializes the data into a RAM.
Then, mapping that *entire* RAM into the address space of the host
processor.

(Keeping the RAM updated would require a pseudo dual-ported architecture;
possibly toggling between an "active" RAM and an "updated" RAM so that
the full bandwidth of the RAM was available to the host.)

Having the host processor (DMA, etc.) perform this task means it loses
bandwidth to the "deserialization" activity.
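To make that model concrete: with such a deserializer-plus-RAM module
mapped into the host's address space, a pixel read is just an ordinary
memory reference. A minimal sketch, assuming a hypothetical base address
and bank-select register (all names and addresses here are invented for
illustration):

/* "Camera as memory": the module fills one RAM bank while the
 * host reads the other.  CAM_BASE and CAM_BANK are hypothetical. */
#include <stdint.h>

#define CAM_WIDTH  640
#define CAM_HEIGHT 480
#define CAM_BASE   ((volatile uint8_t *)0x80000000u)   /* mapped RAM */
#define CAM_BANK   (*(volatile uint32_t *)0x80100000u) /* 0 or 1; the
                     deserializer toggles it at each end-of-frame   */

static inline uint8_t cam_pixel(int x, int y)
{
    /* N pixels consume N addresses: no command, no stream, no DMA. */
    volatile uint8_t *frame =
        CAM_BASE + (CAM_BANK ^ 1u) * (uint32_t)(CAM_WIDTH * CAM_HEIGHT);
    return frame[y * CAM_WIDTH + x];
}

(The host reads the bank the deserializer is *not* currently filling,
hence the XOR.)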
>> Yes, but then you have to store it in memory in order to examine it.
>> I.e., if your goal isn't just to pass the image out to a display,
>> then having to unpack the serial stream into RAM is an added cost.
>
> Unless you make sure you get a camera with the same image format and
> timing as your display.
I typically don't "display" the images captured. Rather, I use the
cameras as sensors: Is there anything in the path of the closing (or
opening) garage door that should cause me to inhibit/abort those
actions? Has the mail truck appeared at the mailbox yet, today? *Who*
is standing at the front door?
On 12/30/2022 0:57, Don Y wrote:
> [...]
>
> I could *emulate* it by building a module that implements <whatever>
> interface to <whichever> camera and deserializes the data into a RAM.
> Then, mapping that *entire* RAM into the address space of the host
> processor.
>
> (Keeping the RAM updated would require a pseudo dual-ported architecture;
> possibly toggling between an "active" RAM and an "updated" RAM so that
> the full bandwidth of the RAM was available to the host.)
>
> Having the host processor (DMA, etc.) perform this task means it loses
> bandwidth to the "deserialization" activity.
Well, of course, but are you sure you can really win much? At first
glance you'd be able to halve the memory bandwidth. But then you may run
into problems with a "Doppler" kind of effect (clearly not Doppler, but
you get the idea) if you access the frame being acquired; so you'll want
that double buffering you are talking about elsewhere (one frame being
acquired and one having been acquired prior to that). Which would mean
that somewhere, something will have to do the copying you want to
avoid...

Since you have already done it with USB cameras, I think the practical
way is to just keep doing it this way, maybe not USB if you can find
some more economical way to do it, MIPI or whatever.
On 12/29/22 3:11 PM, Dimiter_Popoff wrote:
> [...]
>
>> Bleeding edge cameras/displays may need newer versions of MIPI than
>> may be easy to find in the consumer market. They may need bleeding
>> edge processors.
>
> Well, a 64-bit, GHz-range, 4- or 8-core Power Architecture part should
> be plenty. But I am not after bleeding edge cameras; a decent one I can
> control will do.
Not bleeding edge in processor power, but in MIPI interfaces. I don't
know if the latest cameras are using a faster version of the MIPI
interface to move the pixels faster. If so, you need a chip with that
faster grade of MIPI interface.
>> As I mention below, more important are the configuration registers,
>> which might be harder to get for bleeding edge parts. [...]
>
> Do you get that sort of data for decent cameras? Sort of like how
> to focus it, etc.? Or do you have to rely on black box "converters",
> like with WiFi modules which won't let you get around their TCP/IP
> stack?
I haven't heard of team members having trouble getting specs for
actually-available product. No, we are a bit bigger than the "hobbyist"
market, but nowhere near the big boys. Our volumes would be in the 1000s
in some cases.
>> So you want a chip with MIPI DSI capability built in, or a converter chip.
>
> Not really, no. I want to be able to put the framebuffer data into
> the display like I have been doing with RGB, hsync, vsync, etc., via
> a parallel or LVDS interface. Is there enough info out there on how
> to do this with an FPGA? I think I have enough info to do HDMI this
> way, but not MIPI. Well, my guess is that pixel data will still be
> pixel data etc.; it can't be that hard.
(DSI is Display Serial Interface; that is the version of MIPI that a
MIPI display would use.) I have used Lattice CrossLink FPGAs to do that
sort of work. They are small gate arrays designed for protocol
conversion.
On 12/29/22 5:57 PM, Don Y wrote:
> [...]
>
> No, you're missing the nature of the DRAM example.
>
> You don't "send" the address of the memory cell desired *to* the DRAM.
> You simply *address* the memory cell, directly.  I.e., if there are N
> locations in the DRAM, then N addresses in your address space are
> consumed by it; one for each location in the array.
No, look at your DRAM timing again: the transaction begins with the
address being sent, typically over two clock edges with RAS and CAS, and
then, a couple of clock cycles later, you get the answer back on the
data bus. Yes, the addresses come from an address bus, using address
space out of the processor, but it is a multi-cycle operation.
Typically, you read back a "burst" with some minimal caching on the
processor side, but that is more a minor detail.
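For readers who haven't stared at a DRAM datasheet lately, a toy sketch
of the address multiplexing being described: one flat address is
presented in two halves, row first (with RAS) then column (with CAS),
and the data appears only some cycles later. The field width here is
generic, not from any particular part.

#include <stdint.h>

#define COL_BITS 10  /* assumed column-address width */

typedef struct { uint16_t row, col; } dram_addr_t;

static dram_addr_t dram_split(uint32_t flat)
{
    dram_addr_t a;
    a.row = (uint16_t)(flat >> COL_BITS);               /* RAS phase */
    a.col = (uint16_t)(flat & ((1u << COL_BITS) - 1u)); /* CAS phase */
    return a;
}

So "random access" is still a small transaction per reference; the point
of contention in the exchange below is who pays for it, and when.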
> I'm looking for *that* sort of "direct access" in a camera.
It's been a while, but I thought some CMOS cameras could work on a
similar basis: strobe a row/column address from pins on the camera, and
a few clock cycles later you get a burst out of the camera starting at
the addressed cell.
> [...]
On 12/29/2022 5:40 PM, Richard Damon wrote:
> [...]
>
>> You don't "send" the address of the memory cell desired *to* the DRAM.
>> You simply *address* the memory cell, directly.  I.e., if there are N
>> locations in the DRAM, then N addresses in your address space are
>> consumed by it; one for each location in the array.
>
> No, look at your DRAM timing again: the transaction begins with the
> address being sent, typically over two clock edges with RAS and CAS,
> and then, a couple of clock cycles later, you get the answer back on
> the data bus.
But it's a single memory reference. Look at what happens when you
deserialize a USB video stream into that same DRAM: the DMAC has tied up
the bus for the same amount of time that the processor would have if it
read those same N locations.
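Back-of-envelope numbers for that argument (VGA, 30 fps, 2 bytes/pixel,
all figures illustrative): streaming costs a DMA write per pixel *plus*
the CPU read during analysis, where a mapped frame costs the read alone.

#include <stdio.h>

int main(void)
{
    long w = 640, h = 480, fps = 30, bpp = 2;
    long dma_writes = w * h * bpp * fps;  /* deserializing into RAM */
    long cpu_reads  = w * h * bpp * fps;  /* the analysis pass      */

    printf("streamed path: ~%ld MB/s, mapped path: ~%ld MB/s\n",
           (dma_writes + cpu_reads) / 1000000,  /* ~36 vs ~18 */
           cpu_reads / 1000000);
    return 0;
}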
> [...]
>
>> I'm looking for *that* sort of "direct access" in a camera.
>
> It's been a while, but I thought some CMOS cameras could work on a
> similar basis: strobe a row/column address from pins on the camera,
> and a few clock cycles later you get a burst out of the camera
> starting at the addressed cell.
I don't want the camera to decide which pixels *it* thinks I want to
see. It sends me a burst of a row -- but the next part of the image I
may have wanted to access may have been down the same *column*, or in
another part of the image entirely.

Serial protocols inherently deliver data in a predefined pattern (often
intended for display). Scene analysis doesn't necessarily conform to
that same pattern.

E.g., if I've imposed a mask on the field to indicate portions that are
not important, then any bandwidth the camera spends delivering that data
to memory is wasted. If the memory was "just there", then there would be
no associated bandwidth impact.
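A sketch of what that buys during analysis: with the frame directly
addressable, only pixels inside regions of interest are ever touched, in
whatever order the algorithm wants. The roi list and mapped frame
pointer here are hypothetical.

#include <stdint.h>

typedef struct { int x, y, w, h; } roi_t;

/* Sum luma over each ROI of a memory-mapped frame.  A streaming
 * camera would have delivered -- and we'd have stored -- every
 * masked-off pixel first, spending bandwidth on data never used. */
long roi_activity(const volatile uint8_t *frame, int stride,
                  const roi_t *rois, int nrois)
{
    long sum = 0;
    for (int r = 0; r < nrois; r++)
        for (int y = rois[r].y; y < rois[r].y + rois[r].h; y++)
            for (int x = rois[r].x; x < rois[r].x + rois[r].w; x++)
                sum += frame[y * stride + x];
    return sum;
}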
On 12/29/2022 4:14 PM, Dimiter_Popoff wrote:
> [...]
>
>> Having the host processor (DMA, etc.) perform this task means it loses
>> bandwidth to the "deserialization" activity.
>
> Well, of course, but are you sure you can really win much? At first
> glance you'd be able to halve the memory bandwidth.
I'd save one memory reference, per pixel, per frame; the data is "just there" instead of having to be streamed in from a USB device and DMA'ed into memory.
> But then you may run into problems with a "Doppler" kind of effect
> (clearly not Doppler, but you get the idea) if you access the frame
> being acquired; so you'll want that double buffering you are talking
> about elsewhere (one frame being acquired and one having been acquired
> prior to that). Which would mean that somewhere, something will have to
> do the copying you want to avoid...
No. The "deserializer" could (conceivably) just toggle two (or N)
pointers to "captured frame" and "frame being captured".

You do this when synthesizing video for similar reasons: if you update
the area of the frame buffer that is being painted to the visual display
*as* it is being painted, objects that are "in motion" appear to "tear"
(visual artifacts).
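The toggle itself is tiny; a minimal sketch (a real design still needs a
handshake or fence so the swap can't race a reader mid-frame):

#include <stdint.h>

static uint8_t bank[2][640 * 480];
static volatile int capturing = 0;  /* bank being filled */

/* Called by the deserializer at end-of-frame (e.g., from an ISR). */
void frame_complete(void) { capturing ^= 1; }

/* Host side: the stable frame is always the *other* bank.  No pixel
 * is ever copied; the two roles just swap. */
const uint8_t *current_frame(void) { return bank[capturing ^ 1]; }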
> Since you have already done it with USB cameras, I think the practical
> way is to just keep doing it this way, maybe not USB if you can find
> some more economical way to do it, MIPI or whatever.
That was the issue I was exploring. I want to see the sort of performance and cost associated with different approaches. USB (and some of the camera protocols) are supported on much silicon. But, when you start wanting to run multiple cameras from the same host.... <frown>
Hi Don,

On Thu, 29 Dec 2022 12:29:46 -0700, Don Y
<blockedofcourse@foo.invalid> wrote:

>On 12/29/2022 6:33 AM, Dimiter_Popoff wrote:
>> [...]
>>
>> Not sure about USB, perhaps USB cameras are covered in the standard
>> (yet to deal with that one).
>
>I built my prototypes (proof-of-principle) using COTS USB cameras.
>But, getting the data out of the serial data stream and into RAM so
>it can be analyzed consumes memory bandwidth.
>
>I'm currently trying to sort out an approximate cost factor "per
>camera" (per video stream) and looking for ways that I can cut costs
>(memory bandwidth requirements) to allow greater numbers of
>cameras or higher frame rates.
You aren't going to find anything low cost ... if you want bandwidth
for multiple cameras, you need to look into bus-based frame grabbers.
They still exist, but are (relatively) expensive and getting harder to
find.

George
Hi George!

[Hope you are faring well... enjoying the COLD!  ;) ]

On 12/29/2022 10:29 PM, George Neuner wrote:
>>>> But, most cameras seem to have (bit- or word-) serial interfaces
>>>> nowadays.  Are there any (mainstream/high volume) devices that
>>>> "look" like a chunk of memory, in their native form?
>> I'm currently trying to sort out an approximate cost factor "per
>> camera" (per video stream) and looking for ways that I can cut costs
>> (memory bandwidth requirements) to allow greater numbers of
>> cameras or higher frame rates.
>
> You aren't going to find anything low cost ... if you want bandwidth
> for multiple cameras, you need to look into bus-based frame grabbers.
> They still exist, but are (relatively) expensive and getting harder to
> find.
So, my options are:

- reduce the overall frame rate such that N cameras can be serviced by
  the USB (or whatever) interface *and* the processing load (see the
  sketch below)
- reduce the resolution of the cameras (a special case of the above)
- reduce the number of cameras "per processor" (again, above)
- design a "camera memory" (frame grabber) that I can install in
  multiples on a single host
- develop distributed algorithms to allow more bandwidth to effectively
  be applied
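A rough budget sketch for the first option, assuming ~40 MB/s of usable
high-speed USB 2.0 throughput (a common rule of thumb, not a
measurement) and VGA frames at 2 bytes/pixel:

#include <stdio.h>

int main(void)
{
    double link = 40e6;            /* usable bytes/s, assumed */
    long w = 640, h = 480, bpp = 2;

    for (int n = 1; n <= 8; n *= 2)
        printf("%d camera(s): ~%.1f fps each\n",
               n, link / (n * w * h * bpp));  /* ~65, 33, 16, 8 */
    return 0;
}

Note this budgets the *link* only; the memory-bandwidth cost of
deserializing each stream into RAM scales with N as well, which is what
pushes toward the frame-grabber or distributed options.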