Reply by Don Y July 10, 2017
Hi Theo,

On 7/10/2017 2:09 AM, Theo Markettos wrote:
> Don Y <blockedofcourse@foo.invalid> wrote:
>> In some cases, I might want high frame rates at the expense of detail. In other cases, the exact opposite. Most of the frames may get discarded to concentrate on "areas of interest" -- based on an analysis of the *entire* frame (at a slower rate).
>>
>> I use cameras to access a variety of "field conditions" that are otherwise hard to instrument. Tweaking the software or optics (or even the choice of image sensor) is easier than having to move to a different processor (family) to meet a different application.
>
> One thing worth pointing out is that talking to a camera in CSI or other native format gives you essentially unprocessed RAW data. On a smartphone the ISP processor does (quite a lot of) image processing to turn the pixels into an acceptable JPEG image. This includes all of the vendor tricks like dual cameras, HDR, white balance, etc.
Yes. But, if you are doing anything beyond just shipping that video to a remote location, then you typically *want* access to the raw video. And, often don't care about color purity, etc.

E.g., if you are looking for signs of *motion* in an image, you want to be able to apply a mask to the raw image and then look for changes in the selected portions of the image. If the image has been compressed, this is harder to do in the general case.

Likewise, doing facial feature extraction/recognition, you don't really care if the flesh-tones are way off hue (e.g., greenish) as you're really looking for shapes, sizes and relative placements of those "features". Again, harder to do with a JPEG than with the raw pixels from which it was distilled.

In each of these cases, you might conditionally (or unconditionally) *ALSO* prepare a compressed data stream (to reduce transport/storage costs) to ship off to be recorded or post-processed elsewhere.
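By way of illustration, a minimal sketch of that kind of masked change detection, assuming frames arrive as raw 8-bit luma arrays; the mask geometry, threshold and frame size below are made-up placeholders rather than values from any particular sensor:

    import numpy as np

    def motion_in_mask(prev, curr, mask, threshold=12, min_changed=500):
        # prev, curr: 2-D uint8 arrays of raw luma, same shape
        # mask:       boolean array, True where we care about changes
        diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
        changed = (diff > threshold) & mask
        return np.count_nonzero(changed) >= min_changed

    # Hypothetical 640x480 sensor, watching only one rectangular region
    mask = np.zeros((480, 640), dtype=bool)
    mask[100:300, 200:500] = True

Working on the raw pixels keeps this to a subtract-and-compare per pixel; doing the same against a compressed stream means decoding every frame before you can even look at it.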
> If you just want JPEGs you can slap down a network connection, you might want a chip that does all the image processing for you, which is what you get in USB/etc land. If you want to attach to a raw sensor then be prepared to budget for some compute (and software) to tidy up the images for you.
>
> Theo
Reply by Theo Markettos July 10, 2017
Paul <paul@pcserviceselectronics.co.uk> wrote:
> People like Conexant, Averlogic and others do chips for converting stills or streams into JPEG, H.264, MPEG-2 or -4.
>
> Makes the processing requirements in the camera a LOT less
It depends what you want the images for. For example, here's the cropped RAW output of a Raspberry Pi CSI camera against the processed output:

https://www.pic-upload.de/view-30656067/2016-05-15-RAW-vs-Processed.jpg.html

To produce the image on the right you have to do various noise filtering and colour compensation. If this is video you have to do it on every frame. Less than JPEG, but not insignificant.

Theo
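To give a rough feel for the per-frame work being described, here is a sketch assuming the sensor hands over an 8-bit Bayer-mosaic frame and that OpenCV is available on the host; the Bayer order and filter settings are assumptions, not the Pi camera's actual parameters:

    import cv2
    import numpy as np

    def raw_to_viewable(bayer):
        # Demosaic (BGGR order assumed here); a real ISP would also do
        # black level, lens shading, gamma, sharpening, ...
        rgb = cv2.cvtColor(bayer, cv2.COLOR_BayerBG2BGR)

        # Grey-world white balance: scale each channel toward the overall mean
        means = rgb.reshape(-1, 3).mean(axis=0)
        gains = means.mean() / np.maximum(means, 1e-6)
        balanced = np.clip(rgb * gains, 0, 255).astype(np.uint8)

        # Mild noise filtering
        return cv2.bilateralFilter(balanced, 5, 40, 40)

Even this cut-down pipeline is several full-frame passes, and at video rates it has to run on every frame -- which is exactly the compute budget being pointed at when the data arrives raw over CSI instead of pre-cooked by an ISP.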
Reply by Paul July 10, 2017
In article <0ae*nenww@news.chiark.greenend.org.uk>, 
theom+news@chiark.greenend.org.uk says...
> Don Y <blockedofcourse@foo.invalid> wrote:
> > In some cases, I might want high frame rates at the expense of detail. In other cases, the exact opposite. Most of the frames may get discarded to concentrate on "areas of interest" -- based on an analysis of the *entire* frame (at a slower rate).
> >
> > I use cameras to access a variety of "field conditions" that are otherwise hard to instrument. Tweaking the software or optics (or even the choice of image sensor) is easier than having to move to a different processor (family) to meet a different application.
>
> One thing worth pointing out is that talking to a camera in CSI or other native format gives you essentially unprocessed RAW data. On a smartphone the ISP processor does (quite a lot of) image processing to turn the pixels into an acceptable JPEG image. This includes all of the vendor tricks like dual cameras, HDR, white balance, etc.
>
> If you just want JPEGs you can slap down a network connection, you might want a chip that does all the image processing for you, which is what you get in USB/etc land. If you want to attach to a raw sensor then be prepared to budget for some compute (and software) to tidy up the images for you.
>
> Theo
People like Conexant, Averlogic and others do chips for converting stills or streams into JPEG, H.264, MPEG-2 or -4.

Makes the processing requirements in the camera a LOT less

--
Paul Carpenter | paul@pcserviceselectronics.co.uk
<http://www.pcserviceselectronics.co.uk/> PC Services
<http://www.pcserviceselectronics.co.uk/LogicCell/> Logic Gate Education
<http://www.pcserviceselectronics.co.uk/fonts/> Timing Diagram Font
<http://www.badweb.org.uk/> For those web sites you hate
Reply by Theo Markettos July 10, 2017
Don Y <blockedofcourse@foo.invalid> wrote:
> In some cases, I might want high frame rates at the expense of detail. In other cases, the exact opposite. Most of the frames may get discarded to concentrate on "areas of interest" -- based on an analysis of the *entire* frame (at a slower rate).
>
> I use cameras to access a variety of "field conditions" that are otherwise hard to instrument. Tweaking the software or optics (or even the choice of image sensor) is easier than having to move to a different processor (family) to meet a different application.
One thing worth pointing out is that talking to a camera in CSI or other native format gives you essentially unprocessed RAW data. On a smartphone the ISP processor does (quite a lot of) image processing to turn the pixels into an acceptable JPEG image. This includes all of the vendor tricks like dual cameras, HDR, white balance, etc.

If you just want JPEGs you can slap down a network connection, you might want a chip that does all the image processing for you, which is what you get in USB/etc land. If you want to attach to a raw sensor then be prepared to budget for some compute (and software) to tidy up the images for you.

Theo
Reply by Don Y July 9, 2017
On 7/9/2017 3:58 AM, Paul wrote:
> In article <ojr6lv$188$1@dont-email.me>, blockedofcourse@foo.invalid says...
>>
>> I hacked together my "IP cameras" using some web cams for which I was able to track down some FOSS i/f implementation documentation. This was OK for a proof of concept. But, I now have to settle on "production" hardware.
>>
>> There seem to be a variety of i/f's for cameras:
>> - RS170 (requires hardware to digitize the signal; inflexible)
>> - parallel (video is digitized, just needs to be captured at pixel rate)
>> - USB (digitized but packed in proprietary transport protocols over USB)
>> - CSI (digitized with specialty hardware required in host)
>> - IP (digitized but packed in protocols over IP)
>>
>> The USB option is the easiest from the host's *hardware* point of view. But, seems to limit the devices that could be supported as camera vendors are loath to publish details that only *their* driver implementers should need.
>>
>> RS170 is... "passe"
>>
>> Parallel requires lots of signals to/from the host/camera.
>>
>> CSI seems to be primarily supported on hosts intended to address the mobile market.
>>
>> As USB is essentially "free" (hardware-wise), adding support for it as an *alternative* interface seems prudent.
>>
>> Relying on CSI seems like it will restrict my choice of processor(s) going forward (I'd like to standardize on *a* host platform and not have to support a variety) -- though that's where the volume lies (think camera in cell phones). OTOH, it may make getting components difficult (small fish, etc.)
>>
>> IP seems to have more costs than USB with little/no gain.
>>
>> Anyone been down this road who can share experiences? Probably only looking at 10K/yr...
>
> My problem is without knowing what type of data flow (what will happen to the video still or stream), rates you are requiring and for how many sources it is difficult to suggest anything.
*One* camera paired to *one* processor. The demands made of the camera can vary, though -- and the processor does other things besides just conditioning video for delivery to a "remote".

In some cases, I might want high frame rates at the expense of detail. In other cases, the exact opposite. Most of the frames may get discarded to concentrate on "areas of interest" -- based on an analysis of the *entire* frame (at a slower rate).

I use cameras to access a variety of "field conditions" that are otherwise hard to instrument. Tweaking the software or optics (or even the choice of image sensor) is easier than having to move to a different processor (family) to meet a different application.
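A sketch of that two-rate idea -- analyze the whole frame occasionally, then touch only the selected region on the frames in between; the analysis and per-region functions here are hypothetical stand-ins:

    def run(frames, analyze_full_frame, process_region, analysis_interval=30):
        # frames yields raw 2-D frame arrays
        # analyze_full_frame(frame) -> (x, y, w, h) region of interest
        region = None
        for n, frame in enumerate(frames):
            if region is None or n % analysis_interval == 0:
                region = analyze_full_frame(frame)    # slow, whole frame
            else:
                x, y, w, h = region
                process_region(frame[y:y+h, x:x+w])   # fast, cropped view only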
> If you want an I/F to take almost any resolution from multiple cameras to then do something with, even as HDMI or LVDS then you could look at Averlogic AL360 and AL361 devices, all relatively new and they send samples around the world.
>
> See
> http://www.averlogic.com/AL362.asp
> and
> http://www.averlogic.com/AL361.asp
<frown> At first glance they look like they're intended for a different market. I'll have to spend some more time looking at them, though. Thanks!
> In the past I have used various devices from people like Conexant when I was building something with my own sensors and lenses. Aptina can be easier to get small supplies of sensors than others.
I recall an 8 camera RS170 interface that they made some years ago. But, I think that's an obsolescent (if not already obsolete!) path for designs going forward.

There (currently) seems to be a disconnect between the consumer (video/surveillance) camera market and the "cell phone" camera market. Will this persist? Or, will we start seeing 4MP (and higher) as the new normal for consumer camera kit of all types? (i.e., because the cell phone volumes end up driving ALL sensor production)
Reply by Don Y July 9, 2017
Hi Joe,

On 7/8/2017 3:06 PM, Joe Chisolm wrote:
> On Sat, 08 Jul 2017 11:05:26 -0700, Don Y wrote:
>
>> I hacked together my "IP cameras" using some web cams for which I was able to track down some FOSS i/f implementation documentation. This was OK for a proof of concept. But, I now have to settle on "production" hardware.
>>
>> There seem to be a variety of i/f's for cameras:
>> - RS170 (requires hardware to digitize the signal; inflexible)
>> - parallel (video is digitized, just needs to be captured at pixel rate)
>> - USB (digitized but packed in proprietary transport protocols over USB)
>> - CSI (digitized with specialty hardware required in host)
>> - IP (digitized but packed in protocols over IP)
> I've been looking for a 2MP 30fps solution. For 10K units the China guys will probably give you a deal and customize if you want to go that way.
But, they're likely to customize a *camera*, not "my application" (camera is just one of many "peripherals" in this device).
> Alibaba has a boatload of pages of people making IP cameras based on the HiSilicon Hi3516C chip and different image sensors.
That would, essentially, be the "buy a custom camera PRODUCT" route -- typically USB I/O. I'd still talk to it as a USB peripheral (the 3516 would be a bad match for the other needs that I have; it's designed with "I'm a camera" in mind).
> Lot of options for the CSI route. Cypress has a CSI->USB chip. A PI zero for 10 bucks and a camera is also an easy way to get to USB. A PI zero and a USB->Enet will get you an IP camera. A PI 3 and a camera will get you a netcam but at that price you might as well just buy an IP camera.
Yes, it's easy to QUICKLY end up at the "high level interface" approach to the "camera component".
> TI has a CSI-2 FPD-Link III Serializer which I really need but it's still showing pre-production. With that you could plug a CSI camera in one end, use a twisted pair and get CSI or LVDS out the other end with one of their deserializers.
In my case, the twisted pair would just be a few inches long. But, the "camera conditioning" (SERDES) would necessitate making the "camera PCB" larger (than just the bare sensor with *its* I/O connections).
> Finally there are several articles about interfacing the CSI physical layer to an FPGA and some open source VHDL to deal with the link interface. I'd really like to shove the CSI, H.264 and Ethernet in an FPGA. But the price of the PI Zero starts eating your lunch. I need something smaller but might have to live with it for now.
In my case, I want the "local processor" to actually *deal* with the imagery, not just pull it off the camera, encode and encrypt it for "remote handling". And, the needs of one camera instance will be different than others (e.g., surveilling outdoor areas is different than recognizing faces). So, choice of camera sensor as well as code behind it will vary from one "camera" to the next.
> Price wise it seems the PI zero and a camera module is the least expensive. I got one of the China IP camera modules and it sucked. Ran REAL HOT. Severe video lag when moving the camera. Interesting if you telnet to the unit the prompt is "ak47 login:"
I'd like to be able to trade dollars for performance (as above). But, don't want to have to design different "host hardware" for each possible "video price/performance point".

It seems like USB is the "safe" bet in that I can always play "big customer" by approaching a COTS camera supplier with a big buy and a request for protocol details (under NDA). Then, just worry about a four conductor cable traveling to the "sensor".

The downside is there is a lot of overhead in the USB stack. So, maybe prune it down to something that KNOWS the camera is its only peripheral?
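For contrast, the "high level interface" end of that trade looks like this when the camera speaks the standard UVC class and sits behind a full Linux V4L2/OpenCV stack; the device index and mode below are placeholders, and plenty of vendor cameras don't behave this politely:

    import cv2

    cap = cv2.VideoCapture(0)                  # first UVC camera on the host
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)    # requested mode; the driver may
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)    # substitute whatever it supports
    cap.set(cv2.CAP_PROP_FPS, 30)

    ok, frame = cap.read()                     # one decoded BGR frame
    if ok:
        print(frame.shape)
    cap.release()

The ease comes from a very general USB/UVC/V4L2 stack underneath; on a dedicated host that only ever talks to one known camera, much of that generality is dead weight -- which is the motivation for pruning it down.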
Reply by Don Y July 9, 2017
>> [Still 40+C every day. 18 consecutive days with a high temperature above 40C. Those huge icicles sure would be nice right about now! :> ]
>
> We had 2-3 hot days (up to 38C) and then a cold wave came, now we are below 30. Pretty good during the day but the nights get below 20C, pretty unusual for July here. All that with the very long winter we had, average annual temperatures will likely be way below the norm.
Our *nights* just barely fall below 30C. Average 24 hour temperature is often 30C+. Rains will start (hopefully) soon to moderate these temperatures a bit (maybe 5-7C) -- at the expense of humidity.
>> On 7/8/2017 11:30 AM, Dimiter_Popoff wrote:
>>> On 08.7.2017 г. 21:05, Don Y wrote:
>>>> Relying on CSI seems like it will restrict my choice of processor(s) going forward (I'd like to standardize on *a* host platform and not have to support a variety) -- though that's where the volume lies (think camera in cell phones). OTOH, it may make getting components difficult (small fish, etc.)
>>>>
>>>> IP seems to have more costs than USB with little/no gain.
>>>>
>>>> Anyone been down this road who can share experiences? Probably only looking at 10K/yr...
>>>
>>> The obvious choice would be the MIPI cameras - like in every phone.
>>
>> Yes, MIPI being the backers of the CSI approach. Its great appeal is so few conductors to the camera and megapixel support!
>>
>>> A few years ago I looked into that; I found out they (MIPI) had some special (politburo) membership fee if you were a "small enterprise", "small" meant if you had < $100M annual revenue IIRC...
>>
>> I'd be more worried about actually getting *parts* than "specs" (which is what the "membership fee" provides). There's a point (sales volume) below which folks addressing markets with HUGE customers simply don't want to be bothered.
>>
>> Or, you worry about the part being "on allocation", etc.
>
> I don't see how I can get to these worries. The CSI specification is "members only".
Yes. But paying for membership can be a one-time thing (to access specs). Or, accessed by "other means"... :>
> Then this market is so huge there will always be some stock unless you are after the latest versions. Or if you want to go into the volumes of say Samsung but in this case you'd have allocation issues whatever type you go for, probably worse than with mipi.
It's often an issue of finding someone willing to *talk* to you for orders that are considered "tiny". You'd be dealing direct with the manufacturer so wouldn't really have any leverage with a local disti (you might be a "big customer" to a local disti but small-fry to the manufacturer).

Dunno. It's always a crap shoot trying to guess where the *practical* market is headed so you don't design in the "right" device for the application only to discover it's the *wrong* device for the "product".
Reply by Paul July 9, 2017
In article <ojr6lv$188$1@dont-email.me>, blockedofcourse@foo.invalid 
says...
>
> I hacked together my "IP cameras" using some web cams for which I was able to track down some FOSS i/f implementation documentation. This was OK for a proof of concept. But, I now have to settle on "production" hardware.
>
> There seem to be a variety of i/f's for cameras:
> - RS170 (requires hardware to digitize the signal; inflexible)
> - parallel (video is digitized, just needs to be captured at pixel rate)
> - USB (digitized but packed in proprietary transport protocols over USB)
> - CSI (digitized with specialty hardware required in host)
> - IP (digitized but packed in protocols over IP)
>
> The USB option is the easiest from the host's *hardware* point of view. But, seems to limit the devices that could be supported as camera vendors are loath to publish details that only *their* driver implementers should need.
>
> RS170 is... "passe"
>
> Parallel requires lots of signals to/from the host/camera.
>
> CSI seems to be primarily supported on hosts intended to address the mobile market.
>
> As USB is essentially "free" (hardware-wise), adding support for it as an *alternative* interface seems prudent.
>
> Relying on CSI seems like it will restrict my choice of processor(s) going forward (I'd like to standardize on *a* host platform and not have to support a variety) -- though that's where the volume lies (think camera in cell phones). OTOH, it may make getting components difficult (small fish, etc.)
>
> IP seems to have more costs than USB with little/no gain.
>
> Anyone been down this road who can share experiences? Probably only looking at 10K/yr...
My problem is without knowing what type of data flow (what will happen to the video still or stream), rates you are requiring and for how many sources it is difficult to suggest anything.

If you want an I/F to take almost any resolution from multiple cameras to then do something with, even as HDMI or LVDS then you could look at Averlogic AL360 and AL361 devices, all relatively new and they send samples around the world.

See
http://www.averlogic.com/AL362.asp
and
http://www.averlogic.com/AL361.asp

In the past I have used various devices from people like Conexant when I was building something with my own sensors and lenses. Aptina can be easier to get small supplies of sensors than others.

--
Paul Carpenter | paul@pcserviceselectronics.co.uk
<http://www.pcserviceselectronics.co.uk/> PC Services
<http://www.pcserviceselectronics.co.uk/LogicCell/> Logic Gate Education
<http://www.pcserviceselectronics.co.uk/fonts/> Timing Diagram Font
<http://www.badweb.org.uk/> For those web sites you hate
Reply by Dimiter_Popoff July 8, 2017
On 08.7.2017 г. 23:41, Don Y wrote:
> Hi Dimiter,
>
> [Still 40+C every day. 18 consecutive days with a high temperature above 40C. Those huge icicles sure would be nice right about now! :> ]
We had 2-3 hot days (up to 38C) and then a cold wave came, now we are below 30. Pretty good during the day but the nights get below 20C, pretty unusual for July here. All that with the very long winter we had, average annual temperatures will likely be way below the norm.
>
> On 7/8/2017 11:30 AM, Dimiter_Popoff wrote:
>> On 08.7.2017 г. 21:05, Don Y wrote:
>>> I hacked together my "IP cameras" using some web cams for which I was able to track down some FOSS i/f implementation documentation. This was OK for a proof of concept. But, I now have to settle on "production" hardware.
>>>
>>> There seem to be a variety of i/f's for cameras:
>>> - RS170 (requires hardware to digitize the signal; inflexible)
>>> - parallel (video is digitized, just needs to be captured at pixel rate)
>>> - USB (digitized but packed in proprietary transport protocols over USB)
>>> - CSI (digitized with specialty hardware required in host)
>>> - IP (digitized but packed in protocols over IP)
>>>
>>> The USB option is the easiest from the host's *hardware* point of view. But, seems to limit the devices that could be supported as camera vendors are loath to publish details that only *their* driver implementers should need.
>>>
>>> RS170 is... "passe"
>>>
>>> Parallel requires lots of signals to/from the host/camera.
>>>
>>> CSI seems to be primarily supported on hosts intended to address the mobile market.
>>>
>>> As USB is essentially "free" (hardware-wise), adding support for it as an *alternative* interface seems prudent.
>>>
>>> Relying on CSI seems like it will restrict my choice of processor(s) going forward (I'd like to standardize on *a* host platform and not have to support a variety) -- though that's where the volume lies (think camera in cell phones). OTOH, it may make getting components difficult (small fish, etc.)
>>>
>>> IP seems to have more costs than USB with little/no gain.
>>>
>>> Anyone been down this road who can share experiences? Probably only looking at 10K/yr...
>>
>> The obvious choice would be the MIPI cameras - like in every phone.
>
> Yes, MIPI being the backers of the CSI approach. Its great appeal is so few conductors to the camera and megapixel support!
>
>> A few years ago I looked into that; I found out they (MIPI) had some special (politburo) membership fee if you were a "small enterprise", "small" meant if you had < $100M annual revenue IIRC...
>
> I'd be more worried about actually getting *parts* than "specs" (which is what the "membership fee" provides). There's a point (sales volume) below which folks addressing markets with HUGE customers simply don't want to be bothered.
>
> Or, you worry about the part being "on allocation", etc.
I don't see how I can get to these worries. The CSI specification is "members only".

Then this market is so huge there will always be some stock unless you are after the latest versions. Or if you want to go into the volumes of say Samsung but in this case you'd have allocation issues whatever type you go for, probably worse than with mipi.

Dimiter
Reply by Joe Chisolm July 8, 2017
On Sat, 08 Jul 2017 11:05:26 -0700, Don Y wrote:

> I hacked together my "IP cameras" using some web cams for which I was able to track down some FOSS i/f implementation documentation. This was OK for a proof of concept. But, I now have to settle on "production" hardware.
>
> There seem to be a variety of i/f's for cameras:
> - RS170 (requires hardware to digitize the signal; inflexible)
> - parallel (video is digitized, just needs to be captured at pixel rate)
> - USB (digitized but packed in proprietary transport protocols over USB)
> - CSI (digitized with specialty hardware required in host)
> - IP (digitized but packed in protocols over IP)
>
> The USB option is the easiest from the host's *hardware* point of view. But, seems to limit the devices that could be supported as camera vendors are loath to publish details that only *their* driver implementers should need.
>
> RS170 is... "passe"
>
> Parallel requires lots of signals to/from the host/camera.
>
> CSI seems to be primarily supported on hosts intended to address the mobile market.
>
> As USB is essentially "free" (hardware-wise), adding support for it as an *alternative* interface seems prudent.
>
> Relying on CSI seems like it will restrict my choice of processor(s) going forward (I'd like to standardize on *a* host platform and not have to support a variety) -- though that's where the volume lies (think camera in cell phones). OTOH, it may make getting components difficult (small fish, etc.)
>
> IP seems to have more costs than USB with little/no gain.
>
> Anyone been down this road who can share experiences? Probably only looking at 10K/yr...
I've been looking for a 2MP 30fps solution. For 10K units the China guys will probably give you a deal and customize if you want to go that way.

Alibaba has a boatload of pages of people making IP cameras based on the HiSilicon Hi3516C chip and different image sensors.

Lot of options for the CSI route. Cypress has a CSI->USB chip. A PI zero for 10 bucks and a camera is also an easy way to get to USB. A PI zero and a USB->Enet will get you an IP camera. A PI 3 and a camera will get you a netcam but at that price you might as well just buy an IP camera.

TI has a CSI-2 FPD-Link III Serializer which I really need but it's still showing pre-production. With that you could plug a CSI camera in one end, use a twisted pair and get CSI or LVDS out the other end with one of their deserializers.

Finally there are several articles about interfacing the CSI physical layer to an FPGA and some open source VHDL to deal with the link interface. I'd really like to shove the CSI, H.264 and Ethernet in an FPGA. But the price of the PI Zero starts eating your lunch. I need something smaller but might have to live with it for now.

Price wise it seems the PI zero and a camera module is the least expensive. I got one of the China IP camera modules and it sucked. Ran REAL HOT. Severe video lag when moving the camera. Interesting if you telnet to the unit the prompt is "ak47 login:"

--
Chisolm
Republic of Texas