EmbeddedRelated.com

Tablet/phone cameras

Started by Don Y June 16, 2011
On 17.06.2011 20:08, Don Y wrote:

> I'm not claiming there is anything *wrong* with it! :> Rather,
> I am wondering how much of a "fortunate circumstance" this is for
> the phone implementor? I.e., knowing he only has to *display*
> ~1/4 of the data that he's taking in?
None --- people do expect those images to look good shown on other displays, including ones considerably better than the phone's own one. And the image display has zooming capability, too, so you don't get away with all that much nonsense even before the image is uploaded somewhere. This mismatch between sensor resolution and on-camera display resolution has been with us since just about day 1 of digital photography. There is a reason most digicams still have viewfinders of some design.
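To put rough numbers on that mismatch (the figures below are illustrative, not the specs of any particular phone), consider a modest ~1.2 MP sensor feeding a VGA-class preview screen:

```python
# Rough illustration of the sensor-vs-display mismatch discussed above.
# Illustrative figures only, not any particular phone's specs.
sensor_w, sensor_h = 1280, 960      # ~1.2 MP sensor
display_w, display_h = 640, 480     # VGA-class preview screen

sensor_px = sensor_w * sensor_h
display_px = display_w * display_h

# Fraction of the captured pixels the on-camera preview can actually show
fraction_shown = display_px / sensor_px
print(f"sensor: {sensor_px} px, display: {display_px} px")
print(f"preview shows ~{fraction_shown:.0%} of the captured data")
```

With those numbers the preview shows exactly 25% of the captured pixels, the "~1/4" figure quoted above; higher-resolution sensors against the same screen push the fraction lower still.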
Hi Hans-Bernhard,

On 6/17/2011 1:03 PM, Hans-Bernhard Bröker wrote:
> On 17.06.2011 20:08, Don Y wrote:
>
>> I'm not claiming there is anything *wrong* with it! :> Rather,
>> I am wondering how much of a "fortunate circumstance" this is for
>> the phone implementor? I.e., knowing he only has to *display*
>> ~1/4 of the data that he's taking in?
>
> None --- people do expect those images to look good shown on other
> displays, including ones considerably better than the phone's own one.
My point was concerned with the fact that the camera (software) need not deal with *processing* all that data for *display* in real time. Presumably, this reduces the demands on that process (a "fortunate circumstance" for the phone implementor).
> And the image display has zooming capability, too, so you don't get away
> with all that much nonsense even before the image is uploaded somewhere.
>
> This mismatch between sensor resolution and on-camera display resolution
> has been with us since just about day 1 of digital photography. There is
> a reason most digicams still have viewfinders of some design.
On Jun 17, 2:17 pm, Don Y <nowh...@here.com> wrote:

> > overlay window to perform YUV->RGB conversion. Exposure and white
> > balance are typically adjusted on the fly by host-side software but
> > this is certainly not universally true, especially for the USB
>
> But these aren't (as) important when processing monochromatic
> video (e.g., barcode symbols).
These sensors don't have a heck of a lot of dynamic range, so it's easy to wash out even a barcode. I think you really would be well served by buying a couple - for example, go to www.dealextreme.com or www.merimobiles.com and look at some of their shanzhai phones and tablets - and just experiment with them using the apps available in their app stores. None that I'm aware of support real-time decode - you acquire a picture, and then they analyze it. The augmented reality apps that work on a printed orientation glyph do almost exactly what you're talking about, but they don't need to read much information out of the shape, and they also incorporate other sensor input to make inferences about how the shape is moving around in the FOV.
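The washout failure mode is easy to see with a toy model: treat one scanline of a barcode as 8-bit samples, apply an exposure gain, and clip at the sensor's full scale. All the numbers here are made up for illustration.

```python
# Toy sketch of how limited dynamic range washes out a barcode.
# An idealized 1-D scanline: dark bars (~20) and light spaces (~200)
# on the 0-255 scale of an 8-bit sensor. Values are illustrative.
scanline = [20, 200, 20, 20, 200, 200, 20, 200]

def expose(pixels, gain):
    """Apply an exposure gain and clip to the sensor's 8-bit range."""
    return [min(255, max(0, int(p * gain))) for p in pixels]

def contrast(pixels):
    """Peak-to-peak difference: what a decoder has to work with."""
    return max(pixels) - min(pixels)

well_exposed = expose(scanline, 1.0)
over_exposed = expose(scanline, 16.0)  # bright scene, no exposure headroom

print(contrast(well_exposed))  # bars vs. spaces clearly separated
print(contrast(over_exposed))  # everything clips to 255; the code is gone
```

Once both the bars and the spaces clip to full scale, the contrast a decoder depends on drops to zero, which matches the observation that even a high-contrast target like a barcode can wash out.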
> > These sensors also typically require fairly long exposure times, so
>
> So, "motion video" is, of necessity, pretty poor quality?
Varies :) For the low end, absolutely. At the extreme low end, even the resolution is very constrained.
> have to deal with already. E.g., I don't have to worry about
> compression because I'm not saving images. How the video is
You don't care about it, but the APIs available to you may implicitly turn on those features. Getting fancy behind the scenes by poking at the power management features of the system will likely get the drivers/OS very confused.
> need to be done *eventually*. Annoying to have a successful
> prototype and then have to "start over" to come up with a
> *real* product -- about which the prototype will have taught you
(choosing words with some care here) With a similar though not identical problem to solve, a large corporation with which I'm familiar has decided to approach this problem entirely at the app layer, and qualify individual devices.
On 6/17/2011 11:23 AM, linnix wrote:
>>>>> - does the use of the camera (in movie mode) eat batteries?
>
>>> Yes, you have to keep charging it.
>
>> Every 3 minutes? 2 hours? etc.? You have to "keep charging" a
>> cell phone *regardless*... the point is how much of an impact
>> it makes on battery life, relatively speaking. (e.g., does
>> phone w/ radio off, camera on consume as much/more/less than
>> phone w/ radio on, camera off, etc.?)
>
> I took 10 pictures and the battery dropped from 66% to 65%. So, I guess
> you can take 1000 pictures on a full charge.
<grin> I'm not sure if that extrapolation necessarily follows from the empirical evidence. :> If that were the case, you could take 60 seconds of 15FPS video (?)
> larwe: USB sensors are impractical beyond VGA. Need lots of memory
> and bandwidth on chip to do so. The 3M pixel sensor of the VS740 has a
> parallel data bus into the system DSP. Pictures are first captured in
> SDRAM, then written to the SD card. However, in Linux, you can probably
> memory-map the SDRAM for processing, but you need rooted Android. The
> camera software library is available in C++ (unsupported), but a wrapper
> in Java (supported) is available.
Pointer to a URL?
On Jun 18, 2:46 pm, Don Y <nowh...@here.com> wrote:
> On 6/17/2011 11:23 AM, linnix wrote:
>
> >>>>> - does the use of the camera (in movie mode) eat batteries?
>
> >>> Yes, you have to keep charging it.
>
> >> Every 3 minutes? 2 hours? etc.? You have to "keep charging" a
> >> cell phone *regardless*... the point is how much of an impact
> >> it makes on battery life, relatively speaking. (e.g., does
> >> phone w/ radio off, camera on consume as much/more/less than
> >> phone w/ radio on, camera off, etc.?)
>
> > I took 10 pictures and the battery dropped from 66% to 65%. So, I guess
> > you can take 1000 pictures on a full charge.
>
> <grin> I'm not sure if that extrapolation necessarily
> follows from the empirical evidence. :> If that were
> the case, you could take 60 seconds of 15FPS video (?)
The pictures were taken over 20 to 30 seconds. So, you can probably run the phone for about an hour (pictures or video). The processors (600MHz CPU + 400MHz DSP) are the main battery killer anyway.
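A quick back-of-the-envelope check of those figures (10 pictures in roughly 25 seconds costing ~1% of charge; all numbers approximate and taken from the posts above):

```python
# Sanity check of the battery figures quoted above: 10 pictures over
# ~25 s (midpoint of "20 to 30 seconds") cost ~1% of a full charge.
pictures = 10
percent_used = 1.0
seconds_elapsed = 25.0

pictures_per_charge = pictures * (100.0 / percent_used)
minutes_per_charge = (seconds_elapsed / percent_used) * 100.0 / 60.0

print(pictures_per_charge)   # ~1000 pictures per charge, as estimated
print(minutes_per_charge)    # ~42 minutes of continuous use
```

The linear extrapolation lands at about 42 minutes of continuous camera use, so "about an hour" is in the right ballpark, with the usual caveat that a 1% battery-meter step is a very coarse measurement.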
> > larwe: USB sensors are impractical beyond VGA. Need lots of memory
> > and bandwidth on chip to do so. The 3M pixel sensor of the VS740 has a
> > parallel data bus into the system DSP. Pictures are first captured in
> > SDRAM, then written to the SD card. However, in Linux, you can probably
> > memory-map the SDRAM for processing, but you need rooted Android. The
> > camera software library is available in C++ (unsupported), but a wrapper
> > in Java (supported) is available.
>
> Pointer to a URL?
Googling "android camera programming" will give you plenty to read.
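On the processing side, those camera APIs typically hand the application preview frames as YUV data, which is where the YUV->RGB conversion mentioned earlier in the thread comes in. A minimal per-pixel sketch using the full-range BT.601 coefficients (one common convention; a given phone's pipeline may use different coefficients or ranges):

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV pixel (0-255 each) to RGB.

    Chroma (u, v) is biased by 128; coefficients are the standard
    full-range BT.601 values used by, e.g., JPEG-style pipelines.
    """
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clip = lambda x: max(0, min(255, int(round(x))))
    return clip(r), clip(g), clip(b)

print(yuv_to_rgb(128, 128, 128))  # mid gray -> (128, 128, 128)
print(yuv_to_rgb(255, 128, 128))  # full luma, neutral chroma -> white
```

For monochrome work like barcode reading, note that the Y plane alone is already a grayscale image, so the chroma math can be skipped entirely, which is the point made above about exposure and white balance mattering less.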
Hi Lewin,

On 6/18/2011 4:03 AM, larwe wrote:
> On Jun 17, 2:17 pm, Don Y <nowh...@here.com> wrote:
>
>>> overlay window to perform YUV->RGB conversion. Exposure and white
>>> balance are typically adjusted on the fly by host-side software but
>>> this is certainly not universally true, especially for the USB
>>
>> But these aren't (as) important when processing monochromatic
>> video (e.g., barcode symbols).
>
> These sensors don't have a heck of a lot of dynamic range, so it's
> easy to wash out even a barcode. I think you really would be well
Dunno. I've never been particularly interested in the camera aspects of these devices before. I noticed that the cameras were "slow to start" and would typically take three "repaints" to get the brightness (?) sorted out in low light conditions (I would usually play with these in the evening hours so light wasn't overly abundant). I should take one outdoors and turn it on to see how that fares...
> served by buying a couple - for example, go to www.dealextreme.com or
> www.merimobiles.com and look at some of their shanzhai phones and
> tablets - and just experiment with them using the apps available in
> their app stores. None that I'm aware of support real-time
> decode - you acquire a picture, and then they analyze it.
I have several (older) smartphones that I've accumulated over the years. I used an HTC Wizard for a few months while traveling as a "portable email (wifi) client". And, I know I have an assortment of Treos (kept because they have "keyboards" -- as does the Wizard). But, since I don't *use* cell phones, I confess I've never played with anything other than the built-in applications...
> The augmented reality apps that work on a printed orientation glyph do
> almost exactly what you're talking about, but they don't need to read
> much information out of the shape, and they also incorporate other
> sensor input to make inferences about how the shape is moving around
> in the FOV.
I have the luxury of being able to bastardize the code for my own purposes -- though being able to read "real" QR codes might have some advantages (and DISadvantages).
>>> These sensors also typically require fairly long exposure times, so
>>
>> So, "motion video" is, of necessity, pretty poor quality?
>
> Varies :) For the low end, absolutely. At the extreme low end, even
> the resolution is very constrained.
>
>> have to deal with already. E.g., I don't have to worry about
>> compression because I'm not saving images. How the video is
>
> You don't care about it, but the APIs available to you may implicitly
> turn on those features. Getting fancy behind the scenes by poking at
> the power management features of the system will likely get the
> drivers/OS very confused.
We'll either work on bare iron or contract the development of our own drivers to give us the features we want/need. We're not trying to be a "typical phone/tablet" so doubt solutions intended for those sorts of markets would fit well.
>> need to be done *eventually*. Annoying to have a successful
>> prototype and then have to "start over" to come up with a
>> *real* product -- about which the prototype will have taught you
>
> (choosing words with some care here) With a similar though not
> identical problem to solve, a large corporation with which I'm
> familiar has decided to approach this problem entirely at the app
> layer, and qualify individual devices.
We want to be sure the devices are *not* usable for any purpose other than our own. And, we will have other I/O's that you just can't expect to get in a phone/tablet. So, rather than design yet another set of "peripherals" and figure out how to marry them to <whatever>, it makes more sense to roll all the development into one set of devices. This should also greatly simplify the maintenance of the system as well as the software.
