EmbeddedRelated.com
Forums

PDA as "X terminal"

Started by D Yuniskis October 11, 2010
On 13/10/2010 11:21, Didi wrote:
> On Oct 13, 11:53 am, D Yuniskis<not.going.to...@seen.com> wrote:
>> Hi Dimiter,
>>
>> Didi wrote:
>>> On Oct 12, 9:08 pm, D Yuniskis<not.going.to...@seen.com> wrote:
>>>> ......
>>>>> My netMCA product line relies practically only on VNC for user access and maintaining a live 800x600, 16 bpp frame takes about 10% of the power of a 400 MHz mpc5200b (not counting networking overhead, just frame change detection and compression). Then it
>>>> Ouch! I assume VNC sits *past* your "presentation layer" so your application can't provide hints to it to expedite the change detection?
>>
>>> VNC is an implementation of RFB (remote framebuffer protocol), so basically this is where it belongs. It transfers pixels. DPS will tell you when a window has been (or not) modified so the VNC server will know that; but somehow signalling which pixels got modified is obviously impractical, not to speak on how this will be done with all the overlapping live windows etc.
>>
>> For example, my curses implementation watches all accesses to the "pad" -- equivalent of frame buffer except it's just a 2D array of (character, attribute) tuples -- and notes where the changes are. Then, when told to "update the physical display", the code examines these "change flags" and decides what the optimal (e.g., least number of characters sent over the serial link) method of getting the physical display to coincide with the *desired* "virtual image".
>>
>> A similar thing can be done by noting where accesses to the frame buffer are made. Of course, the cost of doing this is considerably more than, for example, a 2000 - 8000 character "text display". But, you have to look at the costs of transport vs. computation (e.g., with a serial port, you can save *lots* of transport time by cutting all "redundant" character data out of the transmission.
>
> Hi Don,
>
> transport time can be saved and is saved, this is exactly what my vnc (rfb) server does compression for. Over a slow link, say, a 500 kbpS, redrawing a blank "shell" screen goes instantly; it takes pretty long if the image is some landscape photo. Over a 10 MbpS link you can work normally, although you will notice a fullscreen redraw; over a 100 MbpS link, you just don't feel this is a remote host (at 800x600 pixels, 16bpp; same on a 1280x1024, although I rarely use that over VNC).
>
> But keeping track on which pixels have changed in a multi-window multitask environment while drawing these is impractical. It will take a lot more resources to do so than to detect changes in the final framebuffer, not to speak about the mess it will add to the entire code. As it is, applications draw what they want to (spectra displays, oscilloscope displays etc. live things), do the bulk of the work (which is signal processing), and a fraction of the resources goes on VNC, 10% is not that much at all for what you get.
>
>> (you would have to look *into* the VNC implementation in order to let it notice the begin/end change information that you have been logging with each FB access)
>
> This is why the vnc server does these checks, the code is highly optimized, very efficient. Not much room for improvement there. Of course it takes decisions which parts to send, whether compression of a particular changed region is worth it (e.g. if you have a series of one changed and one unchanged pixel compression will be wasting rather than saving bandwidth).
>
>> Imagine your "draw_window" routine *marking* what parts of the display it has altered. Then, your VNC implementation *knows* which parts of the FB have been modified and which haven't.
>
> Well, the end result is indeed that. But someone has to keep track on which window has been modified, where on the "screen" this windows origin currently is, how much of it is visible (i.e. not covered by another window or outside of the "screen" frame); doing this all the time - while drawing each pixel or each line (only to add the overhead of clipping and rendering to the vnc server as well....) by the vnc is insane, to put it mildly :-).
> This why an OS - like DPS - does this for the applications, the VNC server being one of them - and I am sure you cannot point me to a more efficient working implementation which has all that functionality. In fact, I doubt you can locate one which has all that - applications included for the netMCA-2 - which will fit within a 2M flash - with some free space.
Actually, there /are/ VNC server setups which work in this "insane" way. I use VNC quite a bit on windows machines, since it is the only decent way to remotely access them (with Linux I can do most things with ssh - vastly more efficient).

The VNC server gets information from Windows when parts of the screen are re-drawn. It doesn't cover everything - accelerated stuff, 3D parts, DirectX, video, etc., are always going to be a mess when you are using remote displays. Windows also doesn't seem to be able to see changes to command prompt boxes (don't ask me why!), so these are polled. But the VNC server gets information about rectangular areas that may need to be resent. It then compares the new bitmap with its copy of the frame buffer to see if it has really changed, and to determine how to send it (raw data, lossless compressed data, lossy compressed, etc).

It can also go a step further. With some versions of VNC (I use tightvnc) you can install an extra screen driver (the "mirage" driver) that spies on the windows graphics GDI calls. With this in place, vnc can monitor the actual GDI calls ("draw a line here", etc.) and pass these over the link to the client. It can always fall back on a straight pixel copy if necessary, but use of "mirage" can greatly reduce the load on the VNC server, as well as the bandwidth.

I have no idea if there is a similar system for Linux VNC servers - as I say, I have had little need of VNC on Linux.
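(As a rough illustration of the update path described above - hint rectangle in, compare against a shadow copy of the framebuffer, resend only what really changed - here is a minimal C sketch. The names, the row-granularity comparison and the fixed frame size are illustrative assumptions, not code from tightvnc or any other server.)

    #include <stdint.h>
    #include <string.h>

    #define FB_W 800
    #define FB_H 600

    typedef struct { int x, y, w, h; } rect_t;

    /* live framebuffer and the server's shadow copy of what the client has;
     * both are assumed to be provided elsewhere in a real server */
    extern uint16_t framebuffer[FB_H][FB_W];
    static uint16_t shadow[FB_H][FB_W];

    /* placeholder: encode the rectangle (raw, RLE, zlib, ...) and queue it */
    extern void send_rect(const rect_t *r, const uint16_t fb[FB_H][FB_W]);

    /* Called with a hint rectangle (e.g. from the windowing layer or a GDI
     * hook).  Rows that really changed are re-sent and copied into the
     * shadow buffer; rows identical to the shadow copy are skipped. */
    void update_hinted_region(const rect_t *hint)
    {
        int first = -1, last = -1;

        for (int y = hint->y; y < hint->y + hint->h; y++) {
            size_t nbytes = (size_t)hint->w * sizeof(uint16_t);
            if (memcmp(&framebuffer[y][hint->x], &shadow[y][hint->x], nbytes) != 0) {
                if (first < 0)
                    first = y;
                last = y;
                memcpy(&shadow[y][hint->x], &framebuffer[y][hint->x], nbytes);
            }
        }

        if (first >= 0) {                      /* something really changed */
            rect_t dirty = { hint->x, first, hint->w, last - first + 1 };
            send_rect(&dirty, framebuffer);    /* encoding choice happens here */
        }
    }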
On Oct 13, 12:41 pm, David Brown <da...@westcontrol.removethisbit.com> wrote:
> Actually, there /are/ VNC server setups which work in this "insane" way. I use VNC quite a bit on windows machines, since it is the only decent way to remotely access them (with Linux I can do most things with ssh - vastly more efficient).
>
> [...]
>
> It can also go a step further. With some versions of VNC (I use tightvnc) you can install an extra screen driver (the "mirage" driver) that spies on the windows graphics GDI calls. With this in place, vnc can monitor the actual GDI calls ("draw a line here", etc.) and pass these over the link to the client. It can always fall back on a straight pixel copy if necessary, but use of "mirage" can greatly reduce the load on the VNC server, as well as the bandwidth.
>
> I have no idea if there is a similar system for Linux VNC servers - as I say, I have had little need of VNC on Linux.
All these internal tricks are done not to save time, but because access to the windows framebuffer is not an option for the applications. I remember the tightvnc people had at some point some post which said why it would not be easy at all to support vista IIRC (reason was no access to display memory). This is part of the reason why I said DPS was particularly vnc friendly.

Telling the client to draw primitives on the screen is not part of the RFB protocol, so this must be some other flavour of vnc, if it really works this way. I don't know a tightvnc which does that (but I have not recently checked the news there). BTW tightvnc is a bit sluggish to me (I use only vnc clients on windows), realVNC works better for me.

Dimiter

------------------------------------------------------
Dimiter Popoff               Transgalactic Instruments
http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/sets/72157600228621276/
Hi Dimiter,

Didi wrote:
> On Oct 13, 11:53 am, D Yuniskis <not.going.to...@seen.com> wrote:
> [...]
>
> transport time can be saved and is saved, this is exactly what my vnc (rfb) server does compression for. Over a slow link, say, a 500 kbpS, redrawing a blank "shell" screen goes instantly; it takes pretty long if the image is some landscape photo. Over a 10 MbpS link you can work normally, although you will notice a fullscreen redraw; over a 100 MbpS link, you just don't feel this is a remote host (at 800x600 pixels, 16bpp; same on a 1280x1024, although I rarely use that over VNC).
>
> But keeping track on which pixels have changed in a multi-window multitask environment while drawing these is impractical. It will take a lot more resources to do so than to detect changes in the final framebuffer, not to speak about the mess it will add to the entire code.
You have to look at what you end up doing "in" the display. I.e., the nature of changes that you make between "updates". (You also have to decide if *you* can control the update() or if you are at the mercy of an asynchronous update process, etc.)

You don't have to look at individual pixels. For example, in many text-based UI's, I present a layered window interface. I.e., a hierarchy of menus implemented as pop-ups layered atop each other.

So, I *know* that the changes to any particular screen image will be roughly rectangular in nature. (There may be some coincidental "non-changes" within this region but trying to optimize those is an insignificant saving.) As such, I just track begin/end changes for a rectangular region:

    Point begin;
    Point end;

    if (row < begin.row) {
        begin.row = row;
    } else if (row > end.row) {
        end.row = row;
    }

    if (column < begin.column) {
        begin.column = column;
    } else if (column > end.column) {
        end.column = column;
    }

Now, when update() is called, I just repaint the RECTANGULAR region between "begin" and "end".

If you are updating the display after *a* new window is created (or an old window destroyed), then your changes will be confined to a rectangular region somewhere (i.e., where the window is/was). So, you don't have to look at the individual positions *within* that region -- just update it in its entirety.

If your application is aware of the underlying display mechanisms, it can exploit that knowledge to give improved performance at little cost. For example, in my case, the application deliberately builds one window at a time before update(). Otherwise, if the application updated *two* windows in very different portions of the screen (consider a small window in the upper left corner and another small window in the lower right corner), then the simplified begin/end tracking would cause overly large portions of the display to be updated (in the two-small-window case, the entire screen would be redrawn even though just two small "corners" have changed -- and nothing *else*!).

E.g., I often have a clock on-screen tucked into a corner. When I display a new time, the begin/end markers reflect the start and end of that "clock" region. I *then* update the display *before* I draw any new windows -- because the windows will typically be "far away" from that "clock display" and I don't want to have to update all the unchanged portions of the display in that larger region.

In other cases, I have a fancier UI package that draws exploding/collapsing windows, etc. Using the begin/end simplification results in the entire contents of the exploding window being redrawn each time it is (rapidly!) resized. This really looks bad because the time required to repaint the window in its most condensed representation is considerably less than in its fully expanded representation. When using this UI, I track more details so that I can update smaller portions of the physical display at each stage (e.g., just the borders of the expanding window as they "move" outward).
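(Expanded into a compilable form, the begin/end tracking above might look like the following sketch. The pad dimensions, the Point type and the reset-on-update convention are illustrative assumptions, not the poster's actual curses code.)

    #define PAD_ROWS    25
    #define PAD_COLS    80

    typedef struct { int row; int column; } Point;

    /* Dirty region, tracked as an enclosing rectangle.  "Empty" is encoded
     * as begin > end so the first touch() initializes both corners. */
    static Point begin = { PAD_ROWS, PAD_COLS };
    static Point end   = { -1, -1 };

    /* Record that the cell at (row, column) was written. */
    void touch(int row, int column)
    {
        if (row < begin.row)       begin.row = row;
        if (row > end.row)         end.row = row;
        if (column < begin.column) begin.column = column;
        if (column > end.column)   end.column = column;
    }

    /* Hypothetical transport hook: repaint one cell on the physical display. */
    extern void repaint_cell(int row, int column);

    /* Repaint only the enclosing rectangle of everything touched since the
     * last update, then mark the region empty again. */
    void update(void)
    {
        if (end.row < begin.row)            /* nothing changed */
            return;

        for (int r = begin.row; r <= end.row; r++)
            for (int c = begin.column; c <= end.column; c++)
                repaint_cell(r, c);

        begin = (Point){ PAD_ROWS, PAD_COLS };
        end   = (Point){ -1, -1 };
    }

(The else-if form in the post works the same way once the region is non-empty; the two independent tests here just make the very first touch() after an update() behave sensibly with the "empty" encoding.)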
> As it is, applications draw what they want to (spectra displays, oscilloscope displays etc. live things), do the bulk of the work (which is signal processing), and a fraction of the resources goes on VNC, 10% is not that much at all for what you get.
<grin> You (and your customers) have to be the judge of that.
>> (you would have to look *into* the VNC implementation in order to let it notice the begin/end change information that you have been logging with each FB access)
>
> This is why the vnc server does these checks, the code is highly optimized, very efficient. Not much room for improvement there. Of course it takes decisions which parts to send, whether compression of a particular changed region is worth it (e.g. if you have a series of one changed and one unchanged pixel compression will be wasting rather than saving bandwidth).
Yes. This is analogous to the begin/end tracking I mention above. The cost of the test is greatly reduced based on the *assumption* that there will be enough changes within the begin/end region that it isn't worth trying to further optimize *within* that region.
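(As an illustration of the "is compression worth it for this region?" decision quoted above, here is a crude C sketch. The run-length cost model and the byte counts are assumptions for illustration only, not how Dimiter's server or any particular VNC encoder actually decides.)

    #include <stddef.h>
    #include <stdint.h>

    /* Count the value transitions in a changed span of 16 bpp pixels.
     * Each run costs roughly a (count, value) pair, so once runs get
     * short - e.g. alternating changed/unchanged pixels - raw data is
     * cheaper than run-length style compression. */
    int compression_worthwhile(const uint16_t *pixels, size_t n)
    {
        size_t runs = 1;

        for (size_t i = 1; i < n; i++)
            if (pixels[i] != pixels[i - 1])
                runs++;

        /* RLE output ~ runs * (2 bytes count + 2 bytes value);
         * raw output ~ n * 2 bytes.  Compress only if it actually shrinks. */
        return (runs * 4) < (n * 2);
    }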
>> Imagine your "draw_window" routine *marking* what parts of the display it has altered. Then, your VNC implementation *knows* which parts of the FB have been modified and which haven't.
>
> Well, the end result is indeed that. But someone has to keep track on which window has been modified, where on the "screen" this windows origin currently is, how much of it is visible (i.e. not covered by another window or outside of the "screen" frame); doing this all the time - while drawing each pixel or each line (only to add the overhead of clipping and rendering to the vnc server as well....) by the vnc is insane, to put it mildly :-).
No, you don't look at individual pixels. You look at "drawing objects" (lines, circles, regions, etc.) and track *their* "extents". Then, find the smallest enclosing object (depends on what types of objects you can transfer in your protocol -- my curses only deals with one and two dimensional regions) and track *that* as the description of what must be "update()-ed".
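(Extending the same sketch to graphic primitives: each drawing object reports an axis-aligned extent, and the extents are merged into the running dirty rectangle that update() will eventually push out. Again, the names and types are illustrative assumptions.)

    typedef struct { int x0, y0, x1, y1; } extent_t;   /* inclusive bounding box */

    /* Extent of a line from (xa,ya) to (xb,yb). */
    static extent_t line_extent(int xa, int ya, int xb, int yb)
    {
        extent_t e;
        e.x0 = (xa < xb) ? xa : xb;  e.x1 = (xa < xb) ? xb : xa;
        e.y0 = (ya < yb) ? ya : yb;  e.y1 = (ya < yb) ? yb : ya;
        return e;
    }

    /* Extent of a circle with center (cx,cy) and radius r. */
    static extent_t circle_extent(int cx, int cy, int r)
    {
        extent_t e = { cx - r, cy - r, cx + r, cy + r };
        return e;
    }

    /* Merge an object's extent into the running dirty rectangle, i.e. the
     * smallest enclosing region that must be "update()-ed". */
    static void merge_extent(extent_t *dirty, extent_t e)
    {
        if (e.x0 < dirty->x0) dirty->x0 = e.x0;
        if (e.y0 < dirty->y0) dirty->y0 = e.y0;
        if (e.x1 > dirty->x1) dirty->x1 = e.x1;
        if (e.y1 > dirty->y1) dirty->y1 = e.y1;
    }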
> This why an OS - like DPS - does this for the applications, the VNC server being one of them - and I am sure you cannot point me to a more efficient working implementation which has all that functionality. In fact, I doubt you can locate one which has all that - applications included for the netMCA-2 - which will fit within a 2M flash - with some free space.
On Oct 13, 1:31 pm, D Yuniskis <not.going.to...@seen.com> wrote:
> ....
> You have to look at what you end up doing "in" the display. I.e., the nature of changes that you make between "updates". (You also have to decide if *you* can control the update() or if you are at the mercy of an asynchronous update process, etc.)
In a multitask multi-window OS there is no "you"; different tasks can do different things in various windows. Of course DPS tasks can control window update (draw for a while and signal the change afterwards, this with a timeout).
> You don't have to look at individual pixels. For example, in many text-based UI's, I present a layered window interface. I.e., a hierarchy of menus implemented as pop-ups layered atop each other.
>
> So, I *know* that the changes to any particular screen image will be roughly rectangular in nature.
But how do you know there is no other change done by another application? Or do all applications draw into the framebuffer _and_ forward their doings to the vnc server? That would at least double the application overhead, so at the end of the day you will be less efficient.
> If you are updating the display after *a* new window is created (or an old window destroyed), then your changes will be confined to a rectangular region somewhere (i.e., where the window is/was).
Oh no. The window can be only partially visible, parts of it covered by other windows' edges etc., so you don't know that. Been there, considered all that :-).
> > Well, the end result is indeed that. But someone has to keep track on which window has been modified, where on the "screen" this windows origin currently is, how much of it is visible (i.e. not covered by another window or outside of the "screen" frame); doing this all the time - while drawing each pixel or each line (only to add the overhead of clipping and rendering to the vnc server as well....) by the vnc is insane, to put it mildly :-).
>
> No, you don't look at individual pixels. You look at "drawing objects" (lines, circles, regions, etc.) and track *their* "extents".
So what happens if the application draws a line in one window (it clips it to its limits and draws it into its offscreen buffer) and the line is only half visible because of a covering window? You clip the line to all possible windows and do that with every line? Give me a break :-).

Let's put it in numbers: an 800x600, 16 bpp buffer is roughly 1 megabyte. On the system I run DPS currently, DDRAM is about 1 Gbyte/second fast (32 bit, 266 MHz data rate). Somewhat less in reality but close enough for our purposes. Comparing 1M frames (the live frame against the reference copy) 16 times per second makes 32 megabytes transferred per second, or 3% of the memory bandwidth; I'll put that against any multitask multiwindow OS running a VNC server, just point me to one to compare to. Assuming your forwarding method doubles the application graphics overhead, it will take you to reach 1.6% load for graphics to be beaten. IOW, having a _constant_, reasonably low overhead for change detection can only be beaten on very simple systems (no multi-window etc.).

Finally, I was wrong on my 10%. In fact not me, my tt command (which I wrote :-) was wrong. It lists real % overhead only when there is some significant system load, otherwise it includes idling, task switch etc. I now tried it again - ran the "hog" task, which just does "bra *" - and the results can be seen at http://tgi-sci.com/misc/vnc.gif . Clearly my tt hack does not work all that well, percentages add up only to 99 in this case (I have seen 100 and I have seen 98 :-) ). But for the sake of my own usage, to estimate stuff etc. it is OK. The tt task itself takes up that much time because it scrolls the entire window up (graphically; in fact dps does it for it when it sees the LF at the bottom line).

Dimiter

------------------------------------------------------
Dimiter Popoff               Transgalactic Instruments
http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/sets/72157600228621276/
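(For reference, the back-of-the-envelope calculation above written out as a tiny C program. The figures come straight from the post, which rounds the frame up to 1 megabyte to get 32 MB/s; nothing here is measured.)

    #include <stdio.h>

    int main(void)
    {
        const double frame_bytes    = 800.0 * 600.0 * 2.0;  /* 16 bpp => 2 bytes/pixel, ~0.96 MB */
        const double compares_per_s = 16.0;                 /* full-frame change scans per second */
        const double bytes_per_cmp  = 2.0 * frame_bytes;    /* read live frame + reference copy */
        const double mem_bw         = 1.0e9;                /* ~1 Gbyte/s DDRAM (32 bit, 266 MHz) */

        double traffic = compares_per_s * bytes_per_cmp;    /* bytes/second spent on detection */
        printf("change detection traffic: %.1f MB/s (%.1f%% of memory bandwidth)\n",
               traffic / 1e6, 100.0 * traffic / mem_bw);
        return 0;
    }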
On Mon, 11 Oct 2010 19:18:20 -0700, D Yuniskis wrote:

> Ah, but *I* write the applications. We're not talking about a general purpose appliance -- rather, a "remote display for product 'foo'". No need to support multimedia, font service, etc.
From what you've said, I would still be inclined to go the VNC/RDP route, although maybe that's just because I don't know enough about the specifics.

The application server is probably running off mains, while the PDA is running off batteries. If the PDA is mostly idle, to me that's "conserving battery life" rather than "wasting processing capacity".

Also, server-side processing power is a commodity; requiring a given level of PDA-side processing power may reduce the choices and/or increase the price. And graphics processing power is a fairly cheap commodity nowadays; I'd guess that an 8400GS with 256MiB of RAM could do the rendering for 100 clients @800x600 both faster and more cheaply than upping the PDA spec to a version with accelerated graphics.
On 2010-10-12, D Yuniskis <not.going.to.be@seen.com> wrote:
> Hi Robert,
>
> Robert Swindells wrote:
>> On Mon, 11 Oct 2010 22:48:00 -0700, D Yuniskis wrote:
>>
>>> Ulf Samuelsson wrote:
>>>> Have a look at the "www.openembedded.org" project. This will run Linux and X-Windows on a lot of different targets, including some Compaq PDAs.
>>> Hmmm... I didn't see any devices that I recognized. I *think* I could go the NetBSD route if I wanted to wipe the PDAs. I was hoping to find something that would coexist, initially, with the native OS so I don't have to do major surgery just to evaluate the technology...
>>
>> Try <http://www.handhelds.org> as well.
>
> Thanks, I'll poke around there...
>
>> Which devices would you want to use ?
>
> I've thought of using iPhones *if* I could permanently disable (i.e., destroy) the portion of the radio that makes it usable as a phone (irrecoverably).
That's called an "iPod touch".

--
Grant Edwards               grant.b.edwards at gmail.com

Yow! BARRY ... That was the most HEART-WARMING rendition of "I DID IT MY WAY" I've ever heard!!
D Yuniskis <not.going.to.be@seen.com> wrote:
> A browser interface is out because the browser has functionality that the application can't disable (e.g., history, font selection, etc.).
Just a data point, but a web browser like NetSurf has a framebuffer mode that's pretty much just the page (I think you can switch off the navigation bar). For example:

http://www.netsurf-browser.org/about/screenshots/wip/fb2.png

NetSurf has no Javascript, which makes it really fast, and it's designed for slow hardware (while it'll run on a 30MHz ARM6, its original target was a 200MHz StrongARM). But it depends what you want - if you need JS then other browsers might support something similar.

Theo
David Brown <david@westcontrol.removethisbit.com> wrote:
> It can also go a step further. With some versions of VNC (I use tightvnc) you can install an extra screen driver (the "mirage" driver) that spies on the windows graphics GDI calls. With this in place, vnc can monitor the actual GDI calls ("draw a line here", etc.) and pass these over the link to the client. It can always fall back on a straight pixel copy if necessary, but use of "mirage" can greatly reduce the load on the VNC server, as well as the bandwidth.
>
> I have no idea if there is a similar system for Linux VNC servers - as I say, I have had little need of VNC on Linux.
This is essentially how Linux VNC works... the VNC server is a modified version of the X window system (i.e. the thing that applications talk to - similar to XFree86, x.org), but the X11 code is modified so that it uses VNC as a display interface rather than a driver for some graphics card. By modifying the code it has full access to all graphics events.

Theo
Hi Grant (and Steve, for his similar comment)

Grant Edwards wrote:
> On 2010-10-12, D Yuniskis <not.going.to.be@seen.com> wrote:
>>> Which devices would you want to use ?
>> I've thought of using iPhones *if* I could permanently disable (i.e., destroy) the portion of the radio that makes it usable as a phone (irrecoverably).
>
> That's called an "iPod touch".
OK: I've thought of using iPod touches *if* I could permanently disable (i.e., destroy) the portion of the device that makes it usable as a media player (irrecoverably). <grin>

The goal is to make this very noticeably part of "my" device and not usable in its original function. This is to discourage them from "growing legs" ("Cool! I'll just slip this in my pocket as I depart tonight and load my songs on it..."). People would find "high replacement costs" a definite downside (especially as they aren't inexpensive!).

Imagine if, for example, you went to a museum and the "self guided tour" was built on devices like these as a platform. You'd be asked to put up a $200 deposit to ensure their return at the end of the tour -- else too many "patrons" would *deliberately* take them home.
Hi Theo,

Theo Markettos wrote:
> D Yuniskis <not.going.to.be@seen.com> wrote:
>> A browser interface is out because the browser has functionality that the application can't disable (e.g., history, font selection, etc.).
>
> Just a data point, but a web browser like NetSurf has a framebuffer mode that's pretty much just the page (I think you can switch off the navigation bar). For example:
> http://www.netsurf-browser.org/about/screenshots/wip/fb2.png
Can I configure it and the PDA so that it *locks* into this mode? I.e., nothing short of reflashing the device (or something with a similarly high bar) can turn the browser *off* (and, presumably, the browser *never* crashes).
> NetSurf has no Javascript, which makes it really fast, and it's designed for slow hardware (while it'll run on a 30MHz ARM6, its original target was a 200MHz StrongARM). But it depends what you want - if you need JS then other browsers might support something similar.
JavaScript would only be necessary if I couldn't have some other processes running alongside the browser (this would be a kludge). I am more than happy just pushing "drawing primitives" at the handheld and waiting for input events. I.e., just like I could do with an X server running on that device.

I want the handheld to behave AS IF it had previously been part of this product and someone *cut* it out and reconnected it to the device with a long, invisible cord. (I.e., no, it doesn't have an address book, or a media player, or "Solitaire", or... it's *just* the display off of Product X.)

Thanks, I'll chase it down...