EmbeddedRelated.com
Forums

PDA as "X terminal"

Started by D Yuniskis October 11, 2010
Hi Robert,

robertwessel2@yahoo.com wrote:
> On Oct 12, 11:23 pm, D Yuniskis <not.going.to...@seen.com> wrote:
>> Most tablets are a bit large.  I'm trying to hit a "sweet spot"
>> that's a bit bigger than most phones but not as big as a tablet.
>> I.e., you have to carry this around with you *while* you are working
>> (including activities that *don't* require you to interact with
>> the device)
>
> Archos (obvious URL) has a variety of tablet sizes running Android.
> Their 43 Internet Tablet, for example, has a 4.3 inch screen, WiFi and
> Bluetooth, along with most of the other stuff you'd expect.  No phone
A *quick* (I'm otherwise preoccupied :< ) look at the specs shows *more* than I need :> A notable omission is any means of expansion (microSD slot is effectively useless for "bolt on" hardware -- CF would have been ideal; SD a distant runner up). I couldn't find anything suggestive of pricing (other than some other refurbished models) but I can chase that down a bit later...
> (although any device of this class is ultimately capable of running IP
Exactly. If *I* want to push audio through the interface, that's *my* prerogative -- not the *user's* ("I think I'll just switch on the phone and call home...")
> phone software).  They have larger and smaller devices (2.8, 3.2, 5.0
> and 7.0 inch sizes would seem to cover the range you're looking for).
> Some of their tablets are unlocked (the 5, for example), and other
> Linux distributions are available for them.
Ideally, I'd prefer a BSD platform (decades of experience there). But, I can adapt, if need be.
> While the Android tablet market is just getting rolling (heck, the
> whole tablet market is just getting rolling), I think you're going to
> see many devices in a variety of sizes.
Thanks!
On 2010-10-13, D Yuniskis <not.going.to.be@seen.com> wrote:
> Hi Grant (and Steve, for his similar comment)
>
> Grant Edwards wrote:
>> On 2010-10-12, D Yuniskis <not.going.to.be@seen.com> wrote:
>>>> Which devices would you want to use ?
>>> I've thought of using iPhones *if* I could permanently disable
>>> (i.e., destroy) the portion of the radio that makes it usable
>>> as a phone (irrecoverably).
>>
>> That's called an "iPod touch".
>
> OK:
>
> I've thought of using iPod touch's *if* I could permanently disable
> (i.e., destroy) the portion of the device that makes it usable
> as a media player (irrecoverably).
>
> <grin>
OK, that's a bit more difficult.  I guess I'd look for a PDA that's
well supported by Linux/OPIE/Qtopia.  The problem with that is by the
time they're "well supported" they're usually no longer in production.

  http://opie.handhelds.org/cgi-bin/moin.cgi/Hardware

There are always some interesting OEM devices, but usually you've got
to buy in volume:

  http://www.linuxfordevices.com/c/a/Linux-For-Devices-Articles/Linuxbased-MIDs-UMPCs-and-Tablets/

--
Grant Edwards               grant.b.edwards        Yow! ... I want a COLOR
                            at                     T.V. and a VIBRATING
                            gmail.com              BED!!!
Hi Grant,

Grant Edwards wrote:
> On 2010-10-13, D Yuniskis <not.going.to.be@seen.com> wrote:
>> I've thought of using iPod touch's *if* I could permanently disable
>> (i.e., destroy) the portion of the device that makes it usable
>> as a media player (irrecoverably).
>>
>> <grin>
>
> OK, that's a bit more difficult.  I guess I'd look for a PDA that's
> well supported by Linux/OPIE/Qtopia.  The problem with that is by the
> time they're "well supported" they're usually no longer in production.
>
>   http://opie.handhelds.org/cgi-bin/moin.cgi/Hardware
I'm not worried about the production issue. Right now, I am researching the issues that come with making an interface portable (for certain classes of applications and users). E.g., you don't think about someone walking off with a 20" touch monitor -- it doesn't happen very often. :>  Ownership (usership?) of such a device is different from highly portable devices which can be more readily interchanged. But, there are myriad other issues that are consequential to the portable/smaller implementation.

Graphics become more significant (reading small text requires a disproportionately greater amount of the user's attention which can conflict with other activities -- the "texting while driving" syndrome). On a 20" interface, a user can *casually* read legends as they are physically large. Scale that same interface down and the legends become illegible -- or, occupy a larger portion of the available display.

In addition to visual consequences, smaller means more precision required in the user's "digit-al" interaction with the touch panel. Controls can't be scaled down unless you want to force the user to use a stylus (which then increases the cognitive load on the user). Smaller controls make it harder for people with motion disorders (e.g., ET) to interact with the device.

Weight also becomes an issue. A heavy device becomes burdensome to carry all day. A lighter device often implies reduced battery capacity, features, etc. There are motion disorder consequences as well (a device with a certain amount of "heft" dampens some of the effects of ET).

Small devices are more readily used outdoors. So, utility in sunlight (does the screen get washed out? do you have to keep the backlight set high to compensate?) has to be evaluated.

Portable devices make location aware computing more of a challenge. A large device is typically sited in a fixed location. That location can be known to the application and its behavior remains static wrt that parameter.
OTOH, if a device is *mobile*, the application needs to change its behavior *dynamically* (instead of "at boot").

A portable device *as* a credential raises security issues; someone *can* walk off with it (much easier than with a larger device acting as a similar credential). So you have to think about how you address these possible vulnerabilities, etc.

[deep breath]

What I want to do, now, is come up with something that I can deploy and, from which, gather usage metrics to better understand these issues, their consequences and other things that come up. With "typical" users (not users who are overly friendly with the device/system).

As a first step to that, I wanted to port existing applications to something easily (sacrificing performance, etc.) just to get a *personal* feel for the issues that are likely to arise. I want "real users" to see a beta version of a device, not an alpha. I don't want them familiar at all with the alpha version as that would influence their acceptance/willingness to use the beta version (and, The Market's ultimate attitude towards the *production* version).

"Make TWO to throw away..."
> There are always some interesting OEM devices, but usually you've got > to buy in volume: > http://www.linuxfordevices.com/c/a/Linux-For-Devices-Articles/Linuxbased-MIDs-UMPCs-and-Tablets/
Yes (see above). That's the easy part! Then, you *know* what you want and just have to find someone who can hit your price/feature point. *Getting* to that point is the hard part! :>
On Wed, 13 Oct 2010, D Yuniskis wrote:

> Ideally, I'd prefer a BSD platform (decades of experience there).
> But, I can adapt, if need be.
>
Who wouldn't?  OpenBSD runs on Palm these days, and for an interface, one
might look at Evas from the enlightenment project (now in Beta) ...  It
supports the Xscale PXA 2X0 Arm based palms, including the Tx and the
Tungsten ... you could do worse than having what you asked for 8-).

Cheers,
Rob Sciuk
On 13/10/2010 12:07, Didi wrote:
> On Oct 13, 12:41 pm, David Brown<da...@westcontrol.removethisbit.com> wrote:
>> On 13/10/2010 11:21, Didi wrote:
>>> On Oct 13, 11:53 am, D Yuniskis<not.going.to...@seen.com> wrote:
>>>> Hi Dimiter,
>>>>
>>>> Didi wrote:
>>>>> On Oct 12, 9:08 pm, D Yuniskis<not.going.to...@seen.com> wrote:
>>>>>> ......
>>>>>>> My netMCA product line relies practically only on VNC for user access and maintaining a live 800x600, 16 bpp frame takes about 10% of the power of a 400 MHz mpc5200b (not counting networking overhead, just frame change detection and compression). Then it
>>>>>> Ouch!  I assume VNC sits *past* your "presentation layer" so your application can't provide hints to it to expedite the change detection?
>>>>>
>>>>> VNC is an implementation of RFB (remote framebuffer protocol), so basically this is where it belongs. It transfers pixels. DPS will tell you when a window has been (or not) modified so the VNC server will know that; but somehow signalling which pixels got modified is obviously impractical, not to speak on how this will be done with all the overlapping live windows etc.
>>>>
>>>> For example, my curses implementation watches all accesses to the "pad" -- equivalent of frame buffer except it's just a 2D array of (character, attribute) tuples -- and notes where the changes are.  Then, when told to "update the physical display", the code examines these "change flags" and decides what the optimal (e.g., least number of characters sent over the serial link) method of getting the physical display to coincide with the *desired* "virtual image".
>>>>
>>>> A similar thing can be done by noting where accesses to the frame buffer are made.  Of course, the cost of doing this is considerably more than, for example, a 2000 - 8000 character "text display".  But, you have to look at the costs of transport vs. computation (e.g., with a serial port, you can save *lots* of transport time by cutting all "redundant" character data out of the transmission.
>>>
>>> Hi Don,
>>>
>>> transport time can be saved and is saved, this is exactly what my vnc (rfb) server does compression for. Over a slow link, say, a 500 kbpS, redrawing a blank "shell" screen goes instantly; it takes pretty long if the image is some landscape photo. Over a 10 MbpS link you can work normally, although you will notice a fullscreen redraw; over a 100 MbpS link, you just don't feel this is a remote host (at 800x600 pixels, 16bpp; same on a 1280x1024, although I rarely use that over VNC).
>>>
>>> But keeping track on which pixels have changed in a multi-window multitask environment while drawing these is impractical. It will take a lot more resources to do so than to detect changes in the final framebuffer, not to speak about the mess it will add to the entire code. As it is, applications draw what they want to (spectra displays, oscilloscope displays etc. live things), do the bulk of the work (which is signal processing), and a fraction of the resources goes on VNC, 10% is not that much at all for what you get.
>>>
>>>> (you would have to look *into* the VNC implementation in order to let it notice the begin/end change information that you have been logging with each FB access)
>>>
>>> This is why the vnc server does these checks, the code is highly optimized, very efficient. Not much room for improvement there. Of course it takes decisions which parts to send, whether compression of a particular changed region is worth it (e.g. if you have a series of one changed and one unchanged pixel compression will be wasting rather than saving bandwidth).
>>>
>>>> Imagine your "draw_window" routine *marking* what parts of the display it has altered.  Then, your VNC implementation *knows* which parts of the FB have been modified and which haven't.
>>>
>>> Well, the end result is indeed that. But someone has to keep track on which window has been modified, where on the "screen" this windows origin currently is, how much of it is visible (i.e. not covered by another window or outside of the "screen" frame); doing this all the time - while drawing each pixel or each line (only to add the overhead of clipping and rendering to the vnc server as well....) by the vnc is insane, to put it mildly :-).
>>> This why an OS - like DPS - does this for the applications, the VNC server being one of them - and I am sure you cannot point me to a more efficient working implementation which has all that functionality. In fact, I doubt you can locate one which has all that - applications included for the netMCA-2 - which will fit within a 2M flash - with some free space.
>>
>> Actually, there /are/ VNC server setups which work in this "insane" way.  I use VNC quite a bit on windows machines, since it is the only decent way to remotely access them (with Linux I can do most things with ssh - vastly more efficient).
>>
>> The VNC server gets information from Windows when parts of the screen are re-drawn.  It doesn't cover everything - accelerated stuff, 3D parts, DirectX, video, etc., are always going to be a mess when you are using remote displays.  Windows also doesn't seem to be able to see changes to command prompt boxes (don't ask me why!), so these are polled.  But the VNC server gets information about rectangular areas that may need to be resent.  It then compares the new bitmap with its copy of the frame buffer to see if it has really changed, and to determine how to send it (raw data, lossless compressed data, lossy compressed, etc).
>>
>> It can also go a step further.  With some versions of VNC (I use tightvnc) you can install an extra screen driver (the "mirage" driver) that spies on the windows graphics GDI calls.  With this in place, vnc can monitor the actual GDI calls ("draw a line here", etc.) and pass these over the link to the client.  It can always fall back on a straight pixel copy if necessary, but use of "mirage" can greatly reduce the load on the VNC server, as well as the bandwidth.
>>
>> I have no idea if there is a similar system for Linux VNC servers - as I say, I have had little need of VNC on Linux.
>
> All these internal tricks are done not to save time, but because access to the windows framebuffer is not an option for the applications. I remember the tightvnc people had at some point some post which said why it would not be easy at all to support vista IIRC (reason was no access to display memory).
>
> This is part of the reason why I said DPS was particularly vnc friendly.
It's certainly true that VNC is more efficient with an OS (or graphics layer) designed to work with it. vncserver on Linux is more efficient than windows, because it works directly with the X server rather than having to use backdoor hacks as is needed on Windows. And I'm sure it is even more efficient with DPS.
> Telling the client to draw primitives on the screen is not part of
> the RFB protocol, so this must be some other flavour of vnc, if it
> really works this way. I don't know a tightvnc which does that
> (but I have not recently checked the news there).
>
It's always possible that I've misunderstood something here. I certainly find that using the mirage driver reduces the load on the server, and I /believe/ that it reduces the bandwidth. But I haven't done any benchmarking or serious comparative testing, and I could certainly be wrong as to /why/ it works faster.
> BTW tightvnc is a bit sluggish to me (I use only vnc clients on
> windows), realVNC works better for me.
>
I've found that the different flavours of vnc have leapfrogged each other with regard to performance and features. The great thing about them being open source is that you have multiple implementations that add features and improvements that they think are important for their users. And if one of the other implementations has something they want, they can merge that in with their own code while giving out the code for their own additions.
On Oct 14, 1:07 pm, David Brown <da...@westcontrol.removethisbit.com>
wrote:
> ....
> I've found that the different flavours of vnc have leapfrogged each
> other with regard to performance and features.  The great thing about
> them being open source is that you have multiple implementations that
> add features and improvements that they think are important for their
> users.  And if one of the other implementations has something they want,
> they can merge that in with their own code while giving out the code for
> their own additions.
What I found particularly great with VNC is its wide popularity. It opened the door for me to put the netMCA series where I can deliver DPS based products accessible over the net from "any" PC.

Over the (many...) years I have put a major effort in staying PC/windows independent so having a way to use their display, keyboard and mouse to access DPS is really a great asset to me. Having 100 MbpS widely available has also been important, of course.

Dimiter

------------------------------------------------------
Dimiter Popoff               Transgalactic Instruments
http://www.tgi-sci.com
------------------------------------------------------
http://www.flickr.com/photos/didi_tgi/sets/72157600228621276/
Hi Rob,

Spam@ControlQ.com wrote:
> On Wed, 13 Oct 2010, D Yuniskis wrote:
>
>> Ideally, I'd prefer a BSD platform (decades of experience there).
>> But, I can adapt, if need be.
>
> Who wouldn't?
<grin> There appear to be *lots* of folks in that "wouldn't" camp! :-/
> OpenBSD runs on Palm these days, and for an interface, one might look at
> Evas from the enlightenment project (now in Beta) ...  It supports the
> Xscale PXA 2X0 Arm based palms, including the Tx and the Tungsten ...
> you could do worse than having what you asked for 8-).
I've been looking at a bunch of devices that I've rounded up recently as well as over-the-years. I ruled out most of the Palms (that I have) as too light/flimsy. Some are too gimmicky (e.g., the "sliding motion" -- what else could you call it? -- of the T3). I'm also not fond of the reserved area on the Palm screens.

Personally, the HP 3900s seem the best candidates so far. They feel "substantial" (plus the expansion sleeves are a win). But, I think they would only "fit" use by a "man" (using that as a stereotypical term) -- too meaty for a smaller framed man or "woman" to have to lug around all day long.

If I had to make a decision *today*, I'd opt for the HP hx4700 based on size/weight/features/etc. It seems more manageable in terms of size/weight. The CF slot is built-in instead of being provided by the sleeve. Battery seems ample (I would have to see how it fares after long term use and repeated partial charges, etc.). It's a 600MHz PXA270 so has more than enough balls to "draw displays". (Accessing the stylus is a bit of a pain -- but, that will be removed so it's not germane to my needs.)

I'll have a look at some of the tablet products and see if any will give me the expansion capability in the right size range. If OBSD is supporting the PXA's, I suspect NBSD does, as well? I'll poke around both sites and see what turns up. Thanks!
Hi Dimiter,

Didi wrote:
> On Oct 13, 1:31 pm, D Yuniskis <not.going.to...@seen.com> wrote:
>> ....
>> You have to look at what you end up doing "in" the
>> display.  I.e., the nature of changes that you make
>> between "updates".  (you also have to decide if *you*
>> can control the update() or if you are at the mercy of
>> an asynchronous update process, etc.
>
> In a multitask multi-window OS there is no "you", different
> tasks can do different things in various windows. Of course
> DPS tasks can control window update (draw for a while and
> signal the change afterwards, this with a timeout).
Sorry, I meant "you" as in "The Application" (regardless of how many threads) vs. an "asynchronous" process that grabs the contents of the "virtual frame buffer" and sends it out over the wire.

In other words, if your application can scribble on the frame buffer and *then* inform/invoke something that moves that out to the physical display (via VNC or whatever), then you can arrange for all of the parts of the virtual display to be updated *before* anything (RFB) tries to pass them along to the outside world. If, for example, you *know* you are going to construct an empty window and *then* paint some "controls"/contents into it, it would be much more efficient for you to have finished filling in the window *before* updating the physical display.
>> You don't have to look at individual pixels.  For example,
>> in many text-based UI's, I present a layered window interface.
>> I.e., a hierarchy of menus implemented as pop-ups layered
>> atop each other.
>>
>> So, I *know* that the changes to any particular screen
>> image will be roughly rectangular in nature.
>
> But how do you know there is no other change done by another
> application. Or do all applications draw into the framebuffer
> _and_ forward their doings to the vnc server? That would at
> least double the application overhead so at the end of the day
> you will be less efficient.
Let me explain how (my) curses implementation works and you can see the parallel to a pixel-based frame buffer.

The "virtual screen" is a two dimensional array of "cells". Each cell is a (character, attributes) tuple -- attributes being things like color, bold, dim, blink, underline, invisible, etc. Forget how this is represented in memory. Just pretend it's an array of "characters".

ANYONE (task/thread) who wants to send something to the display uses the curses API to do so. I.e., no one writes directly into that array. So, there are calls to let you write *a* character (with attributes) at a particular place (row, column), to erase a portion of the screen, etc. We're dealing with text, typically. So, you often pass strings onto the screen. I.e., an array of characters placed at a particular position "on the screen".

[note that the window system resides ON TOP of curses. So, your task typically talks to the windowing API which, in turn, talks to the curses API, etc. At least, that is how it is *logically* structured -- the implementation blurs these layers to increase efficiency]

Anytime curses -- acting at the request of some task -- writes into the virtual display, the manner of its actions is known (by the developer). For example, the hook that lets you write a string into the display writes from left to right (d'uh!). As such, it knows that the leftmost part of the display that will be altered is that of the *first* character that will be written. The SECOND character will never be to the left of the first one (this is obvious :>). Nor will any of the subsequent characters. So, the curses routine can look at the "begin" column (as in "beginning of changed area") variable and compare it to the column number in which the first character will be written.
If the column being updated is to the left of the current "begin" value, the begin value can be updated to reflect *this* column as the leftmost that has been altered -- all characters which follow it (in this "write string" function invocation instance) will be to the right of this point -- so, begin need never be examined again (in this function instance). Likewise, the *last* character position written in this string is the only one that must be examined and compared to the "end" variable as it will be the RIGHTMOST change made to the display in this function call.

Note that the first character written might have been to the RIGHT of the "end". Or, the last character might have been to the LEFT of the "begin". Regardless, the two tests I described will accurately cover all possibilities. ALL OF THE CHARACTERS BETWEEN THE FIRST AND LAST ARE NOT CONCERNED WITH begin AND end! (i.e., there is no added cost for tracking them).

[I am just describing one version of the "change" algorithm; and, approaching it incrementally. Not to insult your intelligence but, rather, to develop the argument, logically and for the benefit of others reading over your shoulder]

Character displays (TTYs) are line-oriented. There aren't usually primitives (ANSI 3.64) that let you deal with "regions". You can position the "display cursor" and then overwrite/insert/etc. WITHIN A LINE, typically. As such, the changes on line 1 aren't related to the changes on line 2 (or 6) -- in the TTY's mind. Of course, if you are drawing (text) windows on the screen, then the contents of lines 1 and 2 may have a very definite relationship to each other (e.g., if you are drawing a box around a region of text then the position of the box's "side" coincides in lines 1 *and* 2...)!

So, you can track a begin/end for each row of the virtual display.
When you eventually want to update the PHYSICAL display to coincide with the virtual display, you can look at the begin/end values for each row, in turn, and effectively transmit the "set cursor position to begin" command sequence (specific to the particular type of TTY) followed by the characters from virtual display columns "begin" through "end".

[in reality, you look at the cost of this operation vs. other alternatives. E.g., if begin is '2' and end is '79', it may be more efficient to send the entire line than to incur the cost of positioning the cursor, first]

Instead of tracking begin/end for each row in the display, if you *know* you have something like a windowing API sitting on top of this, you can opt, instead, to track a single begin/end for the entire display -- where begin and end are (row,column) tuples. This exploits *your* knowledge of the fact that you will typically be invoking the curses API with calls like:

    write_string(row,   column, blah);
    write_string(row+1, column, foo);
    write_string(row+2, column, baz);

as the window you are writing into is located at (row,column). Then, your update() routine just does:

    for (row = begin.row; row <= end.row; row++) {
        set_cursor_position(row, begin.column);
        for (col = begin.column; col <= end.column; col++) {
            transmit_cell(virtual_display[row][col]);
        }
    }
    display = CLEAN;

You can also opt for an implementation in which begin marks the start of the change and *everything* (sequentially) between there and "end" (regardless of column) is considered dirty.

Finally, you can track each cell individually (at higher cost) by keeping a 1:1 array of "changed" flags per display position. This is more involved but achieves some really impressive results updating "exploding windows" over a very slow comm link! An astute observer will realize that it is only changing the individual characters that need to be changed on the physical display and skipping over all the rest.
I.e., the window grows in almost linear time instead of squared.

[this is actually a worthwhile "thought experiment". Consider the cost (time) required to draw a growing rectangle from the center out when you have to pass *all* of its contents to the display vs. when you only have to pass the "growing edges" of that region. The former gets sluggish as the rectangle enlarges (the work required grows with the "area" being repainted each time) while the latter grows more at a constant rate (the work required grows with the *perimeter* being repainted)]

The point about all of these is that the "change" information is updated at each access into the virtual display (buffer). And, that the cost of doing so can be much smaller than tracking the change for each "character" (in your case, "pixel"). So, if you are always "passing rectangles" to your frame buffer (even a character glyph is a rectangle -- just smaller!), you can economically track the dimensions of the "smallest enclosing rectangle". Then, when it is time to "update" the display, you *know* that everything outside of this region (marked with a begin and end (row,column)) is "unchanged". You can either opt to send that enclosing rectangle in its entirety; or, can further refine the "change detection" -- but, you only have to examine *its* contents, nothing more.

If two or more threads are painting into the virtual frame buffer concurrently, you treat it just like any other shared resource. "Something" arbitrates access. The crudest case puts a mutex, monitor or some other access mechanism immediately above the "write pixel" level (note that it doesn't have to apply to *each* "write pixel" -- you can process groups of pixels in much the same way I process "strings" of characters). These accesses manipulate the *shared* begin/end variables that govern the entire display.
If your threads are each accessing different, NONOVERLAPPING portions of the display, then there is no contention -- you can treat the two regions (of arbitrary size and shape) as two different resources, each with an exclusive user. So, each can have a "begin/end" marker. Your RFB protocol can then know that the changes are in at most two rectangular regions which it can further examine (for more fine-grained optimization) and process individually.

E.g., I frequently display a time/date or other status information in corners of the display. These are updated by other tasks without concern for what the "application" may be doing (the application may be running *under* some "configuration windows" as the user is modifying the system's configuration *while* it is running... and something else is updating the clock, etc.).

If I show the time in columns 70-80 in row 25, then "end" will always be (80,25). If "status" is displayed in a 4x8 window anchored at (1,1), then "begin" will typically be (1,1). So, a simple begin/end per display (!) would give me no useful information: the entire display APPEARS to need to be updated, always!

OTOH, if I deal with begin/end per line, this quickly improves: if there is nothing else changing on "row 25", then at most those 10 characters in the "time" will need to be updated. Likewise, only the few characters in the "status" area will be updated. Meanwhile, the bulk of the application window is optimized by itself.
>> If you are updating the display after *a* new
>> window is created (or old window destroyed), then
>> your changes will be confined to a rectangular region
>> somewhere (i.e., where the window is/was).
>
> Oh no. The window can be only partially visible, parts
> of it covered by other windows' edges etc., so you don't
> know that.
Correct. But, at some level in your window system, there is code that knows which window pixels are "exposed". I.e., which portions of the virtual framebuffer will get scribbled on. Assuming you don't have something like the SHAPE extension, everything boils down to a bunch of rectangles (a window overlapping another window is, worst case, five rectangles)
> Been there considered all that :-).
>
>>> Well, the end result is indeed that. But someone has to keep
>>> track on which window has been modified, where on the "screen"
>>> this windows origin currently is, how much of it is visible
>>> (i.e. not covered by another window or outside of the "screen"
>>> frame); doing this all the time - while drawing each pixel
>>> or each line (only to add the overhead of clipping and rendering
>>> to the vnc server as well....) by the vnc is insane, to put it
>>> mildly :-).
>>
>> No, you don't look at individual pixels.  You look at
>> "drawing objects" (lines, circles, regions, etc.) and
>> track *their* "extents".
>
> So what happens if the application draws a line in one window
> (it clips it to its limits and draws it into its offscreen
> buffer) and the line is only half visible because of a
> covering window. You clip the line to all possible windows
> and do that with every line? Give me a break :-).
No. Something eventually parses the window hierarchy (either while the line is being drawn or when you map the windows onto the display) and decides which portion of which window actually gets drawn on *this* particular pixel in the frame buffer. When that pixel is written into the framebuffer, you know that "pixel at (row,col) has been changed; does this affect begin/end?"

You are free to move the windows around and then "redraw" them. Some other window's contents may likely end up being drawn on *that* pixel. Or not. The begin/end information doesn't care what window it is coming from. All that is important is that a particular frame buffer location has been *changed*. I.e., your "change detector" LATER would have found this pixel. I'm just giving it advance warning of where it is -- and, more importantly, where it *isn't* (i.e., "don't check anything outside of this rectangular region because I haven't made any changes there!")
> Let's put it in numbers: an 800x600, 16 bpp buffer is roughly > 1 megabyte. On the system I run DPS currently, DDRAM is about > 1 Gbyte/second fast (32 bit 266 MHz data rate). Somewhat less > in reality but close enough for our purposes. Comparing 1M frames > 16 times per second makes 32 megabytes transferred, or 3% of > the memory bandwidth; I'll put that against any multitask > multiwindow OS running a VNC server, just point me to one > to compare to. Assuming your forwarding method doubles > the application graphics overhead it will take you to reach > 1.6% load for graphics to be beaten. IOW, having a _constant_, > reasonably low overhead for change detection can only be > beaten on very simple systems (no multi-window etc.).
But your "constant" is constant even if nothing has changed in the display! All the begin/end (and similar) tracking does is give you advance notice of where you are *likely* to find changes (and where you WON'T find ANY!). Furthermore, it gives information about the nature of those changes.

E.g., you can compare two frame buffers and get lots of detail regarding individual pixels that have changed (for example, writing an 'F' over an 'E' in a line of text *within* a window, etc.). But, that can be too much detail. You have to then decide "gee, the cost of treating that single pixel plus the pixel two dots to the left as individual changes exceeds the cost of treating this 8x8 region as a single 'change'". This is because you are looking at dots -- the contextual information is no longer present (is this dot part of a window border that was drawn? am I likely to find other dots nearby that have also changed? or, is this just one little dot in a sea of constancy?)

That was my point about where your RFB code lies in the "layering" of things. As you blur those layers, you (potentially) gain efficiency because you can propagate more information between those layers. Only you can tell what your code does.

I'm suggesting you look at your user interface and see if all of your changes are, in fact, confined to specific (even if they vary over time) regions that *some part* of your code is aware of -- but that your "change routine" is having to *detect*, each time you invoke it. Then, consider parameterizing your "change" routine so that it only looks at regions that you consider *likely* to have seen changes -- and, consequently, *ignoring* regions that you KNOW you haven't changed!
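The parameterized "change" routine might look something like this sketch (function name and buffer layout are hypothetical): diff two buffers only inside a region the drawing code says it touched, instead of scanning every pixel.

```python
# Buffers are flat row-major pixel lists; region is an inclusive
# (x0, y0, x1, y1) rectangle supplied by the caller.

def diff_region(old, new, width, region):
    """Yield (x, y) of pixels that differ, looking only inside region."""
    x0, y0, x1, y1 = region
    for y in range(y0, y1 + 1):
        row = y * width
        for x in range(x0, x1 + 1):
            if old[row + x] != new[row + x]:
                yield (x, y)

W, H = 8, 4
old = [0] * (W * H)
new = list(old)
new[2 * W + 3] = 5            # one pixel changed at (3, 2)
new[0] = 9                    # a change OUTSIDE the declared region

# Caller declares that only (2..4, 1..3) was drawn into; the pixel at
# (0, 0) is never examined. That's the "don't check anything outside
# this rectangle" contract -- the drawing code must be honest about
# what it touched, and in exchange the diff cost scales with the
# declared region, not the frame size.
print(list(diff_region(old, new, W, (2, 1, 4, 3))))   # -> [(3, 2)]
```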
> Finally, I was wrong on my 10%. In fact not me, my tt command > (which I wrote :-) was wrong. It lists real % overhead only when > there is some significant system load, otherwise it includes > idling, task switch etc. I now tried it again - ran the "hog" > task, which just does "bra *" - and the results can be seen at > http://tgi-sci.com/misc/vnc.gif . Clearly my tt hack does not > work all that well, percentages add up only to 99 in this > case (I have seen 100 and I have seen 98 :-) ). But for the > sake of my own usage, to estimate stuff etc. it is OK. > The tt task itself takes up that much time because it scrolls > the entire window up (graphically, in fact dps does it for > it when it sees the LF at the bottom line).
On Thu, 14 Oct 2010, D Yuniskis wrote:

> Date: Thu, 14 Oct 2010 09:44:56 -0700 > From: D Yuniskis <not.going.to.be@seen.com> > Newsgroups: comp.arch.embedded > Subject: Re: PDA as "X terminal" > > Hi Rob, > > Spam@ControlQ.com wrote: >> On Wed, 13 Oct 2010, D Yuniskis wrote: >> >>> Ideally, I'd prefer a BSD platform (decades of experience there). >>> But, I can adapt, if need be. >> >> Who wouldn't? > > <grin> There appear to be *lots* of folks in that "wouldn't" camp! :-/
I can't understand the enthusiasm for Linux given the BSD platforms ...
>> OpenBSD runs on Palm these days, and for an interface, one might look at >> Evas from the enlightenment project (now in Beta) ... It supports the >> Xscale PXA 2X0 Arm based palms, including the Tx and the Tungsten ... you >> could do worse than having what you asked for 8-). > > I've been looking at a bunch of devices that I've rounded up > recently as well as over-the-years. I ruled out most of the > Palms (that I have) as too light/flimsy. Some are too > gimmicky (e.g., the "sliding motion" -- what else could you > call it? -- of the T3). I'm also not fond of the reserved > area on the Palm screens. > > Personally, the HP 3900's seem the best candidates so far. > They feel "substantial" (plus the expansion sleeves are a > win). But, I think they would only "fit" use by a "man" > (using that as a stereotypical term) -- too meaty for a > smaller framed man or "woman" to have to lug around all > day long. > > If I had to make a decision *today*, I'd opt for the HP > hx4700 based on size/weight/features/etc. It seems more > manageable in terms of size/weight. The CF slot is > built-in instead of being provided by the sleeve. Battery > seems ample (I would have to see how it fares after long > term use and repeated partial charges, etc.). It's a > 600MHz PXA270 so has more than enough balls to "draw > displays". (Accessing the stylus is a bit of a pain -- but, > that will be removed so it's not germane to my needs.) > > I'll have a look at some of the tablet products and see > if any will give me the expansion capability in the right > size range. > > If OBSD is supporting the PXA's, I suspect NBSD does, as > well? I'll poke around both sites and see what turns up. > Thanks! >
I don't disagree with your assessment. I have in front of me an HP IPAQ 2410, and a Palm TX (both PXA270's), as well as an old Pilot 5000 (no wireless). Here's a link to the Handhelds.org site which has an assessment (likely out of date) of the state of Linux on various models: http://handhelds.org/moin/moin.cgi/SupportedHandheldSummary

The Zaurus is supported pretty well by OpenBSD, and even does a nightly build -- no really, OpenBSD doesn't like cross compilation for platform support; they have a Zaurus which builds makeworld in the lab. There is a very talented young individual, Marek Vasut(?sp)?, who worked on his thesis getting Linux onto Palm, and putting OpenBSD there, but he has had a bit of a falling out with Theo, the details of which I am unsure. As for NetBSD support ... start with the Zaurus and work from there, I guess ...

I have been hopeful of making older PDA's useful for hacking and such, particularly given the 802.11 support, but in spite of getting Linux and other operating systems running on them, their viability is limited. I've had some degree of success with a Nintendo DS and the Homebrew movement for raw (on the hardware) hacking, also with 802.11 support.

At this point, I'm probably going to deal my old PDAs off, and move to an Android tablet ... possibly an Archos, or a Witstec A81E ... but who knows what will be announced next week. I can wait a week or two ...

Cheers, Rob.
On Thu, 14 Oct 2010 19:40:55 -0400, Spam@ControlQ.com wrote:

><snip> >I can't understand the enthusiasm for Linux given the BSD platforms ... ><snip>
The FreeBSD kernel is a thing of sheer beauty. Linux is definitely a stone soup, by comparison. Whole pieces of code copied, pasted, and patched in places. But the herd goes where it goes. Jon