EmbeddedRelated.com
Forums

USB memory sticks for root file system - experiences

Started by acd November 14, 2011
On 2011-11-14, acd <acd4usenet@lycos.de> wrote:
> I am not sure whether folks in this group would count a device such as
> the pogoplug as an embedded device, but it applies also to many
> development boards.
>
> I have been experimenting for some time with two pogoplugs running
> Linux, booting Arch Linux from a USB memory stick. It seems that the
> cheap supermarket memory sticks in particular do not last long. In one
> case, within a week the USB memory stick became unusable; it cannot
> even be formatted any more. The last thing I did to it - after it
> showed problems with ext2 - was format it back to vfat and run a
> capacity checker under Windows.
A root filesystem mounted read-only is a reasonable thing to demand and quite straightforward to achieve with judicious use of memory filesystems and/or NFS mounts. If there's data that absolutely _must_ be kept locally across reboots then I'd be thinking in terms of cpio'ing it into and out of a disk partition in an init script - that is likely to prove much less disruptive than putting an actual writable filesystem on the device.

I have something similar sitting on my desk in front of me - my computer terminal is a Neoware CA21 thin client. I found the stock "firmware" a little limiting so I threw a NetBSD installation on it. The integrated 256MB disk-on-module was a little limiting so I used a 1GB USB drive instead as a temporary measure until I got around to replacing it. Perhaps 18 months later I still haven't quite got around to it and it's still working fine. Again, that's a read-only filesystem in normal operation (i.e. when you're not tweaking the configuration), which also has the advantage that you can simply turn it off when you're done since all the apps are essentially stateless anyway.

It did take a couple of months to refine that installation to read-only operation, during which it had no real issues though, which suggests better longevity than you have been experiencing. The USB drive is actually a promotional freebie from the local university so I can't cite a particular make and model, and dmesg isn't particularly revealing either since it too does not cite a manufacturer for it:

sd0 at scsibus0 target 0 lun 0: <USB, Flash Disk, 1100> disk fixed
sd0: 956 MB, 1968 cyl, 16 head, 63 sec, 512 bytes/sect x 1957888 sectors

-- 
Andrew Smallshaw
andrews@sdf.lonestar.org
Hi Andrew,

On 11/15/2011 7:00 AM, Andrew Smallshaw wrote:

> I have something similar sitting on my desk in front of me - my
> computer terminal is a Neoware CA21 thin client. I found the stock
> "firmware" a little limiting so I threw a NetBSD installation on
Interesting. I use CA5's as X terminals and have been wanting to repurpose some CA10's (1GHz/1GB) for other bits of fabric (firewall, name service, etc.) but have yet to make time to design a CF adapter that will fit in the thing (the connector points the wrong way). So, throwing laptop disk drives at them does the job without much effort.
> it. The integrated 256Mb disk-on-module was a little limiting so
> I used a 1GB USB drive instead as a temporary measure until I got
> around to replacing it. Perhaps 18 months later I still haven't
> quite got around to it and it's still working fine. Again, that's
> a read-only filesystem in normal operation (i.e. you're not tweaking
> the configuration) which also has the advantage you can simply turn
> it off when you're done since all the apps are essentially stateless
> anyway.
How did you decide what parts of the system needed to be tweaked to get it to run R/O? E.g., mount an MFS for /var (and probably cut down on logging and/or set newsyslog.conf to compress and discard logs, *often*).

Do you run many *real* apps on the boxes? Or, just use it to host an X server?

(e.g., I figure I could afford to create a small writable portion on the flash device and then intentionally limit the applications that would want to do those writes. That way, I could put my zone files there to start up the name service while letting other "more dynamic" file system uses take place on an MFS mounted elsewhere.)
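For concreteness, the sort of thing I have in mind (the MFS sizes are guesses, and the newsyslog line is only illustrative of aggressive rotation):

```
# /etc/fstab -- memory filesystems for the writable bits
swap  /tmp  mfs  rw,-s=16m  0 0
swap  /var  mfs  rw,-s=32m  0 0

# /etc/newsyslog.conf -- keep one small compressed generation, rotate often
# logfile           owner:group  mode  count  size  when  flags
/var/log/messages   root:wheel   644   1      64    *     Z
```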
> It did take a couple of months to refine that installation to
-------------------------^^^^^^ Ouch! Hence my reluctance to tackle this (until I've decided that the CA10's are, indeed, the right platform on which to invest that time)
> read-only operation during which it had no real issues though,
So, root is *mounted* R/O? As such, anything that you may have "overlooked" will eventually cause a panic?
> which seems better longevity than you have been experiencing. The
> USB drive is actually a promotional freebie from the local university
> so I can't cite a particular make and model, and dmesg isn't
> particularly revealing either since it too does not cite a
> manufacturer for it:
>
> sd0 at scsibus0 target 0 lun 0: <USB, Flash Disk, 1100> disk fixed
> sd0: 956 MB, 1968 cyl, 16 head, 63 sec, 512 bytes/sect x 1957888 sectors
Cool!
On 2011-12-01, Don Y <not.to.be@seen.com> wrote:
> Interesting. I use CA5's for Xterminals and have been
> wanting to repurpose some CA10's (1GHz/1GB) for other
> bits of fabric (firewall, name service, etc.) but
> have yet to make time to design a CF adapter that
> will fit in the thing (connector points the wrong way).
> So, throwing laptop disk drives does the job without
> much effort.
I wanted to avoid a disk at all costs. One of the main motivations for using a thin client in the first place was a completely silent machine: it removes one source of potential distraction (or possibly, it removes one potential excuse). In any case there's no room for an HDD in the CA21 case. The "right" sort of CF adapters are available but it takes a lot of searching and sorting out to isolate the correct ones - it doesn't help that a lot of sites don't seem to fully appreciate the differences between form factors, so they don't provide the necessary details unless you're willing to scrutinise images for details a couple of pixels high.
> How did you decide what parts of the system needed to be
> tweaked to get it to run R/O? E.g., mount an MFS for /var
> (and probably cut down on logging and/or set newsyslog.conf
> to compress and discard logs, *often*).
Just /var and /tmp on MFS mounts, the former of which is populated from a tar file at boot time. I haven't seen the need to trim logging: since these are used as terminals they get powered off and reset at the end of the day anyway.
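That boot-time population step might look something like the sketch below; the skeleton tarball path and the wrapper function are made up for illustration, and a real rc.d script would have more plumbing around it.

```shell
#!/bin/sh
# Sketch: /var lives on an MFS, so it comes up empty at every boot and
# is seeded from a skeleton tarball kept on the read-only root.
# The tarball name below is hypothetical.

populate_var() {        # $1 = skeleton tarball, $2 = fresh MFS mount point
    tar -xpf "$1" -C "$2"
}

# In an rc script, after the memory filesystems are mounted:
# populate_var /etc/var.skel.tar /var
```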
> Do you run many *real* apps on the boxes? Or, just use
> it to host an Xserver?
Not really. Mostly it's X querying the relevant machine with XDMCP, or running rdesktop when I need a Windows machine. SSH, telnet and Minicom (to a serial console server) for command line stuff. I have experimented with local apps and most things I use are fine. OpenOffice takes a while to load (perhaps 15 seconds) but it's fine after that. The only problematic app is Firefox - it's slower than a dead slug. That's a pretty big limitation for me and, I suspect, a lot of users - I need a web browser even for a couple of in-house databases.
>> It did take a couple of months to refine that installation to
>
> -------------------------^^^^^^ Ouch! Hence my reluctance to
> tackle this (until I've decided that the CA10's are, indeed,
> the right platform on which to invest that time)
I'll re-phrase that. A couple of months to _get_around_ to doing the job properly. Perhaps half an hour to sort out when I finally did. There are plenty of examples around online if you look in the right places: it is essentially similar to the way live CDs or even installation media work. There's also a guide on this very scenario at http://www.bsdnexus.com/NetBSD_onastick/install_guide.php
>> read-only operation during which it had no real issues though,
>
> So, root is *mounted* R/O? As such, anything that you
> may have "overlooked" will eventually cause a panic?
Yes, and I wouldn't expect panics either. Userland issues, sure, although the only problem I recall is with the SSH client and its authorised keys file - the home directories for some users are read-only too.

I think I'd better explain that, since there are two classes of login. The first is conventional logins whose home directories are NFS mounted, so there's no issue with those. However, I also have some pseudo-users with home directories in /usr (root filesystem). They have the login names of various systems, no password, and when logged in their .profiles fire up X and connect to the relevant system. Crude, but a hell of a lot more straightforward than some graphical front end. However, when SSHing directly out of the machine it tends to be as myself, so I have a writable home directory anyway.

-- 
Andrew Smallshaw
andrews@sdf.lonestar.org
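The pseudo-user arrangement described above might look roughly like this as a ~/.profile. The hostname and the X server path here are invented, and real setups vary:

```
# ~/.profile for a pseudo-user whose login name matches the target host.
# Start a local X server that asks that host's display manager for a
# session via XDMCP; when the session ends, the login ends too.
exec /usr/X11R7/bin/X :0 -query buildbox
```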
Hi Andrew,

On 12/2/2011 4:00 PM, Andrew Smallshaw wrote:
> On 2011-12-01, Don Y <not.to.be@seen.com> wrote:
>> Interesting. I use CA5's for Xterminals and have been
>> wanting to repurpose some CA10's (1GHz/1GB) for other
>> bits of fabric (firewall, name service, etc.) but
>> have yet to make time to design a CF adapter that
>> will fit in the thing (connector points the wrong way).
>> So, throwing laptop disk drives does the job without
>> much effort.
>
> I wanted to avoid a disk at all costs. One of the main motivations
> for using a thin client in the first place was for a completely
> silent machine so it removes one source of potential distraction
> (or possibly, it removes one potential excuse).
Agreed -- for an X terminal. It's delightful being able to hide the noisy machines (real servers) elsewhere and concentrate on the work at hand... *soft* music in the background instead of being forced to wear headphones, etc.

But, I've had small machines providing "key services" tucked under a dresser in the bedroom for more than a decade. A SPARCstation LX was my favorite in this role (*reasonably* quiet -- if you used a new-ish disk -- and low power... not like the majority of PC offerings). Given its unfortunate location (think: sleeping), it is probably an even *better* candidate for "silent operation".

[unfortunately, the LX just didn't have the horsepower to keep up with my routing needs as more nodes and networks came online :< ]
> In any case there's
> no room for an HDD in the CA21 case.
It looks like the CA21 is about the size of the CA5. The CA10 is probably twice that volume. E.g., it supports a single PCI slot, has provisions for a PCMCIA option on the PCB (i.e., lots of real estate), etc. A laptop drive fits easily.

The PCI slot and dual DVI+VGA connectors (it will run dual-headed) are the real draw, for me (plus the fact that they were freebies). E.g., I stuffed a 4-NIC PCI card in the firewall box so it can straddle the routing between *all* the networks, here (WAN on one NIC, wireless AP on another, "traditional" computing on yet another, and automation and multimedia on the last two). I'd like to deploy some of the others for various "dedicated" roles on those other networks (e.g., media services, etc.) but those apps tend to be more (persistently) stateful...
> The "right" sort of CF adapters are available but it takes a lot of
> searching and sorting out to isolate the correct ones - it doesn't
> help that a lot of sites don't seem to fully appreciate the
> differences between form factors so don't provide the necessary
> details unless you're willing to scrutinise images for details a
> couple of pixels high.
Exactly. Where is pin 1? Which direction will the card extend into the case (the IDE44 connector is located on an edge of the board so if the adapter mounts "the wrong way", you can't plug the adapter into the connector due to mechanical interference with the case)? Will the CF card sit *on* the adapter or hang below it (interference problems, etc.)? Unfortunately, I only have one or two of the original "modules" (which, of course, are designed to fit very nicely into the space provided). They are a bit smallish (i.e., imagine fitting a "flush" set of choices for xfs on that card!) and have XPe on them currently.
>> How did you decide what parts of the system needed to be
>> tweaked to get it to run R/O? E.g., mount an MFS for /var
>> (and probably cut down on logging and/or set newsyslog.conf
>> to compress and discard logs, *often*).
>
> Just /var and /tmp on MFS mounts, the former of which is populated
> from a tar file at boot time. I haven't seen the need to trim
> logging: since these are used as terminals they get powered off
> and reset at the end of the day anyway.
Understood. In my case (firewall, DHCP, DNS, NTP, xfs, etc.) the whole point was to leave them running 24/7/365 so that any *other* machines could avail themselves of those services as/when needed. All of them tend to *want* to scribble notes someplace -- or, be easily reconfigurable as needs change.
>> Do you run many *real* apps on the boxes? Or, just use
>> it to host an Xserver?
>
> Not really. Mostly it's X querying the relevant machine with XDMCP,
> or running rdesktop for when I need a Windows machine. SSH, telnet
> and Minicom (to a serial console server) for command line stuff.
> I have experimented with local apps and most things I use are fine.
But you remotely mount a $HOME, etc. (?) Something I can't do as *this* is supposed to be the system that others depend upon (i.e., the bottom-most turtle).
> OpenOffice takes a while to load (perhaps 15 seconds) but it's fine
> after that. The only problematic app is Firefox - it's slower than
> a dead slug. That's a pretty big limitation for me and I suspect
> a lot of users - I need a web browser even for a couple of in-house
> databases.
Ah. The only time I tend to use a browser on an X terminal is for Solaris/Jaluna help/man pages. I should try it, though. With 1GHz and 1GB and Gb fabric (though I think the CA10 only runs at 100M??) it should be a client-side limitation. (i.e., not *running* the browser on the X terminal iron but just using it for display services)
>>> It did take a couple of months to refine that installation to
>>
>> -------------------------^^^^^^ Ouch! Hence my reluctance to
>> tackle this (until I've decided that the CA10's are, indeed,
>> the right platform on which to invest that time)
>
> I'll re-phrase that. A couple of months to _get_around_ to doing
> the job properly. Perhaps half an hour to sort out when I finally
> did. There are plenty of examples around online if you look in
> the right places: it is essentially similar to the way live CDs or
> even installation media work. There's also a guide on this very
> scenario at http://www.bsdnexus.com/NetBSD_onastick/install_guide.php
I see the bigger problem being the effort required to determine what the applications (that will be hosted on that diskless iron) expect in terms of writable file systems. E.g., long-running services *will* tend to create bigger log files, you'll *want* those logs (since the box is providing key services), apps may need to update persistent configuration data, etc.

I was chagrined to discover that PostgreSQL won't support a R/O database -- even if you never MODIFY its contents! E.g., I had planned on keeping the catalog of music selections in a R/O database for the multimedia server (which I wanted to host using another of these silent, fan-less boxes) so that it (and the music itself -- though requiring external media to store due to size) was accessible whenever a user wanted it (i.e., 24/7/365). The fact that it apparently must reside on R/W media (even though it is never deliberately modified) made that a considerably harder challenge.

[desktop applications seem to have a cavalier attitude towards resources: they expect them to be limitless and, at the very least, have NO IDEA what their actual requirements might be!]
>>> read-only operation during which it had no real issues though,
>>
>> So, root is *mounted* R/O? As such, anything that you
>> may have "overlooked" will eventually cause a panic?
>
> Yes, and I wouldn't expect panics either.
Unlike a *real* embedded system, there don't seem to be any details available that tell you just how much memory the kernel will call upon in whatever situations it is likely to encounter with a given set of apps running on it. Since it makes no sense to have swap on a device like this (mount swap on an MFS? why not just use the underlying memory behind the MFS for physical memory??! [1]), any time any set of kernel+apps exceeds the total physical memory available...

[1] Actually, wrapping memory in an MFS with a co-resident swap (like Solaris' /tmp) can make certain configurations of apps more "runnable" (without altering sources).

I would *prefer* the panic (at least while shaking out the various bugs in the configuration) as that draws immediate attention to each (new?) problem. Easier to notice than having to parse *.error syslog entries. :-(
> Userland issues, sure,
> although the only problem I recall is with the SSH client and its
> authorised keys file - the home directories for some users are
> read-only too.
>
> I think I'd better explain that, since there are two classes of
> login. The first is conventional logins whose home directories
> are NFS mounted so there's no issue with those. However, I also
> have some pseudo-users with home directories in /usr (root filesystem).
> They have the login names of various systems, no password and when
> logged in their .profiles fire up X and connect to the relevant
> system. Crude but a hell of a lot more straightforward than some
> graphical front end.
Understood. I manage my ssh, telnet, etc. connections similarly (and make a point to change $PS1 to `hostname` everywhere to remind me of who I'm talking to!)
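The prompt trick is a one-liner; this sketch assumes a Bourne-style shell:

```shell
# Embed the host name in $PS1 so every session identifies which box it is.
PS1="$(hostname)> "
```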
> However, when SSHing directly out of the
> machine it tends to be as myself, so I have a writable home
> directory anyway.
Is *your* $HOME NFS mounted? Or, "volatile" in an MFS? I.e., can you *do* anything with *just* this machine up and running (and no others)? E.g., I can (currently, owing to the presence of the laptop drive) fire up a CA5 (as an X terminal) and "work" (run apps) on the CA10 (I often use this to *write* code that I don't yet need to compile... when I don't want to deal with starting any bigger iron).
On 2011-12-03, Don Y <not.to.be@seen.com> wrote:
> But, I've had small machines providing "key services"
> tucked under a dresser in the bedroom for more than a
> decade. SPARCstation LX was my favorite in this role
> (*reasonably* quiet -- if you used a new-ish disk -- and
> low power... not like the majority of PC offerings).
> Given its unfortunate location (think: sleeping), it is
> probably an even *better* candidate for "silent operation".
It sounds like you and I have similar home set-ups, not just here but in a few other things you say later on. I have a similar machine here for file & print, Postgres, Apache and a few odds and ends (DHCP, DNS, etc.) - that's a 600MHz VIA EPIA board I've been using for a number of years.

I'm thinking of replacing that more for the networking than CPU limits: I'm beginning to think I really need to upgrade it to gigabit ethernet and ideally dual NICs (or at least a NIC that supports VLANs): the sole PCI slot is already occupied by a SATA disk controller.

The disk is one of the old 5400 RPM Hitachi CinemaStars which I think I mentioned in a previous thread of yours a year or so ago. They're nice and quiet - 24 dB even when active. Like you I came to the conclusion that sometimes there's no substitute for a disk.
> But you remotely mount a $HOME, etc. (?) Something I can't
> do as *this* is supposed to be the system that others
> depend upon (i.e., the bottom-most turtle).
Yes - from that EPIA based server. I did have some ideas of using them locally more than I actually do but the real motivation is chiefly so you can plug in a USB drive and access it from your seat. Having your usual home directory available makes that a lot more convenient but of course you could work around it if it wasn't.
> Ah. The only time I tend to use a browser on an X terminal
> is for Solaris/Jaluna help/man pages. I should try it, though.
> With 1GHz and 1GB and Gb fabric (though I think the CA10 only
> runs at 100M??) it should be a client-side limitation.
> (i.e., not *running* the browser on the X terminal iron
> but just using it for display services)
If it's just being used as a terminal, performance is fine and generally indistinguishable from a desktop, even on 100Mbit. I run at 1400x1050x24 and even full-screen DVD playback is generally acceptable if it is another machine doing the actual decoding. Slow panning shots are slightly cinefilm-ish: you can see the frames but not enough to spoil what you are watching. Audio is fed to the speakers via an analog connection to my usual physical "full size" machine on unused pairs of the network lead - I haven't tried network audio.

I haven't played with the Unichrome's MPEG decoder on the Neowares yet, but when that server still had a head (and with a slightly less capable Unichrome chip) it worked well. OTOH I've no idea if that hardware acceleration is network transparent. I suspect it may not be.

-- 
Andrew Smallshaw
andrews@sdf.lonestar.org
Hi Andrew,

On 12/3/2011 5:15 PM, Andrew Smallshaw wrote:
> On 2011-12-03, Don Y <not.to.be@seen.com> wrote:
>> But, I've had small machines providing "key services"
>> tucked under a dresser in the bedroom for more than a
>> decade. SPARCstation LX was my favorite in this role
>> (*reasonably* quiet -- if you used a new-ish disk -- and
>> low power... not like the majority of PC offerings).
>> Given its unfortunate location (think: sleeping), it is
>> probably an even *better* candidate for "silent operation".
>
> It sounds like you and I have similar home set ups, not just here
> but in a few other things you say later on. I have a similar
> machine here for file & print, Postgres, Apache and a few odds and
> ends (DHCP, DNS etc) - that's a 600MHz VIA EPIA board I've been
> using a number of years.
I wanted to get the "core services" that I use off of bigger machines and *into* the fabric, so to speak. It was annoying to have to fire up a UN*X box just to get name services running so a Windows box could access a network printer, or, the font server running just so I could use a particular font in a display, etc. For the automation and multimedia applications, this is even more true (I definitely don't want to have to keep a "real machine" running just to listen to music or control the furnace!)
> I'm thinking of replacing that more for the networking than CPU
> limits: I'm beginning to think I really need to upgrade it to
> gigabit ethernet and ideally dual NICs (or at least a NIC that
> supports VLANs): the sole PCI slot is already occupied by a SATA
> disk controller.
Do you really *need* the speed? I have all of my Gb hosts on a single 8-port switch (actually, I think the other switches are also Gb though the hosts that they serve often are not). My thinking when I was assigning switches was that printers and X terminals really don't *need* that sort of bandwidth. Nor does the automation stuff (though I suspect the multimedia *will*). Most of my files are served from bigger/faster boxen which already have fat pipes...

[I've recently relearned the lesson that I keep having to learn each time I upgrade fabric: "No matter how fast the fabric gets, transferring *archives* will always take a LOT longer than you think -- because the archives get bigger coincident with the fabric getting faster!" E.g., a few TB "over the wire" takes forever -- even at Gb speeds! :< I.e., it seems like SneakerNet (though with huge media) will always have a role for truly high-bandwidth transfers ;-) ]

So, it seems more effective to keep the "muscle" connected with wide pipes and not worry about the display/print/etc. services.
> The disk is one of the old 5400 RPM Hitachi CinemaStars which I
> think I mentioned in a previous thread of yours a year or so ago.
> They're nice and quiet - 24 dB even when active. Like you I came
> to the conclusion that sometimes there's no substitute for a disk.
<frown> It's often an expedient. E.g., it will take me a LONG time to figure out how to support R/O media under PostgreSQL (a key requirement for some of the product development work I am doing). But, silly to prevent myself from moving forward populating those databases until then! And, even sillier to host those DBs on a big, noisy beast.

(the bigger iron tends to see more use in the winter months when the excess BTUs are more welcome in the office -- definitely NOT in the summer months!)
>> But you remotely mount a $HOME, etc. (?) Something I can't
>> do as *this* is supposed to be the system that others
>> depend upon (i.e., the bottom-most turtle).
>
> Yes - from that EPIA based server. I did have some ideas of using
> them locally more than I actually do but the real motivation is
> chiefly so you can plug in a USB drive and access it from your
If you are running NBSD on the CA21, it should be relatively easy (?) to mount sd0 on $HOME -- assuming you aren't beating on that directory heavily?

I never considered the idea of "carrying" $HOME in my pocket and simply plugging it in "wherever" it might be needed. E.g., that would even work on a Windows machine! I'll have to think about this... it sounds like it could be a really good idea! Though keeping the discipline to always use that device for *all* the work on the stuff contained thereon might be hard -- I'd almost have to force myself NOT to back it up onto any other server (lest I be tempted to modify the backed-up version at some point and quickly get the two versions out of sync).

[I tend to think of files as having real, physical locations -- that don't MOVE! But, that allow *me* to move and still access them. :> ]
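Under NetBSD that mount would be a one-liner; the partition letter and mount point here are guesses (check dmesg for the real sd(4) unit and the disklabel for the partition):

```
# Mount the USB stick's filesystem over the home directory.
mount /dev/sd0e /home/don
```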
> seat. Having your usual home directory available makes that a lot
> more convenient but of course you could work around it if it wasn't.
>
>> Ah. The only time I tend to use a browser on an X terminal
>> is for Solaris/Jaluna help/man pages. I should try it, though.
>> With 1GHz and 1GB and Gb fabric (though I think the CA10 only
>> runs at 100M??) it should be a client-side limitation.
>> (i.e., not *running* the browser on the X terminal iron
>> but just using it for display services)
>
> If it's just being used as a terminal performance is fine and
> generally indistinguishable from a desktop, even on 100Mbit. I
> run at 1400x1050x24 and even full screen DVD playback is generally
> acceptable if it is another machine doing the actual decoding. Slow
Wow, I had never considered watching full-motion video on an X terminal. Most of my "computer use" has fairly static displays, so X has been a real win for me. I started using NCD 19r's many years ago. Then, 19c's, HMXpro, etc. Each time, getting more features/performance and smaller footprints (e.g., the HMX "pizza boxes" served 75-pound 21"/25" *CRTs*, while the CA5's support similarly sized LCD monitors in 1/10th the volume/mass!)

I will have to try that just to see what the experience is like. I had assumed bandwidth requirements would be too high. (I've been looking hard at suitable CODECs for the video clients in my multimedia solution for similar reasons)
> panning shots are slightly cinefilm-ish: you can see the frames
> but not enough to spoil what you are watching. Audio is fed to
> the speakers via analog connection to my usual physical "full size"
> machine on unused pairs of the network lead - I haven't tried
> network audio.
NASd packages seem to be broken pretty often (not sure how much of this is the package maintainer's fault). I used it on the NCD machines but no longer bother with it. If I need audio it tends to be on the multimedia workstation (I use a HiFi or PMP for my background music)
> I haven't played with the Unichrome's MPEG decoder on the Neowares
> yet, but when that server still had a head (and with a slightly
> less capable Unichrome chip) it worked well. OTOH I've no idea if
> that hardware acceleration is network transparent. I suspect it
> may not be.
I think you would have to be able to export (import) a virtual frame buffer. Doubtful that they bothered with that sort of support. I'd be curious to try something like that with the Sun Ray architecture!
On 2011-12-05, Don Y <not.to.be@seen.com> wrote:
> Hi Andrew,
Hi Don. Sorry it's taken a few days to get back to you. It's been one of those weeks where you never seem to stop even for a minute. Even now I can't talk for too long since it's now pub o'clock.
> On 12/3/2011 5:15 PM, Andrew Smallshaw wrote:
>
> Do you really *need* the speed? I have all of my Gb
> hosts on a single 8 port switch (actually, I think
> the other switches are also Gb though the hosts that
> they serve often are not). My thinking when I was
> assigning switches was that printers and X terminals
> really don't *need* that sort of bandwidth. Nor does
> the automation stuff (though I suspect the multimedia
> *will*). Most of my files are served from bigger/faster
> boxen which already have fat pipes...
For document serving and printing, no, the speed is pretty much an irrelevance. Backups are another issue entirely... a full network backup is up to around 20 hours now.
> I never considered the idea of "carrying" $HOME in my
> pocket and simply plugging it "wherever" it might be
> needed. E.g., that would even work on a Windows machine!
> I'll have to think about this... it sounds like it could
> be a really good idea! Though keeping the discipline to
> always use that device for *all* the work on the stuff
> contained thereon might be hard -- I'd almost have to
> force myself NOT to back it up onto any other server
> (lest I be tempted to modify the backed up version at
> some point and quickly get the two versions out of sync)
Possibly doable technically but it wouldn't suit me at all so I've never really thought about it. I'm a bit of a slut when it comes to computers - log in on one and then spread myself around half a dozen systems in the same session.
> [I tend to think of files as having real, physical
> locations -- that don't MOVE! But, to allow *me*
> to move and still access them. :> ]
I'm the polar opposite. My stuff is in ~ and I don't care where that is, provided that it's always there. The whole point of the network is to set up things like that and then completely ignore them after that.
>> If it's just being used as a terminal performance is fine and
>> generally indistinguishable from a desktop, even on 100Mbit. I
>> run at 1400x1050x24 and even full screen DVD playback is generally
>> acceptable if it is another machine doing the actual decoding. Slow
>
> Wow, I had never considered watching full motion video
> on an X terminal. Most of my "computer use" has fairly
> static displays so X has been a real win for me. I started
It's not quite there but it isn't that far off. I suspect some of the glitches are down to sheer horsepower at the other end rather than the terminal or the link between them. A 1080p video plays back as jerky as hell even in a smallish window. Your typical clip from Youtube or wherever at 480x320 or so is silky smooth even when scaled up to full screen. I suspect my next upgrade will do it without missing a beat and quite simply you won't even need to think about it. My previous terminal was an Axel thin client: great little machines except for the video performance, or lack thereof.
> using NCD 19r's many years ago. Then, 19c's, HMXpro, etc.
> Each time, getting more features/performance and smaller
> footprints (e.g., the HMX "pizza boxes" served 75 pound
> 21"/25" *CRTs*, while the CA5's support similarly sized
> LCD monitors in 1/10-th the volume/mass!)
Still using a 70-something-pound 21" CRT here - it took LCDs a long time to progress to the point where I was happy with them, even when everyone else was claiming they were vastly superior. That's something else on the "to be replaced" list, not least because it's got to the point where it takes a good three or four minutes to warm up. ;-)
>> I haven't played with the Unichrome's MPEG decoder on the Neowares
>> yet, but when that server still had a head (and with a slightly
>> less capable Unichrome chip) it worked well.  OTOH I've no idea if
>> that hardware acceleration is network transparent.  I suspect it
>> may not be.
>
> I think you would have to be able to export (import)
> a virtual frame buffer.  Doubtful that they bothered
> with that sort of support.  I'd be curious to try
> something like that with the Sun Ray architecture!
I thought exactly the same thing.  It'd be interesting to try running a _local_ copy of Xine (about the only thing that supports the hardware acceleration).  It may only be an 800MHz CPU on the CA21 but if my experiences on the 600MHz server are anything to go by it may not be that bad.  It's something I'll keep on the "play with someday" pile, but someday isn't anytime soon.

-- 
Andrew Smallshaw
andrews@sdf.lonestar.org
Hi Andrew,

> Hi Don.  Sorry it's taken a few days to get back to you.  It's been
> one of those weeks where you never seem to stop even for a minute.
> Even now I can't talk for too long since it's now pub o'clock.
Hurry, hurry; quick, quick -- your pint is getting "cold" ;-)
>> Do you really *need* the speed?  I have all of my Gb
>> hosts on a single 8 port switch (actually, I think
>> the other switches are also Gb though the hosts that
>> they serve often are not).  My thinking when I was
>> assigning switches was that printers and X terminals
>> really don't *need* that sort of bandwidth.  Nor does
>> the automation stuff (though I suspect the multimedia
>> *will*).  Most of my files are served from bigger/faster
>> boxen which already have fat pipes...
>
> For document serving and printing, no, the speed is pretty much an
> irrelevance.  Backups are another issue entirely... a full network
> backup is up to around 20 hours now.
Understood.  Any of my machines with big file stores are connected with Gb fabric.  The rest of the nodes on the 100Mb fabric either have nothing to backup (e.g., X terminals, printers, etc.) *or* so little that the bandwidth is an insignificant factor (e.g., the box that serves my DNS, TFTP, DHCP, etc. is only a few GB for a level 0).

Having said that, I still rarely backup the bigger boxes because even Gb speeds take FOREVER to move terabytes!  :<

So, I have tried to impose a bit of discipline in what I keep where -- and *how*.  E.g., much of the filestore is archive.  Incremental backup just wastes CPU time while the process realizes "no changes, here".  So, I keep one backup on offline spindles and another on optical media (you may *think* DVD-ROMs hold a lot... until you actually have to start stacking them up as backup media!)

This leaves the only "dynamic" files in a few well-known places -- so I can even afford to do level 0's there with some frequency!
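For what it's worth, that "level 0 of a small, well-known dynamic area, incrementals only when they earn their keep" discipline can be sketched with GNU tar's listed-incremental snapshots.  All paths and filenames here are made up for illustration, not a description of anyone's actual setup (and --listed-incremental is GNU tar, not BSD tar):

```shell
#!/bin/sh
# Sketch: "level 0" (full) plus a later incremental backup of one
# small, well-known dynamic area, via GNU tar snapshot files.
# SRC and DEST are illustrative; point them at the real trees.
SRC=${SRC:-/tmp/dynamic-demo}
DEST=${DEST:-/tmp/backup-demo}
SNAR="$DEST/dynamic.snar"

mkdir -p "$SRC" "$DEST"
echo "example data" > "$SRC/notes.txt"    # stand-in for real files

# Level 0: delete the snapshot file so tar archives *everything*
# and records the state of the tree in a fresh snapshot.
rm -f "$SNAR"
tar --create --file="$DEST/dynamic.0.tar" \
    --listed-incremental="$SNAR" \
    -C "$(dirname "$SRC")" "$(basename "$SRC")"

# A later run reusing the same snapshot file picks up only what
# changed since the previous level -- i.e., an incremental:
echo "new data" > "$SRC/today.txt"
tar --create --file="$DEST/dynamic.1.tar" \
    --listed-incremental="$SNAR" \
    -C "$(dirname "$SRC")" "$(basename "$SRC")"
```

Run against a mostly-archival tree, the incremental archive stays tiny -- which is exactly the "don't waste time rediscovering 'no changes here'" point above.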
>> I never considered the idea of "carrying" $HOME in my
>> pocket and simply plugging it "wherever" it might be
>> needed.  E.g., that would even work on a Windows machine!
>> I'll have to think about this... it sounds like it could
>> be a really good idea!  Though keeping the discipline to
>> always use that device for *all* the work on the stuff
>> contained thereon might be hard -- I'd almost have to
>> force myself NOT to back it up onto any other server
>> (lest I be tempted to modify the backed up version at
>> some point and quickly get the two versions out of sync)
>
> Possibly doable technically but it wouldn't suit me at all so I've
> never really thought about it.  I'm a bit of a slut when it comes
> to computers - log in on one and then spread myself around half a
> dozen systems in the same session.
I often have a *presence* on many machines concurrently. But, usually each bit of work has a defined "home". I try to reinforce this by only having certain applications on certain boxes. See below.
>> [I tend to think of files as having real, physical
>> locations -- that don't MOVE!  But, to allow *me*
>> to move and still access them.  :> ]
>
> I'm the polar opposite.  My stuff is in ~ and I don't care where
> that is provided that its always there.  The whole point of the
> network is to set up things like that and then completely ignore
> them after that.
I live in fear of having a duplicate -- though "unsynchronized" -- copy of any file.  In the past, I've "temporarily" moved a "working copy" of something I was working on to another machine.  Then, been distracted and lost track of it.  So, new changes get made to the "original copy" until something reminds me, "haven't I already MADE these changes?" -- which prompts a survey of every machine at my disposal until I find the *old* new version.  Followed by the pain of merging the two together.

Most often, this has happened when I pulled a "copy" of something onto a laptop in case I wanted to play with it while traveling.  (I've had a few recent occasions where I was away for months at a time.)  The flaw being the failure to remember which files I may have altered "while away" and, thus, to merge them back into the tree when I returned.

[I never let any machine that has been exposed to The Outside World talk directly to the other machines On The Inside.  So, I have to remember to manually do these updates/check-ins/syncs]
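One cheap guard against that "which copy did I touch?" trap, before any manual merge across the air gap: a recursive, quiet diff of the two trees lists exactly which files have diverged.  The paths below are made up for illustration:

```shell
#!/bin/sh
# List files that differ between the "home" copy of a tree and the
# travelling copy, before attempting a merge.  Paths are illustrative;
# the demo data stands in for the real trees.
HOME_TREE=${HOME_TREE:-/tmp/home-copy}
AWAY_TREE=${AWAY_TREE:-/tmp/laptop-copy}

mkdir -p "$HOME_TREE" "$AWAY_TREE"
echo "edited at home"   > "$HOME_TREE/notes.txt"
echo "edited on laptop" > "$AWAY_TREE/notes.txt"

# -r: recurse; -q: report only *which* files differ, not line-by-line.
# Exit status: 0 = identical, 1 = differences found.
diff -rq "$HOME_TREE" "$AWAY_TREE" || true
```

Files present in only one tree get flagged too ("Only in ..."), which catches the forgotten-new-file case as well as the edited-in-both-places one.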
>>> If it's just being used as a terminal performance is fine and
>>> generally indistinguishable from a desktop, even on 100Mbit.  I
>>> run at 1400x1050x24 and even full screen DVD playback is generally
>>> acceptable if it is another machine doing the actual decoding.  Slow
>>
>> Wow, I had never considered watching full motion video
>> on an X terminal.  Most of my "computer use" has fairly
>> static displays so X has been a real win for me.  I started
>
> It's not quite there but it isn't that far off.  I suspect some of
> the glitches are down to sheer horsepower at the other end rather
> than the terminal or the link between them.  A 1080p video plays
> back as jerky as hell even in a smallish window.  Your typical clip
> from Youtube or where ever at 480x320 or so is silky smooth even
> when scaled up to full screen.  I suspect my next upgrade will do
> it without missing a beat and quite simply you won't even need to
> think about it.  My previous terminal was an Axel thin client:
> great little machines except for the video performance or lack
> thereof.
Hmm....  I will have to "play" and see what it's like.  I rarely have need to watch video on a development machine (as they don't talk to The Outside).
>> using NCD 19r's many years ago.  Then, 19c's, HMXpro, etc.
>> Each time, getting more features/performance and smaller
>> footprints (e.g., the HMX "pizza boxes" served 75 pound
>> 21"/25" *CRTs*, while the CA5's support similarly sized
>> LCD monitors in 1/10-th the volume/mass!)
>
> Still using a 70-something pound 21" CRT here - it took LCDs a long
> time to progress to the point I was happy with them even when
> everyone else was claiming they were always vastly superior.  That's
Agreed.  Especially if you wanted "color correct" output (I do DTP on one of the workstations so need the displays "calibrated").

I finally tossed the big beasts out of fear of "table collapse":

My work area is a set of tables arranged in a "U".  "Machines" sit on the floor, under the tables (esp for towers, this makes a lot of sense... I could never reach the DVD tray in the SB2000 if it sat atop the table!  And, I don't need that 60 pounds of computer -- nor any of the others -- sitting on those table legs!).

So, dicking with cables, etc. means climbing UNDER the tables to access the rear of each machine ("Kids, don't try this at home!").  This is usually an ordeal: my body is nowhere near as flexible as it once was; my eyes can't read the small lettering in the low light and at the close range required in this setting; all of the power and interconnect cabling hangs from the underside of the tables (which always seems to "catch" in my hair!); and, I have no desire to move the "other" equipment out of the way to make it easy to squirm up to the machine in question!  As a result, I'm often banging into things while trying to get around down there.

The tables are made from solid core wooden doors (undrilled), 32" x 80" x 2", so they are very sturdy.  However, they are supported with "banquet table legs" (i.e., the sort of legs that you find on "folding tables").  They do fine supporting the weight of the table itself (about 65 pounds) and the few I/O devices thereon (spaceball, keyboard, tablet, etc.).  But, add another 150 pounds to each table (2 CRTs per workstation) and suddenly the idea of being wedged between assorted bits of equipment, surrounded by (live) wires while ~250 pounds hangs over my head on flimsy legs -- THAT YOU KEEP BUMPING ACCIDENTALLY -- doesn't sound like a good plan for long term, hospital-free living!  :<  Even *worse* if some bit of kit managed to get damaged in such a collapse!
So, I replaced each with a 21" LCD and feel a bit less nervous each time I crawl under there.  :>  And, I've used the space *behind* the LCDs for extra storage: spindles of blank optical media, external USB/FW tape/disk devices, photo printers, etc.
> something else on the "to be replaced" list, not least because its
> got to the point it's taking a good three or four minutes to warm
> up.  ;-)
Ah.  I always found the sound of the degaussing coil particularly threatening -- as if each hum/thump was going to be followed by a puff of magic smoke, etc.
>>> I haven't played with the Unichrome's MPEG decoder on the Neowares
>>> yet, but when that server still had a head (and with a slightly
>>> less capable Unichrome chip) it worked well.  OTOH I've no idea if
>>> that hardware acceleration is network transparent.  I suspect it
>>> may not be.
>>
>> I think you would have to be able to export (import)
>> a virtual frame buffer.  Doubtful that they bothered
>> with that sort of support.  I'd be curious to try
>> something like that with the Sun Ray architecture!
>
> I thought exactly the same thing.  It'd be interesting to try
> running a _local_ copy of Xine (about the only thing that supports
> the hardware acceleration).  It may only be an 800MHz CPU on the
> CA21 but if my experiences on the 600MHz server are anything to go
> by it may not be that bad.  It's something I'll keep on the "play
> with someday" pile, but someday isn't anytime soon.
<grin> As CCR would say, "someday never comes" ;-)