On 2014-06-13, David Brown <david.brown@hesbynett.no> wrote:
> On 13/06/14 16:35, Grant Edwards wrote:
>> Oh good, I was hoping we'd get to this part:
>>
>>    print ("0xff,"*32 + "\n") * 32
[...]
> I usually prefer list comprehensions. (I've made the code a bit more
> general here).

[Example that uses list comprehensions and includes the "special
values" and curly braces and whatnot.]

If this were comp.lang.python, I'd be obliged to post a version that
uses iterators and itertools instead of list comprehensions, but we'll
spare the denizens of c.a.e...

-- 
Grant Edwards               grant.b.edwards        Yow! I've got a COUSIN
                                  at               who works in the GARMENT
                              gmail.com            DISTRICT ...
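Grant stops short of posting the itertools version he alludes to. For the curious, a minimal sketch of what such a version might look like, assuming the same 32x32 block of "0xff" entries as his one-liner (the exact style he had in mind is of course unknown):

```python
import itertools

# Hypothetical itertools rendition of print(("0xff,"*32 + "\n") * 32):
# build each line lazily from repeated "0xff" tokens instead of using
# string multiplication.
rows, cols = 32, 32
line = ", ".join(itertools.repeat("0xff", cols))   # one line of 32 values
block = "\n".join(itertools.repeat(line, rows))    # 32 such lines
print(block)
```

The output differs only cosmetically (separating commas rather than trailing ones) from the original one-liner.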
filling remaining array elements with fixed value
Started by ●June 12, 2014
Reply by ●June 13, 2014
Reply by ●June 13, 2014
On Fri, 13 Jun 2014 17:03:59 +0200 David Brown
<david.brown@hesbynett.no> wrote:

> I usually prefer list comprehensions. (I've made the code a bit more
> general here).
>
> datalength = 1024
> linelength = 32
>
> # Take special values, pad with lots of 0xff, then truncate
> data = ([0x0a, 0x0b, 0x0c] + [0xff] * datalength)[:datalength]
>
> # Split data array into array of line chunks
> datablocks = [data[i : i + linelength] for i in
>               range(0, len(data), linelength)]
>
> # Each line is formatted as a string
> lines = [", ".join(["0x%02x" % x for x in row]) for
>          row in datablocks]
>
> # Put together the lines along with tabs, newlines, etc.
> output = "{\n\t" + ",\n\t".join(lines) + "\n};\n"
>
> # Display output (could also be written to a file)
> print output

Even that's a bit tricky. The compiler doesn't need the linebreaks, and
at the end of the day, conceptually you've got an array of all 0xFF
with a couple of exceptions. So build the array that way:

data = ['0xFF'] * 1024
data[0:3] = ['0x0A', '0x0B', '0x0C']
assert len(data) == 1024
print("""
/* Automatically generated document, do not edit. */
#include <inttypes.h>

uint8_t data[] = {{
{0}
}};
""".format(','.join(data)))

Memory and processor cycle inefficient as all hell. Still don't care.

-- 
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order. See above to fix.
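Rob's fill-then-patch snippet targets Python 2 and leans on doubled braces inside `.format()`. A self-contained Python 3 rendition of the same idea might look like the following (the header comment and array name are illustrative, not from the thread):

```python
# Fill-then-patch, per Rob's post: start with 1024 entries of 0xFF,
# overwrite the few special leading values, then emit a C array on
# one long line -- the compiler doesn't care about linebreaks.
datalength = 1024

data = ["0xFF"] * datalength
data[0:3] = ["0x0A", "0x0B", "0x0C"]
assert len(data) == datalength

output = (
    "/* Automatically generated file, do not edit. */\n"
    "#include <inttypes.h>\n\n"
    "uint8_t data[] = {\n"
    + ",".join(data)
    + "\n};\n"
)
print(output)
```

Using plain string concatenation instead of `.format()` avoids having to escape the C braces at all.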
Reply by ●June 13, 2014
On 14-06-13 08:11 , Don Y wrote:
> Hi Niklas,
>
> On 6/12/2014 9:05 PM, Niklas Holsti wrote:
>> On 14-06-13 03:09 , Mark Curry wrote:
>
>>> We currently have a setup to do nightly builds of all our code. We've
>>> seriously considered, but haven't pulled the trigger yet, on also
>>> setting up a build on a virtual machine. This build on the virtual
>>> machine wouldn't happen as often, but the virtual machine snapshot
>>> would theoretically capture "everything". The virtual machine
>>> snapshot could then be checked into revision control.
>>>
>>> Sounds like overkill, but in some industries, being able to
>>> faithfully rebuild something 5, 10, 15+ years down the line could be
>>> useful...
>>
>> How sure are you that your virtual machine snapshot, taken in 2014 on
>> your current PC and hypervisor, will run on your brand-new PC in the
>> year 2029?
>
> "Sure"? <grin> How sure are you that the host OS, VM vendor, tool
> vendor, silicon vendor, etc. will be *around* at that time?

Very unsure, of course, which was my point: having a virtual machine
snapshot from 2014, virtualizing a 2014 machine, will not help me in
2029, if there are no machines/hypervisors that can run that snapshot.

David Brown advised using KVM for virtualization, because KVM can
"cross-virtualize", for example running an x86 VM in emulation on a
processor of a different architecture. I will look into that, thanks
David!

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
      .      @       .
Reply by ●June 13, 2014
(I tried to change the Subject to something more appropriate, hope it
works.)

On 14-06-13 10:23 , David Brown wrote:
> On 13/06/14 06:05, Niklas Holsti wrote:
>> On 14-06-13 03:09 , Mark Curry wrote:
>>> In article <lnd9ll$nja$1@dont-email.me>,
>>
>>> ...thread drift...
>>
>>> We currently have a setup to do nightly builds of all our code. We've
>>> seriously considered, but haven't pulled the trigger yet, on also
>>> setting up a build on a virtual machine. This build on the virtual
>>> machine wouldn't happen as often, but the virtual machine snapshot
>>> would theoretically capture "everything". The virtual machine
>>> snapshot could then be checked into revision control.
>>>
>>> Sounds like overkill, but in some industries, being able to
>>> faithfully rebuild something 5, 10, 15+ years down the line could be
>>> useful...
>>
>> How sure are you that your virtual machine snapshot, taken in 2014 on
>> your current PC and hypervisor, will run on your brand-new PC in the
>> year 2029?
>>
>> As I understand them, what are called "virtual machines" on PCs only
>> virtualize as little of the machine as is necessary to support
>> multiple OS's on the same hardware, but are not full emulations of the
>> PC processor and I/O. I have not seen any promises from hypervisor
>> vendors to support 15-year-old VM snapshots on future PC
>> architectures, which may be quite different.
>>
>> This question is of interest to me because I am working on projects
>> with maintenance foreseen until the 2040's. Some people have suggested
>> virtual machines as the solution for keeping the development tools
>> operational so long, but I am doubtful.
>
> I would recommend a few things here.

Thanks, David, for your helpful answer.

> First, consider using raw hard disk images rather than specific
> formats and containers - the tools for working with raw images will
> always be around (a loopback mount in Linux is usually all you need).
> Most hypervisors and virtual machines can work with that.

Today they will... but will they, in 2029, or 2040? I am unsure.

> Secondly, aim to use KVM on Linux as your hypervisor. I haven't used
> it myself - I use either VirtualBox for full emulation or OpenVZ for
> lightweight emulation. But KVM can emulate a lot more than other
> systems. While it is most efficient when the target and the host cpu
> are the same, KVM can handle a mismatch, using QEMU as a cpu emulator
> when necessary. If Intel goes bankrupt and your 2040 machine runs on
> PowerPC chips, KVM will let you run your x86 virtual machine images.

This emulation ability is certainly a step towards a solution.

> Also, KVM is entirely open source. You won't have to face vendors in
> 20 years time asking for old licenses for their old products - you can
> archive Linux and KVM (it is in the kernel, but there are usermode
> tools as well) as both source code and installable media, and rebuild
> machines in the future.

As I understand your suggestion, it involves the following steps that I
should do to set up a long-term maintenance system that does not assume
survival of the current host-PC architecture and OS until 2040:

1. Find a virtual HW composition, probably based on x86, that:
   a) QEMU can emulate, and
   b) is supported by the OSes on which our tools run, and
   c) runs our tools, too.

2. Configure KVM+QEMU to emulate this virtual HW.

3. Install our OSes and tools on a VM using this virtual, emulated HW.

4. Maintain KVM and QEMU, using their source code, to keep them working
   on future host PCs, and preserving their ability to emulate the HW
   composition defined in step 1.

This looks possible in principle. Far from easy, though.

> And do as much as you possibly can with open source software - both on
> the hosts and inside the virtual machines.

Good advice, but unfortunately not fully possible for us, because the
customers require us to use some closed-source tools. Fortunately, the
compiler is open source (GNAT Pro).

> And then archive a physical machine or two as well, just to be safe :-)

The project in question is part of the ESA/EUMETSAT Meteosat Third
Generation programme, which intends to build six satellites of two
different types, but will keep only one or two in orbit at any given
time -- the rest of the built satellites will be stored (i.e.
"archived") and launched later, as and when the flying ones are
retired.

Yes, we plan to archive some physical computers for the development and
maintenance environment, but we will do that as late in the project as
possible -- when the *next* generation of PCs no longer supports our
(frozen) tools.

-- 
Niklas Holsti
Tidorum Ltd
niklas holsti tidorum fi
      .      @       .
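Step 2 of Niklas's plan can be made a little more concrete. A hypothetical launcher script for the archived system follows; the image file name and guest RAM size are made up for illustration, and only long-standing QEMU options are used (`qemu-system-x86_64` falls back from KVM acceleration to pure emulation when the future host is not x86):

```python
import shlex

# Hypothetical step-2 invocation: boot the archived raw disk image under
# full-system x86 emulation. We only assemble and display the command
# here; pass `cmd` to subprocess.run() to actually boot the VM.
image = "tools-vm.img"   # raw image, per David's "raw hard disk" advice

cmd = [
    "qemu-system-x86_64",
    "-m", "1024",                           # guest RAM in MiB
    "-drive", f"file={image},format=raw",   # the archived raw image
]
print(shlex.join(cmd))
```

Keeping the invocation in a small, archived script like this is itself part of step 4: the exact emulated HW composition stays documented next to the image.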
Reply by ●June 13, 2014
Don Y <this@is.not.me.com> writes:
> Hi Randy,
>
> On 6/13/2014 5:41 AM, Randy Yates wrote:
>> David Brown <david.brown@hesbynett.no> writes:
>
> [attrs elided]
>
>>>>>>>>> I go along with the others who suggested that you write some
>>>>>>>>> C/C++ code to generate this code. I've done that many times and
>>>>>>>>> it works well.
>>>>>>>>
>>>>>>>> I find it's usually _way_ faster to write a Python program to
>>>>>>>> generate such things, but YMMV.
>>>>>>>
>>>>>>> Yeah, sure, python, perl, common lisp, scheme, erlang, c, c++,
>>>>>>> etc. - pick yer' poison.
>>>>>>
>>>>>> Write it in the same language that you are compiling -- that way
>>>>>> you *know* you have THAT tool available wherever you happen to
>>>>>> maintain the codebase (instead of having to have *two* tools).
>>>>>
>>>>> I don't understand. If I'm writing in C for the '430, how does that
>>>>> guarantee I have C for the development host?
>>>>
>>>> It doesn't guarantee that you have it for the host. But, neither
>>>> does it guarantee that you have python, perl, sh, etc. for the host!
>>>> What it *does* guarantee is that *you* will know how to write C
>>>> for that host (more or less)! It doesn't guarantee that you will
>>>> be able to write a perl script *if* you happened to have perl
>>>> available to you on *that* host (e.g., none of my windows hosts
>>>> have perl installed).
>>>
>>> I have written embedded C for 20 years - but I would not be confident
>>> about using C on the host for something involving a lot of string
>>> manipulation and formatting. It's a different skill set, even though
>>> it is still C.
>>
>> What C programmer (embedded or otherwise) doesn't know how to use
>> printf()?
>
> Agreed. "Backing up" (driving in reverse) is a different skill set
> than driving forward -- yet, someone who advocated getting out of the
> car and *walking* (because he was more sure of his ability to do that)
> would leave me suspect: "And when are you going to LEARN to back up?"
>
>>> And I know plenty of embedded programmers who have no idea
>>> how to make a host-run C program at all.
>>
>> Wha..? You've GOT to be joking! I think there are 5th-graders who know
>> how to take a C program from the net and get it compiled.
>
> But, they'll be able to "make" a host-run *python* program?
>
>>> So no, you don't have such a guarantee.
>>>
>>> But I can give a guarantee* that a competent embedded C programmer
>>> will pick up the basics of Python quickly, and write string
>>> manipulation and formatting code faster than learning to write host
>>> C code for the same job. And the resulting scripts will be cleaner,
>>> faster to develop, and easier to maintain.
>>
>> Your statement is irrational. A competent C programmer already knows
>> C. If one uses Python, you will need time for learning the language
>> AND implementing the required formatting.
>
> Why limit it to Python? Why not let the developer use whatever
> *he* thinks is appropriate for the task at hand? And, force everyone
> else in the organization to come up to speed with *his* choice of
> tools for *that* project? (of course, *he* will also have to come
> up to speed with the tools that his colleagues are using for *their*
> projects... "fair is fair")
>
> This was the siren's song that I fell for early on in my "independent"
> career -- picking the right tools FOR ME as I had no "arbitrary"
> constraints placed on me by "Corporate", development costs (at least
> those reflected in my monthly billing) were *very* obvious, etc. so
> why not pick the "most efficient" (for me) way to get the project
> "done"? Piece together whatever tools make sense (for *me*!) to
> get the job done...
>
> Ah, but now the client -- running an MS shop -- grumbles because he
> doesn't have all those tools available that I do in my UNIX+MS
> shop! Do I try to be a religious zealot and convince him he
> *should* have them (from a purely technical argument)?
>
> "Gee, 5-10 years from now, you'll be able to get *some* of
> this stuff for free! There will be all these FOSS OS's
> available (in various levels of maturity) to choose from...
> why not get on the band-wagon *now*? MS is such a loser OS
> and, by association, anything that *runs* under it must be
> as well..."
>
> Look at some of the FOSS projects and the hodgepodge of tools
> they (somewhat arbitrarily) rely upon for proof of this. I.e.,
> it's *fine* -- if you want to drink the koolade...
>
>> I claim that a competent embedded C programmer will write such scripts
>> more quickly in C than in Python, and that it is more sensible from a
>> maintenance perspective to keep everything in the same language.
>
> Maybe others have better luck finding well-rounded "coders" -- that
> are confident/competent writing code that typically runs in a
> desktop environment *and* an embedded one, in different languages,
> along with some familiarity with (embedded) hardware, etc.
>
> Should I write the "converter" for my text-to-phoneme algorithm
> in LISP as it is ideally suited for that task? Will someone
> down the road be able to change the input ruleset and *know*
> how to verify that the converter has accurately done its job?
>
> [I recall encountering gnuplot ports where the regression tests
> wouldn't pass. They would *run* to completion -- but, the
> resulting plots were obviously wrong! Unfortunately, the folks
> doing the port didn't understand what they were trying to "plot"
> so they were unable to determine that the plots were incorrect and,
> as such, the *port* was flawed! "Gee, mathematical functions...
> you'd think folks would know what they all looked like!"]
>
> But, from my experience (and most of the grumbling I hear from my
> associates), finding someone who knows *a* language *well* is a
> significant challenge. Expecting him (her) *and* the rest of the
> staff tasked with reviewing his code to know *several* seems like
> a recipe for disaster. I can already see those folks nodding their
> heads at a design review (for fear of showing the shallowness of
> their knowledge of yet-another tool) instead of being able to
> actively criticize the implementation. How much more honest will
> they be with their abilities when tasked with *maintaining* it?
>
> *Especially* if the individual who built this multi-tool environment
> is "qualified"! ("Gee, he acts as if all of this stuff is 'obvious';
> do I want to show my ignorance by questioning something he's done?")
>
> <shrug> I think folks have different experiences based on the
> environments in which they develop. I've learned to expect *less*
> flexibility in my environment rather than more (at least if I
> wanted to do less *rework*!)
>
> YMMV.

Hi Don,

I think we're in violent agreement. :)

Within the last six months I was asked to come on-board a project that
required porting from one assembly language to another that was rife
with very tedious macros (another tool "feature" that can be grossly
misapplied, IMO), and for whom the lead engineer used several tools he
was adept with to build and test the project, including his own
customized scheme interpreter, gnu make, a central code autogenerator
based on awk, etc. And all this on XP using MS Visual SourceSafe!

It was a nightmare.

-- 
Randy Yates
Digital Signal Labs
http://www.digitalsignallabs.com
Reply by ●June 14, 2014
Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
> (I tried to change the Subject to something more appropriate, hope it
> works.)
>
> On 14-06-13 10:23 , David Brown wrote:
>> On 13/06/14 06:05, Niklas Holsti wrote:
>>> On 14-06-13 03:09 , Mark Curry wrote:
>>>> In article <lnd9ll$nja$1@dont-email.me>,
>>>
>>>> ...thread drift...
>>>
>>>> We currently have a setup to do nightly builds of all our code.
>>>> We've seriously considered, but haven't pulled the trigger yet, on
>>>> also setting up a build on a virtual machine. This build on the
>>>> virtual machine wouldn't happen as often, but the virtual machine
>>>> snapshot would theoretically capture "everything". The virtual
>>>> machine snapshot could then be checked into revision control.
>>>>
>>>> Sounds like overkill, but in some industries, being able to
>>>> faithfully rebuild something 5, 10, 15+ years down the line could
>>>> be useful...
>>>
>>> How sure are you that your virtual machine snapshot, taken in 2014
>>> on your current PC and hypervisor, will run on your brand-new PC in
>>> the year 2029?
>>>
>>> As I understand them, what are called "virtual machines" on PCs only
>>> virtualize as little of the machine as is necessary to support
>>> multiple OS's on the same hardware, but are not full emulations of
>>> the PC processor and I/O. I have not seen any promises from
>>> hypervisor vendors to support 15-year-old VM snapshots on future PC
>>> architectures, which may be quite different.
>>>
>>> This question is of interest to me because I am working on projects
>>> with maintenance foreseen until the 2040's. Some people have
>>> suggested virtual machines as the solution for keeping the
>>> development tools operational so long, but I am doubtful.
>>
>> I would recommend a few things here.
>
> Thanks, David, for your helpful answer.
>
>> First, consider using raw hard disk images rather than specific
>> formats and containers - the tools for working with raw images will
>> always be around (a loopback mount in Linux is usually all you need).
>> Most hypervisors and virtual machines can work with that.
>
> Today they will... but will they, in 2029, or 2040? I am unsure.

In the event of your virtualization software no longer running the VM,
or no longer running on the hardware of the day... Could you not run
the obsolete virtualization software as a virtual machine on the new
virtualization software? :)

-- 
John Devereux
Reply by ●June 14, 2014
On Fri, 13 Jun 2014 07:05:42 +0300, Niklas Holsti
<niklas.holsti@tidorum.invalid> wrote:

>As I understand them, what are called "virtual machines" on PCs only
>virtualize as little of the machine as is necessary to support multiple
>OS's on the same hardware, but are not full emulations of the PC
>processor and I/O.

Correct: only certain privileged instructions are trapped and emulated.

>I have not seen any promises from hypervisor vendors
>to support 15-year-old VM snapshots on future PC architectures, which
>may be quite different.

It's reasonable to worry that new chips won't support some mode that
you need going forward, but consider that Intel's "i" series processors
today can software emulate a Pentium/MMX faster than the actual chip
ever ran. As long as new chips retain ISA compatibility, or there is a
decent emulator available, there should not be a problem. At worst, you
might need to run the VM software on top of the emulator.

As an example, VMware still has downloadable "player" versions [which
run VMs but don't create them] for every generation of their software.
If you need to run a v1.1 VM created for an 80386, you can. These
players can be run on top of QEMU or Bochs x86 emulators.

>This question is of interest to me because I am working on projects with
>maintenance foreseen until the 2040's. Some people have suggested
>virtual machines as the solution for keeping the development tools
>operational so long, but I am doubtful.

Long term, it's more likely that you can keep a VM in service than an
actual computer. I know DonY has had good luck keeping old machines
going for decades, but in my experience, his experience is unusual.

If you are dealing only with software tools and don't need to keep
special hardware, then VMs definitely are the way to go.

If you do need to keep hardware, remember that every successful bus
architecture ever made is still available in an industrial backplane.
It may cost you a limb, but it's possible to keep all your old
bus-tied hardware and still run your software in a VM on a modern CPU.
You may be able to combine your old development systems into (perhaps
many) fewer boxes, though if you have incompatible hardware setups you
may need to use a bare-metal hypervisor rather than an OS-hosted one.

YMMV,
George
Reply by ●June 14, 2014
Hi Randy,

On 6/13/2014 3:15 PM, Randy Yates wrote:
> I think we're in violent agreement. :)
>
> Within the last six months I was asked to come on-board a project that
> required porting from one assembly language to another that was rife
> with very tedious macros (another tool "feature" that can be grossly
> misapplied, IMO), and for whom the lead engineer used several tools he
> was adept with to build and test the project, including his own
> customized scheme interpreter, gnu make, a central code autogenerater
> based on awk, etc. And all this on XP using MS Visual SourceSafe!
>
> It was a nightmare.

I think, to some degree, it's "only natural" for folks to come up with
solutions that fit *their* abilities/visions/expectations/etc. "Why
*personally* take on extra work/risk for no 'personal' gain?"

In the corporate setting, you're not rewarded for anticipating future
needs -- just meet the deadline/target cost/etc. If you comply with any
*explicit* requirements on your methodology, then you're golden.

In the "independent" setting, you face similar (though different)
constraints. E.g., if I give a fixed bid, then any "extra costs" above
whatever the "cheapest/quickest" way I can do it come out of *my*
pocket (extra time and/or expense). On a T&M job, the costs of "doing
it right" (whatever *that* means!) get passed directly to the client --
in a very *obvious* manner ("Why am I paying you to buy all these
tools and come up with these 'elaborate' schemes? Can't you just
use...?")

In my case, I only contractually agree to provide sources, schematics
and hardware prototypes as deliverables. How I *get* to that point is
entirely up to me! And, as I only agree to support a design to the
extent of *bug* fixes (i.e., I make no guarantees as to my willingness
to take on enhancements, derived products, etc.), I just have to make
sure whatever approach I use is viable for the duration of the support
aspect of the contract (of course, bugs can, theoretically, turn up at
*any* future time -- a flaw in my contracts! :< )

But, it often (esp. in the FOSS world) appears that *no* consideration
for others is made in the choice of tools. E.g., writing something in
perl to do what a sed script could just as easily do. Or, using the
"newest" compressor when the savings over a more traditional compressor
are negligible ("Yippee! You saved 2KB! That will trim my download
time by a few milliseconds and save me half of a 4KB disk block!") Or,
some convoluted build scheme (e.g., some even require you to use
on-line servers to do the build) instead of just "make" (and, let's not
forget all the variations on make!). Jaluna's build system was, by far,
one of the most needlessly complex!

And, even if you buy into the reasoning for whatever choices the
original/previous developer made, there's never anything that describes
the process and/or *why* it is (or needs to be) the way it is! These
folks are the ones who should be *forced* to perform some maintenance
aspect on *their* project 5 or 10 years after release ("What do you
mean, you can't do it? Didn't *you* come up with this scheme? If
*anyone* should be able to do it, it should be *you*, right?" :> )

On a related -- though different -- note, I have not yet found a *good*
way to provide a "roadmap" to the code in my projects. E.g., for the
hardware, I can draw a block diagram (or, do a hierarchical design)
that shows "The Whole" and lets the viewer drill down to the detail of
interest. But, I've not been able to come up with a similar mechanism
for software. Especially when it's a "system" and not "just a program".
I.e., *after* you understand the system and have had some experience
navigating the codebase, you can *probably* find your way around. But,
when exposed to it *cold*, it's just too overwhelming: "Where do I
start?"
Reply by ●June 14, 2014
Hi Niklas,

On 6/13/2014 10:17 AM, Niklas Holsti wrote:
>>> How sure are you that your virtual machine snapshot, taken in 2014
>>> on your current PC and hypervisor, will run on your brand-new PC in
>>> the year 2029?
>>
>> "Sure"? <grin> How sure are you that the host OS, VM vendor, tool
>> vendor, silicon vendor, etc. will be *around* at that time?
>
> Very unsure, of course, which was my point: having a virtual machine
> snapshot from 2014, virtualizing a 2014 machine, will not help me in
> 2029, if there are no machines/hypervisors that can run that snapshot.
>
> David Brown advised using KVM for virtualization, because KVM can
> "cross-virtualize", for example running an x86 VM in emulation on a
> processor of a different architecture. I will look into that, thanks
> David!

The essence of the problem is that *someone* must provide the support
for whatever tools -- actual hardware, software, emulation, etc. If you
are "lucky" (or very conservative) and pick something that *stays*
"mainstream", then your *chances* are good of benefiting from SOMEONE
ELSE providing that support (*ignorant* of your needs). OTOH, if you
are *unlucky* and make a choice that the "market" eventually abandons,
then you need to be in a position to "support yourself".

There have been a lot of different processors, languages, etc. in the
past 15 years (reflecting your 15-year timeframe backwards). How many
of them are still "supported"? Can you find a 68040 emulator? 99000?
32000? Z380? etc. (*other* than "hobbyist" attempts)

Supporting *compilers* (assemblers, linkage editors, etc.) is almost
always possible -- even if you have to roll your own. These are just
"text processing" applications, of sorts. And, if push comes to shove,
they needn't be very *speedy* (e.g., preserve their binaries and
documentation for the CPU/OS on/in which they execute and you can
always write a *simulator* that can be dog-slow as it slogs through
the executable).

Interactive applications (tools) are the big risk. Not just because
they can be tedious to use "at reduced speed" (run your IDE on a 100MHz
PC someday and see how much fun *that* is! :> ). But, also, because
(IME) many desktop apps and toolkits (libraries) have inherent races
that you *aren't* victimized by solely because the machine is fast
enough to make these "critical regions" small enough that you don't
encounter them (often). Slow the processor down and, suddenly, those
regions grow to a size where your "human speed" actions can easily trip
them up.

But, by far, the *biggest* risk is the actual silicon itself. Can you
be sure to find components 5, 10, 15 years hence? Will those components
be the *same*, functionally, as the ones you have specified today? Are
there aspects of your design that subtly (invisibly?) rely on some
characteristic of *these* devices OF WHICH YOU MAY NOT BE AWARE?

[I was asked to come up with a new design for a *hand tool* many years
ago because a change in vendors had caused one of the new components
for the *old* design to be BETTER than it had been, previously. As a
result, the production line was unable to build the old design as it
had adapted to the flaws in the old component!]

If you do a "big buy" and warehouse the "spares", are you sure they are
operational at the time of purchase? Will your storage techniques
ensure their *continued* functionality years later? What if your
warehouse catches fire? While you can cheaply duplicate your sources,
binaries, tools, etc., keeping an off-site backup of your *inventory*
essentially means *doubling* your inventory! <frown>

Long-term support is not an enviable position to be in. Ideally, make
it someone else's problem! :>

--don
Reply by ●June 14, 2014
Hi George,

On 6/14/2014 2:45 AM, George Neuner wrote:
>> This question is of interest to me because I am working on projects
>> with maintenance foreseen until the 2040's. Some people have
>> suggested virtual machines as the solution for keeping the
>> development tools operational so long, but I am doubtful.
>
> Long term, it's more likely that you can keep a VM in service than an
> actual computer. I know DonY has had good luck keeping old machines
> going for decades, but in my experience, his experience is unusual.

DonY had to start preparing for long-term support long before
emulators, hypervisors, etc. were available. And, do so *without* a
"support staff" onto which he could pass that responsibility. :>

I would discourage archiving hardware just because it *is* hard to keep
it running. Especially vintage '80s hardware (where things were far
less "standardized" than today's machines). And, there *are*
alternatives, today.

> If you are dealing only with software tools and don't need to keep
> special hardware, then VMs definitely are the way to go.

I started playing with this option (at your suggestion, George) many
months ago. I'm now at a point in my career where I can shed those
support *requirements* and see how I *might* have done things.

One thing I discovered was it is much easier to set up a machine that
*just* runs VMs (than to try to have that capability alongside your
"regular workstation tools"). So, I set aside one of the smaller
servers for that role. And, in keeping with my preference for small
spindles, I've opted to just build "small systems" on individual
*removable* ~140GB drives. So, I can pull a "system" and set it on the
shelf, "cold" (instead of leaving the drive on-line where it can be
the victim of a power glitch or careless "rm -r *", etc.)

Unfortunately, it takes a *lot* of time to set up all these VMs and the
various tools that the tools *in* each of them require! I've had to
rethink how to partition the "systems" so I don't end up having to add
-- and maintain -- Tool_X to several different VMs.

I haven't been able to sort out if I can run multiple VMs with the
*illusion* of a single unified desktop (e.g., have schematic and PCB
tools in one VM yet *see*/manipulate those objects as well as "drag and
drop" between that and another VM hosting software devel tools).

(Having a personal IT department would be *so* nice!)

> If you do need to keep hardware, remember that every successful bus
> architecture ever made is still available in an industrial backplane.
> It may cost you a limb, but it's possible to keep all your old bus
> tied hardware and still run your software in a VM on a modern CPU.
> You may be able to combine your old development systems into (perhaps
> many) fewer boxes, though if you have incompatible hardware setups
> you may need to use a bare metal hypervisor rather than an OS hosted
> one.

Keep in mind any devices used for your development activities that sit
*outside* the "PC" also are of concern! If you can't "talk" to your
target at some future date, some of those most precious tools that run
*on* the PC may be of little use!