
filling remaining array elements with fixed value

Started by blisca June 12, 2014
On 14-06-14 22:53 , Don Y wrote:
> Hi Niklas,
>
> On 6/13/2014 10:17 AM, Niklas Holsti wrote:
>>>> How sure are you that your virtual machine snapshot, taken in 2014 on
>>>> your current PC and hypervisor, will run on your brand-new PC in the
>>>> year 2029?
>>>
>>> "Sure"? <grin> How sure are you that the host OS, VM vendor, tool
>>> vendor, silicon vendor, etc. will be *around* at that time?
>>
>> Very unsure, of course, which was my point: having a virtual machine
>> snapshot from 2014, virtualizing a 2014 machine, will not help me in
>> 2029, if there are no machines/hypervisors that can run that snapshot.
>>
>> David Brown advised using KVM for virtualization, because KVM can
>> "cross-virtualize", for example running an x86 VM in emulation on a
>> processor of a different architecture. I will look into that, thanks
>> David!
>
> The essence of the problem is that *someone* must provide the support
> for whatever tools -- actual hardware, software, emulation, etc.
Yes -- and saying "just use a virtual machine", as some have said to me (outside USENET, I mean) is not enough.

In my case, I only need to keep a SW maintenance environment (compiler, linker, testing tools) working. An emulator for the host computer could be sufficient, and should not be too hard to maintain if it is written portably in a standard mainstream language and does not rely on specific HW support.

It seems that current hypervisors require some specific virtualization support from their host processors, which could become a problem in the long term. The KVM website says that it requires virtualization HW, but I'm not sure if that also applies to the KVM+QEMU combination. On the other hand, perhaps QEMU alone is sufficient -- I won't need to run multiple VMs on the same host.
> Supporting *compilers* (assemblers, linkage editors, etc.) is
> almost always possible -- even if you have to roll your own.
I would not want to roll my own Ada compiler, however. (Well, actually I would like to do that, but it would be too expensive and/or take too long.)
> These are just "text processing" applications, of sorts. And,
> if push comes to shove, they needn't be very *speedy* (e.g.,
> preserve their binaries and documentation for the CPU/OS on/in
> which they execute and you can always write a *simulator*
> that can be dog-slow as it slogs through the executable).
But the simulator/emulator has to be complete and accurate enough to run the operating system on which the compiler/linker or other tool runs. That requires simulation of a whole computer, not just a processor. QEMU can simulate some systems, but I don't yet know if it can simulate a system on which my development OS and tools can run. Of course I could extend QEMU...
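To be fair, the bare fetch-decode-execute core of such a dog-slow simulator is the easy part; as noted above, the OS and peripherals around it are what take the effort. A minimal sketch of just the core, for a made-up toy accumulator machine (not any real ISA):

#include <stdint.h>
#include <stdio.h>

enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2 };  /* toy opcodes */

static uint8_t mem[256];  /* simulated memory, preloaded with guest code */

int main(void)
{
    uint8_t pc  = 0;       /* program counter */
    uint8_t acc = 0;       /* accumulator */

    mem[0] = OP_LOAD; mem[1] = 40;   /* acc = 40 */
    mem[2] = OP_ADD;  mem[3] = 2;    /* acc += 2 */
    mem[4] = OP_HALT;

    for (;;) {                        /* fetch-decode-execute loop */
        uint8_t op = mem[pc++];       /* fetch */
        switch (op) {                 /* decode and execute */
        case OP_LOAD: acc = mem[pc++];  break;
        case OP_ADD:  acc += mem[pc++]; break;
        case OP_HALT: printf("acc = %d\n", acc); return 0;
        default:      fprintf(stderr, "bad opcode %d\n", op); return 1;
        }
    }
}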
> Interactive applications (tools) are the big risk.
Fortunately I won't need any such.
> But, by far, the *biggest* risk is the actual silicon itself.
Fortunately, again, that is not my problem. In fact, all the target systems will be built soon, and those that are not deployed at once will be moth-balled for future use. In nitrogen at controlled temperature and humidity, I believe.

--
Niklas Holsti
Tidorum Ltd

niklas holsti tidorum fi
      .      @       .
Hi Niklas,

On 6/14/2014 2:08 PM, Niklas Holsti wrote:
> On 14-06-14 22:53 , Don Y wrote:
>> On 6/13/2014 10:17 AM, Niklas Holsti wrote:
>>>>> How sure are you that your virtual machine snapshot, taken in 2014 on
>>>>> your current PC and hypervisor, will run on your brand-new PC in the
>>>>> year 2029?
>>>>
>>>> "Sure"? <grin> How sure are you that the host OS, VM vendor, tool
>>>> vendor, silicon vendor, etc. will be *around* at that time?
>>>
>>> Very unsure, of course, which was my point: having a virtual machine
>>> snapshot from 2014, virtualizing a 2014 machine, will not help me in
>>> 2029, if there are no machines/hypervisors that can run that snapshot.
>>>
>>> David Brown advised using KVM for virtualization, because KVM can
>>> "cross-virtualize", for example running an x86 VM in emulation on a
>>> processor of a different architecture. I will look into that, thanks
>>> David!
>>
>> The essence of the problem is that *someone* must provide the support
>> for whatever tools -- actual hardware, software, emulation, etc.
>
> Yes -- and saying "just use a virtual machine", as some have said to me
> (outside USENET, I mean) is not enough.
Of course -- that just changes the problem from "how do I support my tools" to "how do I support the VM that my tools *rely* upon".
> In my case, I only need to keep a SW maintenance environment (compiler,
> linker, testing tools) working.
Is *all* of your testing done without dealing with the "real world"? I.e., passing test cases (const's) to the UUT and verifying the results are "as expected"?
> An emulator for the host computer could
> be sufficient, and should not be too hard to maintain if it is written
> portably in a standard mainstream language and does not rely on specific
> HW support.
>
> It seems that current hypervisors require some specific virtualization
> support from their host processors, which could become a problem in the
> long term. The KVM website says that it requires virtualization HW, but
> I'm not sure if that also applies to the KVM+QEMU combination. On the
> other hand, perhaps QEMU alone is sufficient -- I won't need to run
> multiple VMs on the same host.
I think (speaking without detailed knowledge of your specifics) that QEMU or similar "simulator" can probably do the job for you. The problem then becomes ensuring that QEMU will run on "whatever" a workstation looks like in 2029!
>> Supporting *compilers* (assemblers, linkage editors, etc.) is
>> almost always possible -- even if you have to roll your own.
>
> I would not want to roll my own Ada compiler, however. (Well, actually I
> would like to do that, but it would be too expensive and/or take too long.)
>
>> These are just "text processing" applications, of sorts. And,
>> if push comes to shove, they needn't be very *speedy* (e.g.,
>> preserve their binaries and documentation for the CPU/OS on/in
>> which they execute and you can always write a *simulator*
>> that can be dog-slow as it slogs through the executable).
>
> But the simulator/emulator has to be complete and accurate enough to run
> the operating system on which the compiler/linker or other tool runs.
No. It only needs to *emulate* the features of the OS that the applications require! E.g., it probably doesn't need to support signals (directly), timing primitives, limited IPC/pipe support, etc. It almost certainly wouldn't need to know how to talk to *real* "devices", etc. Even filesystem support could be hacked (as *you* know where all reads and writes for a particular compiler invocation should be directed!)
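As a sketch of that idea: the simulator traps the guest's OS-call instruction and services only the handful of calls a batch compiler actually makes, redirecting file I/O wherever the archivist chooses. Everything here -- call numbers, names, the guest_fs prefix -- is invented for illustration, not any real QEMU or GNAT interface:

#include <stdint.h>
#include <stdio.h>

/* Emulated OS call numbers -- hypothetical, for this sketch only. */
enum { SYS_OPEN = 1, SYS_READ = 2, SYS_WRITE = 3, SYS_EXIT = 4 };

/* The archivist knows where each compiler invocation reads and
 * writes, so a fixed prefix rewrite can be enough. */
static void map_path(const char *guest, char *host, size_t n)
{
    snprintf(host, n, "./guest_fs/%s", guest);
}

/* Called by the CPU simulator when the guest executes its
 * "system call" trap instruction. */
intptr_t handle_os_call(int nr, intptr_t a1, intptr_t a2, intptr_t a3)
{
    switch (nr) {
    case SYS_OPEN: {
        char host[512];
        map_path((const char *)a1, host, sizeof host);
        return (intptr_t)fopen(host, (const char *)a2);
    }
    case SYS_READ:   /* handle a1, guest buffer a2, length a3 */
        return (intptr_t)fread((void *)a2, 1, (size_t)a3, (FILE *)a1);
    case SYS_WRITE:
        return (intptr_t)fwrite((const void *)a2, 1, (size_t)a3, (FILE *)a1);
    case SYS_EXIT:
        fprintf(stderr, "guest exited with status %ld\n", (long)a1);
        return 0;
    default:
        /* Failing loudly shows exactly which OS feature still
         * needs emulating. */
        fprintf(stderr, "unemulated OS call %d\n", nr);
        return -1;
    }
}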
> That requires simulation of a whole computer, not just a processor. QEMU
> can simulate some systems, but I don't yet know if it can simulate a
> system on which my development OS and tools can run. Of course I could
> extend QEMU...
If you're using GNAT, then just grep the sources for all calls to the OS/filesystem/etc. (elide stdlib and its ilk and see where the "undefined references" occur)
>> Interactive applications (tools) are the big risk.
>
> Fortunately I won't need any such.
You *don't* use gdb or any other interactive tools for debugging?
>> But, by far, the *biggest* risk is the actual silicon itself.
>
> Fortunately, again, that is not my problem. In fact, all the target
> systems will be built soon, and those that are not deployed at once will
> be moth-balled for future use. In nitrogen at controlled temperature and
> humidity, I believe.
Any problem that can be someone ELSE's is preferable to problems that must be *yours*! :>

Note that there *are* groups who actually are focused on issues of preserving *media* for (VERY) long periods of time. Most of those solutions tend to require a bit of an investment, though. OTOH, that sort of investment may be acceptable to the folks underwriting your effort.

I've recently been lamenting how much "old research" is essentially "lost" due to poor preservation techniques. Even things like microfiche, which *tried* to make such preservation (of *paper*) more practical, have proven ineffective after just *decades*. It's disheartening to imagine how much will be "reinvented", needlessly, as other things "slip away" due to inattention, disinterest, etc. <frown>
On 13/06/14 20:35, Niklas Holsti wrote:
> (I tried to change the Subject to something more appropriate, hope it
> works.)
>
> On 14-06-13 10:23 , David Brown wrote:
>> On 13/06/14 06:05, Niklas Holsti wrote:
>>> On 14-06-13 03:09 , Mark Curry wrote:
>>>> In article <lnd9ll$nja$1@dont-email.me>,
>>>
>>>> ...thread drift...
>>>
>>>> We currently have a setup to do nightly builds of all our code. We've
>>>> seriously considered, but haven't pulled the trigger yet, on also
>>>> setting up a build on a virtual machine. This build on the virtual
>>>> machine wouldn't happen as often, but the virtual machine snapshot
>>>> would theoretically capture "everything". The virtual machine
>>>> snapshot could then be checked into revision control.
>>>>
>>>> Sounds like overkill, but in some industries, being able to
>>>> faithfully rebuild something 5, 10, 15+ years down the line could
>>>> be useful...
>>>
>>> How sure are you that your virtual machine snapshot, taken in 2014 on
>>> your current PC and hypervisor, will run on your brand-new PC in the
>>> year 2029?
>>>
>>> As I understand them, what are called "virtual machines" on PCs only
>>> virtualize as little of the machine as is necessary to support multiple
>>> OS's on the same hardware, but are not full emulations of the PC
>>> processor and I/O. I have not seen any promises from hypervisor vendors
>>> to support 15-year-old VM snapshots on future PC architectures, which
>>> may be quite different.
>>>
>>> This question is of interest to me because I am working on projects
>>> with maintenance foreseen until the 2040's. Some people have suggested
>>> virtual machines as the solution for keeping the development tools
>>> operational so long, but I am doubtful.
>>>
>>
>> I would recommend a few things here.
>
> Thanks, David, for your helpful answer.
>
>> First, consider using raw hard
>> disk images rather than specific formats and containers - the tools for
>> working with raw images will always be around (a loopback mount in Linux
>> is usually all you need). Most hypervisors and virtual machines can
>> work with that.
>
> Today they will... but will they, in 2029, or 2040? I am unsure.
If the day comes when we can't get a computer to read a simple file of bytes, there will be lots of bigger problems than your particular case!

I know windows tries to make that sort of thing more difficult with each generation, but fortunately we have Linux, the BSD's, and other Unix systems - these are going to be around for a long time to come, and old versions can still run fine on new hardware.

But you should make a point of sticking to mature and stable filesystems - prefer ext3 rather than btrfs, for example (FreeBSD will work with ext3, albeit without journalling, giving you a second source).
>
>> Secondly, aim to use KVM on Linux as your hypervisor. I haven't used it
>> myself - I use either VirtualBox for full emulation or OpenVZ for
>> lightweight emulation. But KVM can emulate a lot more than other
>> systems. While it is most efficient when the target and the host cpu
>> are the same, KVM can handle a mismatch, using QEMU as a cpu emulator
>> when necessary. If Intel goes bankrupt and your 2040 machine runs on
>> PowerPC chips, KVM will let you run your x86 virtual machine images.
>
> This emulation ability is certainly a step towards a solution.
>
>> Also, KVM is entirely open source. You won't have to face vendors in 20
>> years time asking for old licenses for their old products - you can
>> archive Linux and KVM (it is in the kernel, but there are usermode tools
>> as well) as both source code and installable media, and rebuild machines
>> in the future.
>
> As I understand your suggestion, it involves the following steps that I
> should do to set up a long-term maintenance system that does not assume
> survival of the current host-PC architecture and OS until 2040:
>
> 1. Find a virtual HW composition, probably based on x86, that:
>    a) QEMU can emulate, and
>    b) is supported by the OSes on which our tools run, and
>    c) runs our tools, too.
>
> 2. Configure KVM+QEMU to emulate this virtual HW.
>
> 3. Install our OSes and tools on a VM using this virtual, emulated HW.
>
> 4. Maintain KVM and QEMU, using their source code, to keep them working
>    on future host PCs, and preserving their ability to emulate the HW
>    composition defined in step 1.
>
> This looks possible in principle. Far from easy, though.
If it were easy, your customers would not be paying you big money to solve the problem! But yes, that's pretty much what I had in mind. If you want extra points, get a PPC based computer and check you can run the VM on that too. While I would not expect that in 2040 we will have PPC computers but no x86 compatibles, this would give you a second working system for extra confidence.
>
>> And do as much as you possibly can with open source software - both on
>> the hosts and inside the virtual machines.
>
> Good advice, but unfortunately not fully possible for us, because the
> customers require us to use some closed-source tools. Fortunately, the
> compiler is open source (GNAT Pro).
Well, you get as close as you can. If you can avoid dealing with node-locking, licence restrictions, etc., that will mean fewer problems to worry about.
>
>> And then archive a physical machine or two as well, just to be safe :-)
>
> The project in question is part of the ESA/EUMETSAT Meteosat Third
> Generation programme, which intends to build six satellites of two
> different types, but will keep only one or two in orbit at any given
> time -- the rest of the built satellites will be stored (i.e.
> "archived") and launched later, as and when the flying ones are retired.
> Yes, we plan to archive some physical computers for the development and
> maintenance environment, but we will do that as late in the project as
> possible -- when the *next* generation of PCs no longer supports our
> (frozen) tools.
On 6/13/2014 2:48 AM, Wouter van Ooijen wrote:
> hamilton schreef op 13-Jun-14 7:21 AM:
>> On 6/12/2014 11:20 PM, hamilton wrote:
>>> On 6/12/2014 3:24 AM, Wouter van Ooijen wrote:
>>>>> #define TEN_FFS 0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF
>>>>> const unsigned char my_array[8]={
>>>>> 0xA, 0xB, 0xC, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF
>>>>> TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS,
>>>>> TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS
>>>>> };
>>>>> #undefine TEN_FFS
>>>>
>>>> I misread, you want 1000, not 100, but that requires only two more
>>>> lines ;)
>>>>
>>> Don't you mean 200 more lines !!
>> sorry, 20 lines
>
> You think too linear. Learn to think recursive.
>
> #define TEN_FFS 0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF
> #define H_FFS TEN_FFS,TEN_FFS,TEN_FFS,TEN_FFS,TEN_FFS,\
>               TEN_FFS,TEN_FFS,TEN_FFS,TEN_FFS,TEN_FFS
> const unsigned char my_array[8]={
> 0xA, 0xB, 0xC, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF
> H_FFS,H_FFS,H_FFS,H_FFS,H_FFS,H_FFS,H_FFS,H_FFS,H_FFS,H_FFS
> };
> #undefine TEN_FFS
> #undefine H_FFS
Correct me if I am wrong, but isn't that still an array of 8 char? ;)

--

Rick
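For reference, the quoted macros have a few more problems besides the [8]: "#undefine" is not a preprocessor directive (the directive is #undef), TEN_FFS as written expands to eleven 0xFF values rather than ten, and a comma is missing after the first row of initializers. Two more conventional ways to fill the tail of a 1000-element array with 0xFF are sketched below; the range designator in the first is a GCC/Clang extension, not standard C, and the names are made up for illustration:

#include <string.h>

/* 1. GCC/Clang range designated initializers: a const (ROM-able)
 *    table can state the fill value once. Not standard C. */
const unsigned char rom_table[1000] = {
    [0] = 0x0A, [1] = 0x0B, [2] = 0x0C,
    [3 ... 999] = 0xFF        /* fill the remaining elements */
};

/* 2. Portable C: fill a RAM copy at startup with memset, then
 *    overwrite the leading elements. Costs RAM and startup time,
 *    but uses no language extensions. */
unsigned char ram_table[1000];

void init_table(void)
{
    memset(ram_table, 0xFF, sizeof ram_table);
    ram_table[0] = 0x0A;
    ram_table[1] = 0x0B;
    ram_table[2] = 0x0C;
}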
On Sun, 15 Jun 2014 20:30:10 +0200, David Brown
<david.brown@hesbynett.no> wrote:

> On 13/06/14 20:35, Niklas Holsti wrote:
>> (I tried to change the Subject to something more appropriate, hope it
>> works.)
>>
>> On 14-06-13 10:23 , David Brown wrote:
>>> On 13/06/14 06:05, Niklas Holsti wrote:
>>>> On 14-06-13 03:09 , Mark Curry wrote:
>>>>> In article <lnd9ll$nja$1@dont-email.me>,
>>>>
>>>>> ...thread drift...
>>>>
>>>>> We currently have a setup to do nightly builds of all our code. We've
>>>>> seriously considered, but haven't pulled the trigger yet, on also
>>>>> setting up a build on a virtual machine. This build on the virtual
>>>>> machine wouldn't happen as often, but the virtual machine snapshot
>>>>> would theoretically capture "everything". The virtual machine
>>>>> snapshot could then be checked into revision control.
>>>>>
>>>>> Sounds like overkill, but in some industries, being able to
>>>>> faithfully rebuild something 5, 10, 15+ years down the line could
>>>>> be useful...
>>>>
>>>> How sure are you that your virtual machine snapshot, taken in 2014 on
>>>> your current PC and hypervisor, will run on your brand-new PC in the
>>>> year 2029?
>>>>
>>>> As I understand them, what are called "virtual machines" on PCs only
>>>> virtualize as little of the machine as is necessary to support multiple
>>>> OS's on the same hardware, but are not full emulations of the PC
>>>> processor and I/O. I have not seen any promises from hypervisor vendors
>>>> to support 15-year-old VM snapshots on future PC architectures, which
>>>> may be quite different.
>>>>
>>>> This question is of interest to me because I am working on projects
>>>> with maintenance foreseen until the 2040's. Some people have suggested
>>>> virtual machines as the solution for keeping the development tools
>>>> operational so long, but I am doubtful.
>>>>
>>>
>>> I would recommend a few things here.
>>
>> Thanks, David, for your helpful answer.
>>
>>> First, consider using raw hard
>>> disk images rather than specific formats and containers - the tools for
>>> working with raw images will always be around (a loopback mount in Linux
>>> is usually all you need). Most hypervisors and virtual machines can
>>> work with that.
>>
>> Today they will... but will they, in 2029, or 2040? I am unsure.
>
> If the day comes when we can't get a computer to read a simple file of
> bytes, there will be lots of bigger problems than your particular case!
For a long time, I used 1/2 inch 9-track 1600 bpi (no need for head alignment, as with 800 bpi) open-reel ANSI magnetic tapes for storing source files. No file archives or compression, just plain sequential text files. These could be read on any mainframe or minicomputer of the time, and I assumed also in the future.

Unfortunately I was wrong: in Finland, for instance, there is only a single functioning 1/2 inch tape drive left, in a computer museum, and who knows how long it will keep working.

So in reality, you need to copy to some mature technology about every 10 years.
> I know windows tries to make that sort of thing more difficult with
> each generation, but fortunately we have Linux, the BSD's, and other
> Unix systems - these are going to be around for a long time to come, and
> old versions can still run fine on new hardware.
>
> But you should make a point of sticking to mature and stable filesystems
> - prefer ext3 rather than btrfs, for example (FreeBSD will work with
> ext3, albeit without journalling, giving you a second source).
Realistically, CD-ROM (and DVD/Blu-ray) file systems on physical discs would be the most likely media to be readable in 2040. How would you connect any current magnetic disk or SSD to a computer in 2040?

The question is just as relevant today: I have 5- and 8-channel paper tapes and 1/2 inch magnetic tapes -- into which holes on my modern laptop do I feed these tapes? :-)
On 6/13/2014 3:23 AM, David Brown wrote:
>
> And then archive a physical machine or two as well, just to be safe :-)
If you want hardware you will be able to buy in 2040, put your app on an 8051... the CPU that will never die... at least until they stop making microwave ovens. lol

--

Rick
On 14-06-14 09:28 , John Devereux wrote:
> Niklas Holsti <niklas.holsti@tidorum.invalid> writes:
>
>> (I tried to change the Subject to something more appropriate, hope it
>> works.)
>>
>> On 14-06-13 10:23 , David Brown wrote:
>>> On 13/06/14 06:05, Niklas Holsti wrote:
>>>> On 14-06-13 03:09 , Mark Curry wrote:
>>>>> In article <lnd9ll$nja$1@dont-email.me>,
>>>>
>>>>> ...thread drift...
>>>>
>>>>> We currently have a setup to do nightly builds of all our code. We've
>>>>> seriously considered, but haven't pulled the trigger yet, on also
>>>>> setting up a build on a virtual machine. This build on the virtual
>>>>> machine wouldn't happen as often, but the virtual machine snapshot
>>>>> would theoretically capture "everything". The virtual machine
>>>>> snapshot could then be checked into revision control.
>>>>>
>>>>> Sounds like overkill, but in some industries, being able to
>>>>> faithfully rebuild something 5, 10, 15+ years down the line could
>>>>> be useful...
>>>>
>>>> How sure are you that your virtual machine snapshot, taken in 2014 on
>>>> your current PC and hypervisor, will run on your brand-new PC in the
>>>> year 2029?
>>>>
>>>> As I understand them, what are called "virtual machines" on PCs only
>>>> virtualize as little of the machine as is necessary to support multiple
>>>> OS's on the same hardware, but are not full emulations of the PC
>>>> processor and I/O. I have not seen any promises from hypervisor vendors
>>>> to support 15-year-old VM snapshots on future PC architectures, which
>>>> may be quite different.
>>>>
>>>> This question is of interest to me because I am working on projects
>>>> with maintenance foreseen until the 2040's. Some people have suggested
>>>> virtual machines as the solution for keeping the development tools
>>>> operational so long, but I am doubtful.
>>>>
>>>
>>> I would recommend a few things here.
>>
>> Thanks, David, for your helpful answer.
>>
>>> First, consider using raw hard
>>> disk images rather than specific formats and containers - the tools for
>>> working with raw images will always be around (a loopback mount in Linux
>>> is usually all you need). Most hypervisors and virtual machines can
>>> work with that.
>>
>> Today they will... but will they, in 2029, or 2040? I am unsure.
>
> In the event of your virtualization software no longer running the VM,
> or no longer running on the hardware of the day...
>
> Could you not run the obsolete virtualization software as a virtual
> machine on the new virtualization software? :)
I see the smiley, but as you probably know, such chains of simulations have been used in the past, typically when a computer manufacturer (say, IBM) comes out with a new architecture but wants to keep its current customers happy by running their old programs without recompilation. The most recent such case was perhaps when Apple switched from PowerPC to Intel for its PCs.

Can we expect that the machines in 2040 will be able to run today's x86 executables, or will backward compatibility break at some point, perhaps because of Moore's law coming to a stop? The first break will certainly be covered by SW to simulate the old architecture (x86) on the new (whatever it is), but that simulation SW may not be widely used for more than a few years, and will then perhaps rot.

Some professional or industrial organization could define a "long-term persistent" architecture that is designed to be simple to simulate and simple to port tools to, letting performance suffer as it will. This would be "deep time" thinking in the computer domain. But perhaps the x86 architecture already plays this role, de facto.

Returning to John's suggestion and smiley, I believe it would be easier to maintain a single simulator of a very old machine, running on new machines, than a chain of simulators of a series of different machines.

--
Niklas Holsti
Tidorum Ltd

niklas holsti tidorum fi
      .      @       .
On 6/15/2014 12:09 PM, upsidedown@downunder.com wrote:

> For a long time, I used 1/2 inch 9-track 1600 bpi (no need for head
> alignment, as with 800 bpi) open-reel ANSI magnetic tapes for storing
> source files. No file archives or compression, just plain sequential
> text files. These could be read on any mainframe or minicomputer
> of the time, and I assumed also in the future.
>
> Unfortunately I was wrong: in Finland, for instance, there is only a
> single functioning 1/2 inch tape drive left, in a computer museum, and
> who knows how long it will keep working.
The problem with 9-track tape is that it requires regular maintenance (e.g., "retensioning" periodically) to preserve the integrity of the data recorded thereon. Of course, unless you buy a second "dummy" transport and remove the head, that retensioning puts wear on the media and the head. No big deal if you *only* use the transport and media for archival storage and restoration -- but, if you also regularly have it in service... :<

And, *expected* life is more like 5-8 years if you start with good media and keep it stored properly (avoid heat and humidity). OTOH, I still have an original X Windows 10.4 distribution on a 7" reel that was readable as recently as last year (haven't tried it since).

The biggest killer for low density tape (I have an 800/1600/3200 transport) was how little you could store on them! E.g., less than 100MB on a 10 inch reel -- lots of space for very little data!
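For scale, a rough check of that figure: a full-size 10.5 inch reel holds about 2400 feet of tape, and on 9-track tape one frame across the width is one data byte (8 data bits plus parity), so bpi is effectively bytes per inch. Before inter-record gaps eat into it, that gives

    2400 ft x 12 in/ft x 1600 bytes/in = ~46 MB at 1600 bpi,

and roughly double that, ~92 MB, at 3200 bpi -- under 100MB either way.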
> So in reality, you need to copy to some mature technology about
> every 10 years.
That's about right. OTOH, there is no reason that you *have to* discard the source medium. If push comes to shove and your "new" archive is lost/corrupt/inaccessible, you can *hope* that you may be able to recover from the predecessor.
>> I know windows tries to make that sort of thing more difficult with
>> each generation, but fortunately we have Linux, the BSD's, and other
>> Unix systems - these are going to be around for a long time to come, and
>> old versions can still run fine on new hardware.
>>
>> But you should make a point of sticking to mature and stable filesystems
>> - prefer ext3 rather than btrfs, for example (FreeBSD will work with
>> ext3, albeit without journalling, giving you a second source).
>
> Realistically, CD-ROM (and DVD/Blu-ray) file systems on physical discs
> would be the most likely media to be readable in 2040. How would you
> connect any current magnetic disk or SSD to a computer in 2040?
Much "consumer" CD/DVD media is not suited to long term storage. Again, you're in the 10 year ballpark if well cared for. A bigger problem may be finding a *good*, reliable drive that will still function after that period of time. Again, if used regularly, there is a risk of the laser diode going south. Or, the mechanism gumming up from *lack* of use. Or, one of the countless little plastic parts cracking, etc.
> The question is just as relevant today: I have 5- and 8-channel paper
> tapes and 1/2 inch magnetic tapes -- into which holes on my modern
> laptop do I feed these tapes? :-)
Don't you have a paper tape reader/punch? (I have two -- one standalone and one in the ASR-33...). You can always keep an optical reader "in a tiny box" to gain access to them. Along with a large collection of "tape (and other media) drives". (sheesh! talk about a potpourri of "experiments in marketing"... there have got to be more media forms than anyone can count!) [I draw the line on Hollerith cards, though...]
On 14-06-15 02:17 , Don Y wrote:
> Hi Niklas,
>
> On 6/14/2014 2:08 PM, Niklas Holsti wrote:
>> On 14-06-14 22:53 , Don Y wrote:
>>> On 6/13/2014 10:17 AM, Niklas Holsti wrote:
>>>>>> How sure are you that your virtual machine snapshot, taken in 2014 on
>>>>>> your current PC and hypervisor, will run on your brand-new PC in the
>>>>>> year 2029?
>>>>>
>>>>> "Sure"? <grin> How sure are you that the host OS, VM vendor, tool
>>>>> vendor, silicon vendor, etc. will be *around* at that time?
>>>>
>>>> Very unsure, of course, which was my point: having a virtual machine
>>>> snapshot from 2014, virtualizing a 2014 machine, will not help me in
>>>> 2029, if there are no machines/hypervisors that can run that snapshot.
>>>>
>>>> David Brown advised using KVM for virtualization, because KVM can
>>>> "cross-virtualize", for example running an x86 VM in emulation on a
>>>> processor of a different architecture. I will look into that, thanks
>>>> David!
>>>
>>> The essence of the problem is that *someone* must provide the support
>>> for whatever tools -- actual hardware, software, emulation, etc.
>>
>> Yes -- and saying "just use a virtual machine", as some have said to me
>> (outside USENET, I mean) is not enough.
>
> Of course -- that just changes the problem from "how do I support
> my tools" to "how do I support the VM that my tools *rely* upon".
>
>> In my case, I only need to keep a SW maintenance environment (compiler,
>> linker, testing tools) working.
>
> Is *all* of your testing done without dealing with the "real world"?
> I.e., passing test cases (const's) to the UUT and verifying the results
> are "as expected"?
Yes; real world HW is not my problem, that's for the higher levels in the supply chain. All our testing is on a simulated target system. But our full testing system is fairly complicated, involving a target-processor and equipment simulator, a special test language (in fact several), a queue of tests to be run, a supervisor to run them, lots of I/O log files, etc. And optionally Eclipse, although I think I will avoid that if possible.
> I think (speaking without detailed knowledge of your specifics) that
> QEMU or similar "simulator" can probably do the job for you. The
> problem then becomes ensuring that QEMU will run on "whatever"
> a workstation looks like in 2029!
Yep. Or in 2040 or so, which is the target for maintenance. However, I should be frank that this discussion about VMs is only theoretical for me, at the moment. The original customer requirements asked for maintenance until 2040, but at contract time this was reduced to optional extended maintenance packages, each of limited duration. At present, our plan is to archive host PCs, purchased as late as possible, and hope that they will stay functional as long as required.
>> But the simulator/emulator has to be complete and accurate enough to run
>> the operating system on which the compiler/linker or other tool runs.
>
> No. It only needs to *emulate* the features of the OS that the
> applications require!
Good point!
> E.g., it probably doesn't need to support
> signals (directly), timing primitives, limited IPC/pipe support,
> etc. It almost certainly wouldn't need to know how to talk to
> *real* "devices", etc. Even filesystem support could be hacked
> (as *you* know where all reads and writes for a particular
> compiler invocation should be directed!)
Right, the full OS is not needed. But if we include the build system (gnatmake or gprbuild) and the testing system, these certainly use signals and IPC, and require concurrent processes in possibly different virtual memory spaces. Not so simple as the compiler.
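For concreteness, the kind of process primitives meant here, which a trimmed-down OS emulation would also have to honour for the build system, look roughly like this in POSIX terms (the gnatmake command line is only an example):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();       /* new process, separate memory space */
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {           /* child: replace image with the tool */
        execlp("gnatmake", "gnatmake", "main.adb", (char *)NULL);
        perror("execlp");     /* only reached if exec fails */
        _exit(127);
    }

    int status;               /* parent: wait, as a build driver would */
    if (waitpid(pid, &status, 0) < 0) { perror("waitpid"); return 1; }
    printf("tool exited with status %d\n", WEXITSTATUS(status));
    return 0;
}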
>>> Interactive applications (tools) are the big risk.
>>
>> Fortunately I won't need any such.
>
> You *don't* use gdb or any other interactive tools for debugging?
I prefer not to. We may use the GPS IDE for convenience, and perhaps some other GNAT Pro interactive tools, and perhaps even Eclipse for the testing system, but we try to stay with tools that allow command-line usage and shell scripting. So the core development tools are not interactive.
> Note that there *are* groups who actually are focused on issues
> of preserving *media* for (VERY) long periods of time. Most of
> those solutions tend to require a bit of an investment, though.
I know. I don't think that preserving the bits and bytes of the development tools on readable media will be a problem (as long as the company survives and remembers its responsibility for this). I am worried about *interpreting* (running) those bits and bytes in the future.
> OTOH, that sort of investment may be acceptable to the folks
> underwriting your effort.
I'm not sure how seriously the customers take the 2040 date. As I said, the long-term maintenance requirement was descoped from the tender stage to the contract, but it remains as the planned end-of-life date.

It seems to me likely that when the moth-balled, 10-20 year-old satellites are dusted off and launched, their HW will have some glitches for which SW work-arounds may be needed.

--
Niklas Holsti
Tidorum Ltd

niklas holsti tidorum fi
      .      @       .
On 14-06-15 21:30 , David Brown wrote:
> On 13/06/14 20:35, Niklas Holsti wrote:
>> (I tried to change the Subject to something more appropriate, hope it
>> works.)
>>
>> On 14-06-13 10:23 , David Brown wrote:
>>> On 13/06/14 06:05, Niklas Holsti wrote:
>>>> On 14-06-13 03:09 , Mark Curry wrote:
>>>>> In article <lnd9ll$nja$1@dont-email.me>,
>>>>
>>>>> ...thread drift...
>>>>
>>>>> We currently have a setup to do nightly builds of all our code. We've
>>>>> seriously considered, but haven't pulled the trigger yet, on also
>>>>> setting up a build on a virtual machine. This build on the virtual
>>>>> machine wouldn't happen as often, but the virtual machine snapshot
>>>>> would theoretically capture "everything". The virtual machine
>>>>> snapshot could then be checked into revision control.
>>>>>
>>>>> Sounds like overkill, but in some industries, being able to
>>>>> faithfully rebuild something 5, 10, 15+ years down the line could
>>>>> be useful...
>>>>
>>>> How sure are you that your virtual machine snapshot, taken in 2014 on
>>>> your current PC and hypervisor, will run on your brand-new PC in the
>>>> year 2029?
>>>>
>>>> As I understand them, what are called "virtual machines" on PCs only
>>>> virtualize as little of the machine as is necessary to support multiple
>>>> OS's on the same hardware, but are not full emulations of the PC
>>>> processor and I/O. I have not seen any promises from hypervisor vendors
>>>> to support 15-year-old VM snapshots on future PC architectures, which
>>>> may be quite different.
>>>>
>>>> This question is of interest to me because I am working on projects
>>>> with maintenance foreseen until the 2040's. Some people have suggested
>>>> virtual machines as the solution for keeping the development tools
>>>> operational so long, but I am doubtful.
>>>>
>>>
>>> I would recommend a few things here.
>>
>> Thanks, David, for your helpful answer.
>>
>>> First, consider using raw hard
>>> disk images rather than specific formats and containers - the tools for
>>> working with raw images will always be around (a loopback mount in Linux
>>> is usually all you need). Most hypervisors and virtual machines can
>>> work with that.
>>
>> Today they will... but will they, in 2029, or 2040? I am unsure.
>
> If the day comes when we can't get a computer to read a simple file of
> bytes, there will be lots of bigger problems than your particular case!
I'm sure that the file of bytes can be *read*, but can it be interpreted correctly?
> I know windows tries to make that sort of thing more difficult with
> each generation, but fortunately we have Linux, the BSD's, and other
> Unix systems - these are going to be around for a long time to come, and
> old versions can still run fine on new hardware.
I'm doubtful that 2040's hardware will run 2014's Linux/x86 without an emulator like QEMU. With an emulator, yes, but the emulator must emulate a computer system with peripherals, complete enough to run the tools we need.
> But you should make a point of sticking to mature and stable filesystems
> - prefer ext3 rather than btrfs, for example (FreeBSD will work with
> ext3, albeit without journalling, giving you a second source).
Are we sure that ext3 will be supported in 2040? Possibly not in the native 2040 OS, but hopefully in our emulator.
>> As I understand your suggestion, it involves the following steps that I
>> should do to set up a long-term maintenance system that does not assume
>> survival of the current host-PC architecture and OS until 2040:
>>
>> 1. Find a virtual HW composition, probably based on x86, that:
>>    a) QEMU can emulate, and
>>    b) is supported by the OSes on which our tools run, and
>>    c) runs our tools, too.
>>
>> 2. Configure KVM+QEMU to emulate this virtual HW.
>>
>> 3. Install our OSes and tools on a VM using this virtual, emulated HW.
>>
>> 4. Maintain KVM and QEMU, using their source code, to keep them working
>>    on future host PCs, and preserving their ability to emulate the HW
>>    composition defined in step 1.
>>
>> This looks possible in principle. Far from easy, though.
>
> If it were easy, your customers would not be paying you big money to
> solve the problem! But yes, that's pretty much what I had in mind.
Ok. (But the "big money" is not really there, because -- as I said in another post -- the long-term maintenance requirement was descoped from tender phase to contract phase.)

--
Niklas Holsti
Tidorum Ltd

niklas holsti tidorum fi
      .      @       .