On 13/06/14 16:37, Don Y wrote:
> Hi Randy,
>
> On 6/13/2014 5:41 AM, Randy Yates wrote:
>> David Brown<david.brown@hesbynett.no> writes:
>
> [attrs elided]
>
>>>>>>>>> I go along with the others who suggested that you write some C/C++
>>>>>>>>> code to generate this code. I've done that many times and it works
>>>>>>>>> well.
>>>>>>>>
>>>>>>>> I find it's usually _way_ faster to write a Python program to
>>>>>>>> generate such things, but YMMV.
>>>>>>>
>>>>>>> Yeah, sure, python, perl, common lisp, scheme, erlang, c, c++,
>>>>>>> etc. - pick yer' poison.
>>>>>>
>>>>>> Write it in the same language that you are compiling -- that way you
>>>>>> *know* you have THAT tool available wherever you happen to maintain
>>>>>> the codebase (instead of having to have *two* tools).
>>>>>
>>>>> I don't understand. If I'm writing in C for the '430, how does that
>>>>> guarantee I have C for the development host?
>>>>
>>>> It doesn't guarantee that you have it for the host. But, neither
>>>> does it guarantee that you have python, perl, sh, etc. for the host!
>>>> What it *does* guarantee is that *you* will know how to write C
>>>> for that host (more or less)! It doesn't guarantee that you will
>>>> be able to write a perl script *if* you happened to have perl
>>>> available to you on *that* host (e.g., none of my windows hosts
>>>> have perl installed).
>>>
>>> I have written embedded C for 20 years - but I would not be confident
>>> about using C on the host for something involving a lot of string
>>> manipulation and formatting. It's a different skill set, even though it
>>> is still C.
>>
>> What C programmer (embedded or otherwise) doesn't know how to use
>> printf()?
>
> Agreed.
"Backing up" (driving in reverse) is a different skill set > than driving forward -- yet, someone who advocated getting out of the > car and *walking* (because he was more sure of his ability to do that) > would leave me suspect: "And when are you going to LEARN to back up?" > >>> And I know plenty of embedded programmers who have no idea >>> how to make a host-run C program at all. >> >> Wha..? You've GOT to be joking! I think there are 5th-graders who know >> how to take a C program from the net and get it compiled. > > But, they'll be able to "make" a host-run *python* program? > >>> So no, you don't have such a guarantee. >>> >>> But I can give a guarantee* that a competent embedded C programmer will >>> pick up the basics of Python quickly, and write string manipulation and >>> formatting code faster than learning to write host C code for the same >>> job. And the resulting scripts will be cleaner, faster to develop, and >>> easier to maintain. >> >> Your statement is irrational. A competent C programmer already knows C. >> If one uses Python, you will need time for learning the language AND >> implementing the required formatting. > > Why limit it to Python? Why not let the developer use whatever > *he* thinks is appropriate for the task at hand? And, force everyone > else in the organization to come up to speed with *his* choice of > tools for *that* project? (of course, *he* will also have to come > up to speed with the tools that his colleagues are using for *their* > projects... "fair is fair") > > This was the siren's song that I fell for early on in my "independent" > career -- picking the right tools FOR ME as I had no "arbitrary" > constraints placed on me by "Corporate", development costs (at least > those reflected in my monthly billing) were *very* obvious, etc. so > why not pick the "most efficient" (for me) way to get the project > "done"? Piece together whatever tools make sense (for *me*!) to > get the job done... 
>
> Ah, but now the client -- running a MS shop -- grumbles because he
> doesn't have all those tools available that I do in my UNIX+MS
> shop! Do I try to be a religious zealot and convince him he
> *should* have them (from a purely technical argument)?
>
> "Gee, 5-10 years from now, you'll be able to get *some* of
> this stuff for free! There will be all these FOSS OS's
> available (in various levels of maturity) to choose from...
> why not get on the band-wagon *now*? MS is such a loser OS
> and, by association, anything that *runs* under it must be
> as well..."
>
> Look at some of the FOSS projects and the hodgepodge of tools
> they (somewhat arbitrarily) rely upon for proof of this. I.e.,
> it's *fine* -- if you want to drink the koolade...
>
>> I claim that a competent embedded C programmer will write such scripts
>> more quickly in C than in Python, and that it is more sensible from a
>> maintenance perspective to keep everything in the same language.
>
> Maybe others have better luck finding well rounded "coders" -- that
> are confident/competent writing code that typically runs in a
> desktop environment *and* an embedded one, in different languages,
> along with some familiarity with (embedded) hardware, etc.
>
> Should I write the "converter" for my text-to-phoneme algorithm
> in LISP as it is ideally suited for that task? Will someone
> down the road be able to change the input ruleset and *know*
> how to verify that the converter has accurately done its job?
>
> [I recall encountering gnuplot ports where the regression tests
> wouldn't pass. They would *run* to completion -- but, the
> resulting plots were obviously wrong! Unfortunately, the folks
> doing the port didn't understand what they were trying to "plot"
> so they were unable to determine that the plots were incorrect and,
> as such, the *port* was flawed! "Gee, mathematical functions...
> you'd think folks would know what they all looked like!"]
>
> But, from my experience (and most of the grumbling I hear from my
> associates), finding someone who knows *a* language *well* is a
> significant challenge. Expecting him (her) *and* the rest of the
> staff tasked with reviewing his code to know *several* seems like
> a recipe for disaster. I can already see those folks nodding their
> heads at a design review (for fear of showing the shallowness of
> their knowledge of yet-another tool) instead of being able to
> actively criticize the implementation. How much more honest will
> they be with their abilities when tasked with *maintaining* it?
>
> *Especially* if the individual who built this multi-tool environment
> is "qualified"! ("Gee, he acts as if all of this stuff is 'obvious';
> do I want to show my ignorance by questioning something he's done?")
>
> <shrug> I think folks have different experiences based on the
> environments in which they develop. I've learned to expect *less*
> flexibility in my environment rather than more (at least if I
> wanted to do less *rework*!)
>
> YMMV.

I agree that it's a bad idea for developers to just pick whatever tools they fancy. There are many reasons why I and several others recommend Python - it is not just a random tool that I happen to like. Its benefits include being strongly cross-platform (as it has always been, right from its inception), very easy to learn, very powerful for string manipulation and formatting, a large standard library with excellent documentation (all in one place), solid support from big companies (the key Python developers work at Google), fast and interactive development for scripts, and it is very well known among a range of users.

So yes, if you have a client that is addicted to MS software, then you can tell him that Python is a perfectly solid tool for his systems - he can even install the Python tools for Visual Studio, which is written by MS.
Or you and he can continue to keep one foot nailed to the floor by thinking that anything free or open source must be so amateurish that no sane company would use them, and that every problem must be beaten to death by a hammer because no other tools are allowed. As you say, YMMV.
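To make the generator-script idea being debated above concrete, here is a minimal Python sketch of the kind of host-side tool the thread is arguing about: it emits a C array initializer with a few explicit values and the remainder padded with a fill byte. The function and array names are illustrative only, not taken from any post in the thread.

```python
# Minimal sketch of a host-side generator for a C array initializer.
# Names ("emit_c_array", "my_array") are illustrative, not from the thread.

def emit_c_array(name, values, total_len, fill=0xFF):
    """Return C source for 'const unsigned char name[total_len]',
    padding past the explicit values with the fill byte."""
    data = list(values) + [fill] * (total_len - len(values))
    lines = [f"const unsigned char {name}[{total_len}] = {{"]
    for i in range(0, total_len, 10):
        # Ten bytes per row; a trailing comma is legal in a C initializer.
        row = ", ".join(f"0x{b:02X}" for b in data[i:i + 10])
        lines.append(f"    {row},")
    lines.append("};")
    return "\n".join(lines)

if __name__ == "__main__":
    # 1000-element array: three explicit bytes, 997 bytes of 0xFF
    print(emit_c_array("my_array", [0x0A, 0x0B, 0x0C], 1000))
```

Redirecting the output into a generated `.c` file from a makefile rule is the usual way such a script is wired into a build.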
filling remaining array elements with fixed value
Started by ● June 12, 2014
Reply by ● June 15, 2014
rickman schreef op 15-Jun-14 8:40 PM:
> On 6/13/2014 2:48 AM, Wouter van Ooijen wrote:
>> hamilton schreef op 13-Jun-14 7:21 AM:
>>> On 6/12/2014 11:20 PM, hamilton wrote:
>>>>> On 6/12/2014 3:24 AM, Wouter van Ooijen wrote:
>>>>>> #define TEN_FFS 0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF
>>>>>> const unsigned char my_array[8]={
>>>>>> 0xA, 0xB, 0xC, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
>>>>>> TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS,
>>>>>> TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS, TEN_FFS
>>>>>> };
>>>>>> #undef TEN_FFS
>>>>> I misread, you want 1000, not 100, but that requires only two more
>>>>> lines ;)
>>>> Don't you mean 200 more lines !!
>>> sorry, 20 lines
>>
>> You think too linear. Learn to think recursive.
>>
>> #define TEN_FFS 0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF,0xFF
>> #define H_FFS TEN_FFS,TEN_FFS,TEN_FFS,TEN_FFS,TEN_FFS,\
>>               TEN_FFS,TEN_FFS,TEN_FFS,TEN_FFS,TEN_FFS
>> const unsigned char my_array[8]={
>> 0xA, 0xB, 0xC, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
>> H_FFS,H_FFS,H_FFS,H_FFS,H_FFS,H_FFS,H_FFS,H_FFS,H_FFS,H_FFS
>> };
>> #undef TEN_FFS
>> #undef H_FFS
>
> Correct me if I am wrong, but isn't that still an array of 8 char? ;)

Yeah, I probably copied the wrong part of the OP's question.

Wouter
Reply by ● June 15, 2014
Hi Niklas,

On 6/15/2014 1:29 PM, Niklas Holsti wrote:
>>> In my case, I only need to keep a SW maintenance environment (compiler,
>>> linker, testing tools) working.
>>
>> Is *all* of your testing done without dealing with the "real world"?
>> I.e., passing test cases (const's) to the UUT and verifying the results
>> are "as expected"?
>
> Yes; real world HW is not my problem, that's for the higher levels in
> the supply chain. All our testing is on a simulated target system.
>
> But our full testing system is fairly complicated, involving a
> target-processor and equipment simulator, a special test language (in
> fact several), a queue of tests to be run, a supervisor to run them,
> lots of I/O log files, etc. And optionally Eclipse, although I think I
> will avoid that if possible.

OK. The point of my debugger, etc. reference was to call attention to the fact that you aren't *just* "processing text files" in your development effort. E.g., your "target-processor and equipment simulator" undoubtedly (?) draws on more features of the host system than a "simple" compiler that need only be concerned with read() and write() to a file system.

The richer -- and more diverse -- your development environment, the tougher it will be to maintain for long periods of time. (note that "tough" may not mean "difficult" -- it may simply be *tedious*!)

>> I think (speaking without detailed knowledge of your specifics) that
>> QEMU or similar "simulator" can probably do the job for you. The
>> problem then becomes ensuring that QEMU will run on "whatever"
>> a workstation looks like in 2029!
>
> Yep. Or in 2040 or so, which is the target for maintenance.

<frown> How did I get fixated on 2029? :-/

> However, I should be frank that this discussion about VMs is only
> theoretical for me, at the moment. The original customer requirements
> asked for maintenance until 2040, but at contract time this was reduced
> to optional extended maintenance packages, each of limited duration.
> At present, our plan is to archive host PCs, purchased as late as possible,
> and hope that they will stay functional as long as required.

Yes. This is what I did. I kept upgrading hosts -- trying to increase performance while retaining compatibility with the "legacy" tools that the host was kept to support. Why hold onto a 16MHz machine when a 100MHz machine takes the same amount of storage space?!

But, there are often little things that escape notice unless you keep the "vintage" of those tools in mind. E.g., finding *small* IDE drives quickly became difficult. Or, some "custom" battery for the NVRAM/RTC that is no longer manufactured. Or, compatible magnetic media, etc. Or, genuine serial/parallel ports, etc.

Other tool issues can be even more insidious (given that you aren't actively *using* those tools as you make these upgrades). E.g., I used to use Brief exclusively in the early 80's. Delightful on a 30MHz machine. Step up to a 200MHz machine, different host OS (etc) and suddenly it's more problem than solution (e.g., keyboard repeat rate).

>>> But the simulator/emulator has to be complete and accurate enough to run
>>> the operating system on which the compiler/linker or other tool runs.
>>
>> No. It only needs to *emulate* the features of the OS that the
>> applications require!
>
> Good point!
>
>> E.g., it probably doesn't need to support
>> signals (directly), timing primitives, limited IPC/pipe support,
>> etc. It almost certainly wouldn't need to know how to talk to
>> *real* "devices", etc. Even filesystem support could be hacked
>> (as *you* know where all reads and writes for a particular
>> compiler invocation should be directed!)
>
> Right, the full OS is not needed. But if we include the build system
> (gnatmake or gprbuild) and the testing system, these certainly use
> signals and IPC, and require concurrent processes in possibly different
> virtual memory spaces. Not so simple as the compiler.

Understood.
It's never as simple as "just use a _______".

>>>> Interactive applications (tools) are the big risk.
>>>
>>> Fortunately I won't need any such.
>>
>> You *don't* use gdb or any other interactive tools for debugging?
>
> I prefer not to. We may use the GPS IDE for convenience, and perhaps
> some other GNAT Pro interactive tools, and perhaps even Eclipse for the
> testing system, but we try to stay with tools that allow command-line
> usage and shell scripting. So the core development tools are not
> interactive.

Sounds like a good strategy. I tend to rely heavily on interactive debuggers, etc. in my troubleshooting. Though all build and regression testing is scripted (out of fear that a human may corrupt the process through carelessness, etc.)

>> Note that there *are* groups who actually are focused on issues
>> of preserving *media* for (VERY) long periods of time. Most of
>> those solutions tend to require a bit of an investment, though.
>
> I know. I don't think that preserving the bits and bytes of the
> development tools on readable media will be a problem (as long as the
> company survives and remembers its responsibility for this). I am

Remember, media go out of fashion rather quickly when compared to these time scales! Don't *count* on being able to find a compatible "drive" easily. Also, mothballing spares has to be done "professionally" and not casually/haphazardly.

> worried about *interpreting* (running) those bits and bytes in the future.

Then, foremost, ensure you have all of those *formats* comprehensively documented! If you can't figure out what the data *means*, you have no hope of "using" it, later.

>> OTOH, that sort of investment may be acceptable to the folks
>> underwriting your effort.
>
> I'm not sure how seriously the customers take the 2040 date. As I said,
> the long-term maintenance requirement was descoped from the tender stage
> to the contract, but it remains as the planned end-of-life date.
> It seems to me likely that when the moth-balled, 10-20 year-old satellites
> are dusted off and launched, their HW will have some glitches for which
> SW work-arounds may be needed.

Understood. It "costs nothing" to *ask* for outrageous goals. Purses only feel the impact when they have to *deliver* on those goals. E.g., clients always make outlandish "demands" -- then, when confronted with the *actual* (projected) costs of those demands, they are suddenly not as "inflexible" as they were initially described! :>

As I alluded earlier, the real problem with long term support comes when you have to *guarantee* it (as opposed to "hope your solution is viable", long term). You are at the end of a long line of dependencies and, chances are, all the folks upstream from you will make no *real* guarantees on which *you* can rely! :<
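One small, concrete hedge against the bit-rot worries discussed above (offered as an illustration, not something proposed in the thread) is to archive a checksum manifest alongside the tools, so that whoever dusts off the archive decades later can at least verify the bits survived intact before trying to run anything:

```python
# Sketch: build and verify a SHA-256 manifest for an archived tree.
# Directory layout and file names are illustrative assumptions.
import hashlib
import os

def build_manifest(root):
    """Map each file's path (relative to root) to its SHA-256 digest."""
    manifest = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            manifest[os.path.relpath(path, root)] = digest
    return manifest

def verify(root, manifest):
    """Return the (sorted) list of files whose contents no longer match."""
    current = build_manifest(root)
    return sorted(p for p, d in manifest.items() if current.get(p) != d)
```

This says nothing about being able to *interpret* the bits later, of course -- which is exactly the harder problem the posts above identify -- but it makes silent media decay detectable.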
Reply by ● June 15, 2014
Hi David,

On 6/15/2014 1:48 PM, David Brown wrote:
> I agree that it's a bad idea for developers to just pick whatever tools
> they fancy. There are many reasons why I and several others recommend
> Python - it is not just a random tool that I happen to like. Its
> benefits include being strongly cross-platform (as it has always been,
> right from its inception), very easy to learn, very powerful for string
> manipulation and formatting, a large standard library with excellent
> documentation (all in one place), solid support from big companies (the
> key Python developers work at Google), fast and interactive development
> for scripts, and it is very well known among a range of users.

And the same could be said for C++, Java, Perl, etc. What will be the language du jour *next* year?

> So yes, if you have a client that is addicted to MS software, then you
> can tell him that Python is a perfectly solid tool for his systems - he
> can even install the Python tools for Visual Studio, which is written by
> MS.

Why should I assume the role of evangelist? My job is to provide a solution to a particular problem. Not advise clients on how they should structure their engineering departments, the skillsets they should stress with job candidates, etc. When asked to *train* clients' new hires *or* help in the hiring process, I smile graciously and decline -- that's not where my interests lie.

I have to make a case for the implementation *I* have chosen, nothing more. If that implementation requires convincing the client of the merits of *several* tools, it's more work for me than if I can point to a smaller set of tools and show the same results. A client feeling "forced" into taking on specific skillsets, tools, equipment, etc. isn't usually a *happy* client ("*You* are costing me money...")

I don't think you have very broad experience with the sorts of "developers" typically encountered, here -- nor the firms and policies/politics of their employers.
It is often a battle to move from component supplier X to supplier Y -- even more so processor family A to processor family B ("But all of our staff already are familiar with processor A's architecture, conventions, tools, etc."). Small firms are always pinching pennies -- new tools (even free ones) cost tangible dollars (if *the* software developer is unproductive for a month, that's 8% of their development budget "wasted"). Big firms are often entrenched in policy and procedures so doing anything *different* requires an act of God (often, projects are done "off the books" -- at least to the proof of concept stage -- simply to avoid dealing with the "machinery" involved; easier to "force" a design decision by pointing to an accomplished feat than to try to convince them to go in a new direction a priori).

Should I try to sell them on the "beauties" of OS <whatever> over OS <currently_used>? Or, why this slightly more expensive chipset is a better choice for their project GIVEN MY EXPERT OPINION ON WHERE THE PRODUCT IS LIKELY TO EVOLVE? *State* your opinions; then, move on to the job for which you were hired.

> Or you and he can continue to keep one foot nailed to the floor by
-----^^^
> thinking that anything free or open source must be so amateurish that no
> sane company would use them, and that every problem must be beaten to
> death by a hammer because no other tools are allowed.

FOR THE RECORD, I probably use and *deploy* more FOSS than anything *you* use/deploy. All of *my* software development work is done with FOSS tools (I rely on Solaris and Windows based "proprietary" tools for certain documentation, test, and hardware design tasks for which the available FOSS tools *pale*). The fact that *EVERYTHING* I am currently writing (software and docs) and designing (hardware) will be available under an unencumbered (*non*-GPL) license demonstrates my commitment to the concept of "Open" hardware/software.
How many *thousands* of hours of *your* time are you GIVING AWAY? "Nailed to the floor"? Hardly! Furthermore, I have been "on the record" as having used and contributed to same for at least 20 years (USENET search shows patches back to '93; I've not looked back further than that).

You can probably benefit from taking your own advice in return: thinking that [FOSS] tools are the ONLY solution to EVERY problem! [Perhaps you don't have a budget capable of *buying* all those tools?]

Projecting *your* opinions (regardless of how well conceived you *think* they may be) onto someone else's (i.e., client) priorities, capabilities and resources speaks of arrogance.

[I'll tell my neighbors of problems they are likely to encounter in their homes (based on first-hand experience, here, and observations of building styles prevalent for this vintage home). But, I won't harp on them to fix them -- even when there are significant risks to *not* fixing them! I may *think* I "know better" but they have their own priorities, plans, resources, etc.]

> As you say, YMMV.
Reply by ● June 15, 2014
On Sun, 15 Jun 2014 13:07:54 -0700, Don Y <this@is.not.me.com> wrote:

>On 6/15/2014 12:09 PM, upsidedown@downunder.com wrote:
>
>> For a long time, I used 1/2 inch 9 track 1600 bpi (no need for head
>> alignment as with 800 bpi) open reel ANSI magnetic tapes for storing
>> source files. No file archives or compressing, just plain sequential
>> text files. These could be readable on any mainframe or minicomputer
>> of the time and I assumed also in the future.
>>
>> Unfortunately I was wrong, for instance in Finland, there is only a
>> single functioning 1/2 inch tape drive in a computer museum, but how
>> long is it going to be working.
>
>The problem with 9 track tape is that it requires regular maintenance
>(e.g., "retensioning" periodically) to preserve the integrity of the
>data recorded thereon. Of course, unless you buy a second "dummy"
>transport and remove the head, that retensioning puts wear on the
>media and the head. No big deal if you *only* use the transport
>and media for archival storage and restoration -- but, if you also
>regularly have it in service... :<

I hate those 800 BPI drives, since they decode the byte in parallel, so if you had to read tapes written on another drive, you have to align the read head in the same way that the tape was written. With 1600 BPI and up, each track was self-clocking, so no need to have an extremely accurate read head alignment.

>And, *expected* life is more like 5-8 years if you start with good
>media and keep it stored properly (avoid heat and humidity). OTOH,
>I still have an original X Windows 10.4 distribution on a 7" reel
>that was readable as recently as last year (haven't tried it since).
>
>The biggest killer for low density tape (I have an 800/1600/3200
>transport) was how little you could store on them!
>E.g., less
>than 100MB on a 10 inch reel -- lots of space for very little
>data!

100 MB was a _huge_ amount of data a decade or two ago.

>> So in reality, you need to do the copying to any mature technology
>> about every 10 years.
>
>That's about right. OTOH, there is no reason that you *have to*
>discard the source medium. If push comes to shove and your "new"
>archive is lost/corrupt/inaccessible, you can *hope* that you
>may be able to recover from the predecessor.
>
>>> I know windows tries to make that sort of thing more difficult with
>>> each generation, but fortunately we have Linux, the BSD's, and other
>>> Unix systems - these are going to be around for a long time to come, and
>>> old versions can still run fine on new hardware.
>>>
>>> But you should make a point of sticking to mature and stable filesystems
>>> - prefer ext3 rather than btrfs, for example (FreeBSD will work with
>>> ext3, albeit without journalling, giving you a second source).
>>
>> Realistically CDROM (and DVD/BlueRay) file systems on physical disks
>> would be the most likely media to be readable in 2040. How would you
>> connect any current magnetic or SSD to a computer in 2040 ?
>
>Much "consumer" CD/DVD media is not suited to long term storage.
>Again, you're in the 10 year ballpark if well cared for. A bigger
>problem may be finding a *good*, reliable drive that will still
>function after that period of time.
>
>Again, if used regularly, there is a risk of the laser diode
>going south. Or, the mechanism gumming up from *lack* of use.
>Or, one of the countless little plastic parts cracking, etc.

Since 78/45/33 rpm audio disks are still readable, and even produced today, an educated guess would be that CDs would be usable after a few decades.

>> The question is as relevant today, when I have 5 or 8 channel paper
>> tapes or 1/2 magnetic tapes, into which holes on my modern laptop do I
>> feed these tapes ? :-)
>
>Don't you have a paper tape reader/punch?
>(I have two -- one
>standalone and one in the ASR-33...). You can always keep an
>optical reader "in a tiny box" to gain access to them.

I have toyed with the idea of using any modern A4 flatbed scanner to read punched cards or segments of paper tapes :-)
Reply by ● June 15, 2014
On 6/15/2014 3:43 PM, upsidedown@downunder.com wrote:
> On Sun, 15 Jun 2014 13:07:54 -0700, Don Y<this@is.not.me.com> wrote:
>> On 6/15/2014 12:09 PM, upsidedown@downunder.com wrote:

[half inch tape]

>> The problem with 9 track tape is that it requires regular maintenance
>> (e.g., "retensioning" periodically) to preserve the integrity of the
>> data recorded thereon. Of course, unless you buy a second "dummy"
>> transport and remove the head, that retensioning puts wear on the
>> media and the head. No big deal if you *only* use the transport
>> and media for archival storage and restoration -- but, if you also
>> regularly have it in service... :<
>
> I hate those 800 BPI drives, since they decode the byte in parallel,
> so if you had to read tapes written on another drive, you have to
> align the read head in the same way that the tape was written. With
> 1600 BPI and up, each track was self-clocking, so no need to have an
> extremely accurate read head alignment.

Well, "extremely accurate" is a relative term... :> But, yes, you aren't going to do it yourself without a scope and a reference tape.

>> And, *expected* life is more like 5-8 years if you start with good
>> media and keep it stored properly (avoid heat and humidity). OTOH,
>> I still have an original X Windows 10.4 distribution on a 7" reel
>> that was readable as recently as last year (haven't tried it since).
>>
>> The biggest killer for low density tape (I have an 800/1600/3200
>> transport) was how little you could store on them! E.g., less
>> than 100MB on a 10 inch reel -- lots of space for very little
>> data!
>
> 100 MB was a _huge_ amount of data a decade or two ago.

<frown> I guess it depends on what you are "holding onto". I still have a dozen Black Watch tapes that I've not (yet) bothered to "recover". That's ~1GB (gasp!) of data (in a cubic *foot* :< Sheesh, *core* was almost that dense! jk)

>>>> I know windows tries to make that sort of thing more difficult with
>>>> each generation, but fortunately we have Linux, the BSD's, and other
>>>> Unix systems - these are going to be around for a long time to come, and
>>>> old versions can still run fine on new hardware.
>>>>
>>>> But you should make a point of sticking to mature and stable filesystems
>>>> - prefer ext3 rather than btrfs, for example (FreeBSD will work with
>>>> ext3, albeit without journalling, giving you a second source).
>>>
>>> Realistically CDROM (and DVD/BlueRay) file systems on physical disks
>>> would be the most likely media to be readable in 2040. How would you
>>> connect any current magnetic or SSD to a computer in 2040 ?
>>
>> Much "consumer" CD/DVD media is not suited to long term storage.
>> Again, you're in the 10 year ballpark if well cared for. A bigger
>> problem may be finding a *good*, reliable drive that will still
>> function after that period of time.
>>
>> Again, if used regularly, there is a risk of the laser diode
>> going south. Or, the mechanism gumming up from *lack* of use.
>> Or, one of the countless little plastic parts cracking, etc.
>
> Since 78/45/33 rpm audio disks are still readable, and even produced
> today, an educated guess would be that CDs would be usable after a few
> decades.

I don't know. Non Blu-ray (DVD) (consumer) players are already becoming scarce. And, the quality of most of the stuff available would leave me "anxious" as to how well it would hold up over time (even if mothballed). I wonder if any of the plastic parts would suffer from "cold flow" problems (left immobile for years)?

>>> The question is as relevant today, when I have 5 or 8 channel paper
>>> tapes or 1/2 magnetic tapes, into which holes on my modern laptop do I
>>> feed these tapes ? :-)
>>
>> Don't you have a paper tape reader/punch? (I have two -- one
>> standalone and one in the ASR-33...).
>> You can always keep an
>> optical reader "in a tiny box" to gain access to them.
>
> I have toyed with the idea of using any modern A4 flatbed scanner to
> read punched cards or segments of paper tapes :-)

Oooo... *that's* an idea! Probably not as effective for the tape (which would have to be cut into strips, etc.) *but* you might be able to coax an ADF to feed Hollerith cards one at a time into it! [Otherwise, doing one at a time may exhaust your patience -- if you've ever scanned any number of photos, slides, etc. you'll understand how tedious this can be.]

Somewhere, I have a nice little 8 channel photointerrupter module *designed* for paper tape (to be read at very high speed). Put a bit of "drag" on the tape supply and then just *pull* it through the array (i.e., its speed would be relatively constant in the short term so you could even detect "no punch" positions).

There are a lot of "media" that are far less easy to "recover" data from without a functioning "mechanism". E.g., I've kept several MO drives out of fear of a reliance on *one* leaving me *screwed* in the event of its failure. Modern discs are probably equally vulnerable; no longer possible to swap platters/controllers to rescue a failed drive. And, fixing the existing controller isn't a "customer option". :<
Reply by ● June 15, 2014
On 16/06/14 00:09, Don Y wrote:
> On 6/15/2014 3:43 PM, upsidedown@downunder.com wrote:
>> On Sun, 15 Jun 2014 13:07:54 -0700, Don Y<this@is.not.me.com> wrote:
>>> On 6/15/2014 12:09 PM, upsidedown@downunder.com wrote:
>>>
>>> Much "consumer" CD/DVD media is not suited to long term storage.

Not just consumer CD/DVD media. In the late 90s NIST found it effectively impossible to predict how long a given CD would last for archival purposes. The root cause was that the manufacturers simply used whatever chemicals were available that week - and that was true for nominally "good" "reliable" brands.

> Somewhere, I have a nice little 8 channel photointerrupter module
> *designed* for paper tape (to be read at very high speed). Put a
> bit of "drag" on the tape supply and then just *pull* it through
> the array (i.e., its speed would be relatively constant in the short
> term so you could even detect "no punch" positions)

The 1000cps readers that spat paper tape 6 foot horizontally into a large hopper (paper cuts? what paper cuts?) had /nine/ channels. The sprocket hole acted as a clock for the 8 data bits.
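The sprocket-as-clock scheme described here can be modelled in a few lines of Python: every valid frame has the sprocket hole punched, so its presence gates the frame and the eight data channels are assembled into a byte. This is purely an illustrative toy of the decoding logic; the frame layout and bit order are assumptions, not taken from any real reader.

```python
# Toy decoder for 8-channel paper tape frames. Each frame is a tuple of
# nine booleans: eight data channels plus the sprocket (clock) hole.
# Assumed layout: (d0..d7, sprocket), with d0 the least significant bit.

def decode_frames(frames):
    out = []
    for frame in frames:
        *data, sprocket = frame
        if not sprocket:            # no clock hole: not a valid frame
            raise ValueError("missing sprocket hole")
        byte = 0
        for bit, punched in enumerate(data):
            if punched:             # a punched hole reads as a 1 bit
                byte |= 1 << bit
        out.append(byte)
    return bytes(out)

# 'H' = 0x48: holes punched in channels 3 and 6, sprocket always present
frame_H = (False, False, False, True, False, False, True, False, True)
assert decode_frames([frame_H]) == b"H"
```

A scanner-based reader in the spirit of the flatbed idea upthread would only need to threshold pixel samples into these boolean frames before handing them to a decoder like this.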
Reply by ● June 15, 2014
On Sun, 15 Jun 2014 16:09:01 -0700, Don Y <this@is.not.me.com> wrote:

>> 100 MB was a _huge_ amount of data a decade or two ago.
>
><frown> I guess it depends on what you are "holding onto".
>I still have a dozen Black Watch tapes that I've not (yet)
>bothered to "recover". That's ~1GB (gasp!) of data (in a
>cubic *foot* :< Sheesh, *core* was almost that dense! jk)

Actually three decades ago.

While installing a 300 MB 14" SMD drive (the same size as a washing machine), my toe went under this device and it was not very well for a month or two :-).
Reply by ● June 15, 2014
Hi Tom,

On 6/15/2014 4:35 PM, Tom Gardner wrote:
> On 16/06/14 00:09, Don Y wrote:
>> On 6/15/2014 3:43 PM, upsidedown@downunder.com wrote:
>>> On Sun, 15 Jun 2014 13:07:54 -0700, Don Y<this@is.not.me.com> wrote:
>>>> On 6/15/2014 12:09 PM, upsidedown@downunder.com wrote:
>>>>
>>>> Much "consumer" CD/DVD media is not suited to long term storage.
>
> Not just consumer CD/DVD media. In the late 90s NIST found
> it effectively impossible to predict how long a given CD
> would last for archival purposes. The root cause was that
> the manufacturers simply used whatever chemicals were
> available that week - and that was true for nominally "good"
> "reliable" brands.

Despite all that, I have had remarkably good results with *all* my media (not just "optical"). But, returning to Niklas' predicament, I don't have to *guarantee* the media (or its contents)!

>> Somewhere, I have a nice little 8 channel photointerrupter module
>> *designed* for paper tape (to be read at very high speed). Put a
>> bit of "drag" on the tape supply and then just *pull* it through
>> the array (i.e., its speed would be relatively constant in the short
>> term so you could even detect "no punch" positions)
>
> The 1000cps readers that spat paper tape 6 foot horizontally
> into a large hopper (paper cuts? what paper cuts?) had /nine/
> channels. The sprocket hole acted as a clock for the 8 data bits.

<shrug> I may have misremembered what this thing was like. I know when it was given to me I mused over what its purpose was -- such a "polished" and "contoured" surface between the emitter/detector pairs. And only belatedly realized. "Ah, this is a keeper!"

IIRC, you can get PPT in 5 to 8 "channels". [Of course, with constraints on the data (and *drive*), you don't *need* a clock channel...]

It, however, does nothing for *punching* tape!
Reply by ● June 15, 2014
On 6/15/2014 4:36 PM, upsidedown@downunder.com wrote:
> On Sun, 15 Jun 2014 16:09:01 -0700, Don Y<this@is.not.me.com> wrote:
>
>>> 100 MB was a _huge_ amount of data a decade or two ago.
>>
>> <frown> I guess it depends on what you are "holding onto".
>> I still have a dozen Black Watch tapes that I've not (yet)
>> bothered to "recover". That's ~1GB (gasp!) of data (in a
>> cubic *foot* :< Sheesh, *core* was almost that dense! jk)
>
> Actually three decades ago.
>
> While installing a 300 MB 14" SMD drive (the same size of an washing
> machine), my toe went under this device and it was not very well for a
> month or two :-).

Yeah, when I was in school, I used to have access to the "junk room" at DEC's facility (Maynard?). It was always amusing to imagine what the "events" were like that led to those machines being trashed (disk *crash*). "Holy Sh*t! Did you hear *that*?"

I had an old RS08 (RK08? not sure if I recall that correctly... 128KW *fixed* head, single platter 14" "word accessible" disk) about the same time. All I can say is shipping charges must have been a LOT LESS back then!! :-/