Pipelined 6502/z80 with cache and 16x clock multiplier

Started by Brett Davis December 19, 2010
In comp.arch Rob Warnock <rpw3@rpw3.org> wrote:
> Tom Knight <tk@shaggy.csail.mit.edu> wrote:
> +---------------
> | WRT reliability, we ran a time shared KA-10 processor for periods of
> | over six months, and took the system down for an update, rather than
> | having it crash (this was with the ITS operating system).
> +---------------
> TOPS-10 was often just as reliable. At the Emory Univ. Chem. Dept.,
> we ran a time shared KA-10 processor running TOPS-10 for multiple
> periods of over six months, and in one instance, over a year.
> Note: We filed a complaint with DEC Field Circus over that one,
> since we had a service contract and they were supposed to do
> preventative service more often than that!! ;-}
... I'm seeing a pattern. All you guys so far reporting these 1 y uptimes seem to've been running KA's. Must have been a good batch of transistors.
Rob Warnock wrote:
> Tom Knight <tk@shaggy.csail.mit.edu> wrote:
> +---------------
> | WRT reliability, we ran a time shared KA-10 processor for periods of
> | over six months, and took the system down for an update, rather than
> | having it crash (this was with the ITS operating system).
> +---------------
>
> TOPS-10 was often just as reliable. At the Emory Univ. Chem. Dept.,
> we ran a time shared KA-10 processor running TOPS-10 for multiple
> periods of over six months, and in one instance, over a year.
> Note: We filed a complaint with DEC Field Circus over that one,
> since we had a service contract and they were supposed to do
> preventative service more often than that!! ;-}
>
> +---------------
> | I find it remarkable that today's software is so chock a block
> | full of fixable bugs.
> +---------------
>
> My personal experience with FreeBSD has been pretty good:
>
> $ uname -mrs ; uptime
> FreeBSD 6.2-RELEASE-p4 amd64
> 12:09AM up 401 days, 22:50, 19 users, load averages: 0.03, 0.02, 0.00
> $
>
> That machine runs web, mail, SSH, and DNS servers,
> and sits right on the 'Net with no hardware firewall.
My home FreeBSD box has been running since V6.0 or so; the only downtime has been for OS upgrades. It is the DMZ host for my home network, as well as a public (IPv4 and IPv6) GPS-based NTP server:

C:\>nslookup ntp6.tmsw.no
Name:    ntp6.tmsw.no
Address: 2001:16d8:ee97::6

We have had a couple of electrical brownouts/disconnections for maintenance work over those years, but the server is an old Dell laptop, so the battery works as a built-in UPS.

Terje
--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"

kym@kymhorsell.com wrote:

> In comp.arch Tom Knight <tk@shaggy.csail.mit.edu> wrote:
> > Neither the PDP-6 nor any of the PDP-10 models had this problem. The
> > 7094 did, however, and one could hang CTSS (the time sharing system)
> > by executing an infinite indirect loop. Multics (645) could not
> > interrupt and restart, but had a timeout trap. There were
> > instructions on the PDP-10 which would never finish, even though legal
> > and (on paper) terminating, since they could reference a pattern of
> > pages which could never be in memory simultaneously. Since
>
> Now, *that* rings a bell. :)
>
> > instructions restarted each time from the beginning, they could never
> > complete execution.
I once worked on a Univac that had a jump indirect bit. When it got into a jump loop, stopping it was interesting.

First option was to hit the Stop button. It ran to the end of the current instruction, which meant that it kept running.

Second option was to power down, which meant that it did a state save at the end of the current fetch cycle and restored that state on power up. No joy.

Third option was to hit the Stop button and ground a wire from the instruction register to the console, forcing it to think it was decoding a different instruction on the next indirect jump in the loop.

Regards,

w..
--
Walter Banks
Byte Craft Limited
http://www.bytecraft.com
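The jump-indirect mechanism Walter describes, like the PDP-10 indirect bit discussed elsewhere in the thread, is easy to model. Below is a minimal sketch in Python; the word format, flag bit, and addresses are invented for illustration, not the real Univac or PDP-10 encodings. Two words flagged indirect that point at each other mean the effective-address calculation, and hence the instruction, never completes:

    # Minimal model of per-word indirect addressing: each memory word
    # carries a flag bit saying "the effective address is not here yet;
    # fetch another word". Word format and addresses are hypothetical.

    INDIRECT = 0x8000          # invented "one more level" flag bit
    ADDR_MASK = 0x7FFF

    # Two words flagged indirect, each naming the other: a cycle.
    memory = {
        0o100: INDIRECT | 0o200,
        0o200: INDIRECT | 0o100,
    }

    def effective_address(addr, limit=1_000_000):
        """Chase indirection until a word without the flag appears.
        Real hardware had no such limit: the fetch simply never
        completed."""
        for _ in range(limit):
            word = memory[addr]
            if not word & INDIRECT:
                return word & ADDR_MASK   # done: effective address found
            addr = word & ADDR_MASK       # flagged: follow one more level
        raise RuntimeError("indirection cycle: instruction never completes")

    effective_address(0o100)   # raises: the two words point at each other

On the real machines the Stop button was honored only between instructions, and an instruction stuck in its fetch cycle never reaches that point, which is why grounding a wire into the instruction register was the only way out.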
Tom Knight wrote:
> rpw3@rpw3.org (Rob Warnock) writes:
>> <kym@kymhorsell.com> wrote:
>> +---------------
>> | MitchAlsup <MitchAlsup@aol.com> wrote:
>> |> The PDP-10 had infinite indirect memory addressing--the addressed
>> |> word from memory contained a bit to indicate if another level of
>> |> indirection was to be performed.
>> |
>> | Is my memory faulty or was it possible for the (buggy) TOPS-10 to hang
>> | because of this little h/w feature?
>> +---------------
>>
>> Your memory is faulty. (Sorry.) ;-} ;-}
>
> Neither the PDP-6 nor any of the PDP-10 models had this problem. The
> 7094 did, however, and one could hang CTSS (the time sharing system)
> by executing an infinite indirect loop. Multics (645) could not
> interrupt and restart, but had a timeout trap. There were
> instructions on the PDP-10 which would never finish, even though legal
> and (on paper) terminating, since they could reference a pattern of
> pages which could never be in memory simultaneously. Since
> instructions restarted each time from the beginning, they could never
> complete execution.
>
> WRT reliability, we ran a time shared KA-10 processor for periods of
> over six months, and took the system down for an update, rather than
> having it crash (this was with the ITS operating system). I find it
> remarkable that today's software is so chock a block full of fixable
> bugs. No one seems to actually look at what caused a crash any more.
You go away for a couple of days and return inside a time machine. Or a rest home.

I remember a big dispute between the customer and FS concerning UC Berkeley's PDP-6. The customer wanted PM time counted as downtime and FS didn't. Memory margins and oiling the teletype. The good old days.
In article <igi0rj$kdm$1@news.eternal-september.org>,
Jim Stewart  <jstewart@jkmicro.com> wrote:
>Tom Knight wrote:
>> rpw3@rpw3.org (Rob Warnock) writes:
>>> <kym@kymhorsell.com> wrote:
>>
>> Neither the PDP-6 nor any of the PDP-10 models had this problem. The
>> 7094 did, however, and one could hang CTSS (the time sharing system)
>> by executing an infinite indirect loop. Multics (645) could not
>> interrupt and restart, but had a timeout trap. There were
>> instructions on the PDP-10 which would never finish, even though legal
>> and (on paper) terminating, since they could reference a pattern of
>> pages which could never be in memory simultaneously. Since
>> instructions restarted each time from the beginning, they could never
>> complete execution.
>>
>> WRT reliability, we ran a time shared KA-10 processor for periods of
>> over six months, and took the system down for an update, rather than
>> having it crash (this was with the ITS operating system). I find it
>> remarkable that today's software is so chock a block full of fixable
>> bugs. No one seems to actually look at what caused a crash any more.
I sent a famous service request to Prime sometime in 1987, complaining that the uptime counter, incrementing 330 times a second, wrapped around in its 32-bit container after 150 or so days. They framed the complaint and put it on the wall.
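That wraparound figure checks out. A quick back-of-the-envelope, using the 330 Hz tick rate quoted above:

    # Check of the Prime uptime-counter wraparound: a 32-bit counter
    # ticking 330 times per second overflows after 2**32 ticks.
    ticks_to_wrap = 2 ** 32          # states in a 32-bit counter
    tick_rate_hz = 330               # rate quoted in the post
    seconds = ticks_to_wrap / tick_rate_hz
    print(seconds / 86400)           # -> ~150.6 days, i.e. "150 or so"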
>You go away for a couple of days and return
>inside a time machine. Or a rest home.
>
>I remember a big dispute between the customer
>and FS concerning UC Berkeley's PDP-6. The
>customer wanted PM time counted as downtime
>and FS didn't. Memory margins and oiling the
>teletype. The good old days.
Modern, well designed servers don't have many problems. You have to do the occasional upgrade and reboot as part of maintenance, but the MTBF of a single server is around 5 years. More, if you go for redundancy everywhere and hot-swap parts.

OS crashes in Linux, BSD, etc. from LTS releases are also very rare, apart from those caused by hardware issues.

-- mrr
On Tue, 11 Jan 2011 21:21:48 +0100, Morten Reistad <first@last.name> wrote:
> Modern, well designed servers don't have many problems. You have to
> do the occasional upgrade and reboot as part of maintenance, but
> the MTBF of a single server is around 5 years. More, if you
> go for redundancy everywhere and hot-swap parts.

> OS crashes in Linux, BSD, etc. from LTS releases are also very rare,
> apart from those caused by hardware issues.

> -- mrr
IIRC, a VAX/VMS machine in Sweden has been running for more than 20 years.

--
Best Regards,
Ulf Samuelsson
On Jan 11, 7:36 pm, Ulf Samuelsson <u...@fake.atmel.com> wrote:
> On Tue, 11 Jan 2011 21:21:48 +0100, Morten Reistad <fi...@last.name>
> wrote:
>
> > Modern, well designed servers don't have many problems. You have to
> > do the occasional upgrade and reboot as part of maintenance, but
> > the MTBF of a single server is around 5 years. More, if you
> > go for redundancy everywhere and hot-swap parts.
> > OS crashes in Linux, BSD, etc. from LTS releases are also very rare,
> > apart from those caused by hardware issues.
> > -- mrr
>
> IIRC, a VAX/VMS machine in Sweden has been running for more than 20 years.
I think if I were a VAX machine in Sweden, I would run as much and as far as I could... ;^)

Rick
On Wed, 12 Jan 2011 01:36:53 +0100, Ulf Samuelsson <ulf@fake.atmel.com> wrote:

>On Tue, 11 Jan 2011 21:21:48 +0100, Morten Reistad <first@last.name>
>wrote:
>> Modern, well designed servers don't have many problems. You have to
>> do the occasional upgrade and reboot as part of maintenance, but
>> the MTBF of a single server is around 5 years. More, if you
>> go for redundancy everywhere and hot-swap parts.
>
>> OS crashes in Linux, BSD, etc. from LTS releases are also very rare,
>> apart from those caused by hardware issues.
>
>> -- mrr
>
>IIRC, a VAX/VMS machine in Sweden has been running for more than 20 years.
That must have been the uptime for an entire VAX cluster. Individual machines could have been booted several times, but at least one machine has been running all the time.

To upgrade the OS on the VAX cluster, take one machine off the cluster, upgrade that machine to the next version, and then rejoin it to the cluster. Repeat for the other machines. Machines in the cluster could run OS versions separated by only one step, so if the upgrade spanned several versions, several cycles of this rolling update were needed.
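A minimal sketch of that constraint, in Python purely for illustration; the node names and the leave/install/rejoin helpers are hypothetical stand-ins, not any real VMS cluster interface:

    # Hedged sketch of a VMS-style rolling cluster upgrade: members
    # may differ by at most one OS version, so a multi-version jump
    # takes several full passes over the nodes.

    def leave_cluster(node):
        print(f"{node}: leaving cluster (remaining nodes keep serving)")

    def install_os(node, version):
        print(f"{node}: installing OS version {version}")

    def rejoin_cluster(node):
        print(f"{node}: rejoining cluster")

    def rolling_upgrade(nodes, current, target):
        """Move every node from `current` to `target` one version
        step per pass, so the cluster as a whole never goes down and
        no two members are ever more than one version apart."""
        for version in range(current + 1, target + 1):
            for node in nodes:
                leave_cluster(node)
                install_os(node, version)
                rejoin_cluster(node)

    # Two full passes are needed to go from version 5 to version 7:
    rolling_upgrade(["vax1", "vax2", "vax3"], current=5, target=7)

The point of the one-step-per-pass rule is that the cluster stays within the supported version skew at every moment, which is what lets the uptime record survive the upgrades.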
On Wed, 12 Jan 2011 09:08:49 +0200, upsidedown@downunder.com wrote:
> On Wed, 12 Jan 2011 01:36:53 +0100, Ulf Samuelsson
> <ulf@fake.atmel.com> wrote:
> >On Tue, 11 Jan 2011 21:21:48 +0100, Morten Reistad <first@last.name>
> >wrote:
> >> Modern, well designed servers don't have many problems. You have to
> >> do the occasional upgrade and reboot as part of maintenance, but
> >> the MTBF of a single server is around 5 years. More, if you
> >> go for redundancy everywhere and hot-swap parts.
> >
> >> OS crashes in Linux, BSD, etc. from LTS releases are also very rare,
> >> apart from those caused by hardware issues.
> >
> >> -- mrr
> >
> >IIRC, a VAX/VMS machine in Sweden has been running for more than 20 years.
>
> That must have been the uptime for an entire VAX cluster. Individual
> machines could have been booted several times, but at least one
> machine has been running all the time.
>
> To upgrade the OS on the VAX cluster, take one machine off the
> cluster, upgrade that machine to the next version, and then rejoin it
> to the cluster. Repeat for the other machines. Machines in the
> cluster could run OS versions separated by only one step, so if the
> upgrade spanned several versions, several cycles of this rolling
> update were needed.
No, it was a single machine controlling something related to the railroad. Today, you would probably have used an embedded system.

--
Best Regards,
Ulf Samuelsson
On Tue, 11 Jan 2011 11:17:33 -0500, Walter Banks <walter@bytecraft.com> wrote:
   [...]
> I once worked on a Univac that had a jump indirect bit. When
> it got into a jump loop, stopping it was interesting.
>
> First option was to hit the Stop button. It ran to the end of the
> current instruction, which meant that it kept running.
>
> Second option was to power down, which meant that it did a
> state save at the end of the current fetch cycle and restored
> that state on power up. No joy.
>
> Third option was to hit the Stop button and ground a wire from the
> instruction register to the console, forcing it to think it was
> decoding a different instruction on the next indirect jump
> in the loop.
Our APL timesharing system ran on an IBM 370/155. Since it did cause its share of OS failures, the timesharing system support group tended to be called first when OS/360 MVT hung.

One day the system operator called us in to find out why nothing was working, and we were astounded to see that hitting the STOP button had no visible effect. The 370/155 was one of the last of the IBM mainframes that still had one of those huge Panel-o-Lamps display panels, and normally they would display (e.g.) the current instruction address and perhaps the status of various data buses in what appeared to be fairly random patterns. Not this time -- they were _pulsing_, and in _groups_, like some sort of Hollywood special effect. This, added to the suddenly non-operational STOP button, gave us a _very_ strange feeling for a few moments. Then someone hit SYSTEM RESET, the other "stop" button, and the machine finally stopped.

On the 370/155 all instructions were multiples of two bytes long, 16-bit "halfwords" in IBM-speak. It seems that somehow the Program Check New PSW, the pointer that got loaded when some random instruction tried to (e.g.) divide by zero, had been overwritten and had an odd address in it. The next time a Program Check exception occurred, the 370/155, following its carefully-programmed microcode instructions, loaded its Program Check New PSW. Immediately -- before the instruction _at_ that address was loaded -- the odd address was recognized as a Program Check condition, and the Program Check New PSW was loaded. Immediately -- before the instruction _at_ that address...

In other words, the machine was caught in a microcode loop less than one S/370 instruction long, and apparently the microcode test for the STOP button had been left out of that loop.

Ah, I miss Das Blinkenlights... <grin!>

We now return you to your previous discussion.

Frank McKenney
--
`It's all about bossing computers around. Users have to say "Please".
Programmers get to say "Do what I want NOW or the hard disk gets it".'
-- Richard Heathfield on the nature of programming
--
Frank McKenney, McKenney Associates
Richmond, Virginia / (804) 320-4887
Munged E-mail: frank uscore mckenney ayut mined spring dawt cahm (y'all)
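A toy model of that trap loop, with drastically simplified PSW handling and invented names; the one real fact it leans on is that S/370 instruction addresses must be halfword aligned, so an odd address in a new PSW is itself a program-check condition:

    # Toy model of the S/370 sub-instruction trap loop described
    # above. All names and the 16-iteration cap are invented; real
    # microcode had no cap, which is why only SYSTEM RESET worked.

    program_check_new_psw = 0x4A01    # overwritten with an odd address

    def deliver_program_check(new_psw_address, stop_pressed, max_iters=16):
        """Microcode loads the Program Check New PSW. If its address
        is odd, that is recognized as another program check *before*
        any instruction is fetched -- and STOP is sampled only between
        instructions, so `stop_pressed` never gets a look-in."""
        for _ in range(max_iters):
            if new_psw_address % 2 == 0:
                return new_psw_address    # valid PSW: execution resumes
            # Odd address: specification exception -> program check ->
            # reload the same (still odd) Program Check New PSW.
        print("caught in a loop shorter than one instruction;"
              " only SYSTEM RESET breaks out")

    deliver_program_check(program_check_new_psw, stop_pressed=True)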
