"del cecchi" <dcecchi.nojunk@att.net> writes:> Like hardware doesn't have bugs or "errata". Go check Intel's web site.Or for an example of hardware errors in a mature industry, consider the hundreds or thousands of car recalls that occur globally each year.
Self restarting property of RTOS-How it works?
Started by ●February 7, 2005
Reply by ●February 11, 2005
Reply by ●February 11, 2005
>> Like hardware doesn't have bugs or "errata". Go check Intel's web site.
> Or for an example of hardware errors in a mature industry, consider the
> hundreds or thousands of car recalls that occur globally each year.

Slightly different issue. This occurs because, with more units deployed, you manage to hit those areas of failure that were previously inaccessible, due to simple statistics. Many of the software failures I've seen, on the other hand, are of the trivial kind a cursory user would find in a few minutes to days.

(In)famous was one of the major VMS releases - V5, if I remember correctly - where it took three people from our institute literally minutes to find a dozen bugs or so, ranging from the annoying - changing the password gave the success message twice - to the grave - doing something slightly unexpected on the command line led to a reproducible system crash. We wondered what that year of internal and external field testing had been wasted on.

Jan
Reply by ●February 11, 2005
Terje Mathisen wrote:
> I have noted previously here in c.a that most of the _really_ good
> low-level/systems programmers I know seem to have an engineering instead
> of computer science background.
>
> A coincidence?

I have also noticed that the programmers from a computer science background tend to be much better at working out a system architecture and planning first.

My hypothesis: the more detail-oriented people tend to gravitate toward the engineering side, and tend to excel at detail-oriented tasks, while the computer science people tend to be better at big picture and abstract concepts.

Just MHO.

Ed
Reply by ●February 11, 2005
Neil Kurzman <nsk@mail.asb.com> writes:
> 1. Crashing is always a software problem. No input should cause the code
> to get lost.
> 2. Restarting a crashed task may be a bad idea. For the Therac example,
> suppose the dose task dies. If you restart it, does it try to give
> another dose? What if it keeps crashing and restarting? Can the
> system figure out if the restart is helping or making it worse?

The Therac problem was that no one considered what would happen if the operator did an ABA-type mode change without waiting for either step to complete. The result was that the real world was out of step with the software's internal state, which negated the effect of the safety checks.

Although no input SHOULD cause a fault, and all Florida swampland should be wonderful, in real systems there are some things where you need to act ASAP to prevent interesting times. Railway switching, BOS converter injection, plating power supplies, to name a quick few.

It is not the system that figures out whether a task is restarted or not; that is for the designer. The system just has to implement it.

--
Paul Repacholi                  1 Crescent Rd.,
+61 (08) 9257-1001              Kalamunda.
                                West Australia 6076
comp.os.vms - The Older, Grumpier Slashdot
Raw, Cooked or Well-done, it's all half baked.
EPIC, The Architecture of the future, always has been, always will be.
Reply by ●February 11, 2005
In article <Kw1Pd.9131$oO.4160@newsread2.news.atl.earthlink.net>, Ed Beroset <beroset@mindspring.com> writes:
|> Terje Mathisen wrote:
|> >
|> > I have noted previously here in c.a that most of the _really_ good
|> > low-level/systems programmers I know seem to have an engineering instead
|> > of computer science background.
|> >
|> > A coincidence?
|>
|> I have also noticed that the programmers from a computer science
|> background tend to be much better at working out a system architecture
|> and planning first.

!!!! My experience is that they are generally CATASTROPHIC at that; MUCH worse than even engineers :-(

Oh, yes, they work out an 'architecture' and a 'plan', but it is usually based on a completely unrealistic view of the world, where nothing ever goes wrong and nobody ever makes a mistake. The worst fault is usually that they regard it as reasonable to omit all error recovery, diagnosis and robustness, and claim that going bananas is a perfectly reasonable response to a natural human error. Also, they regard it as perfectly reasonable to produce interfaces that positively encourage such errors, and fail to see that it is the responsibility of a designer to ensure that the product is (as far as is possible) easy to use and fail-safe in operation.

There are a FEW meritorious exceptions, and some computer science academics who would love to change this but are constrained by the pressure to produce graduates with the widest possible (theoretical) knowledge in the shortest possible time. You CAN'T teach an engineering attitude in a short lecture course - it needs practical training, and lots of it.

|> My hypothesis: the more detail-oriented people tend to gravitate toward
|> the engineering side, and tend to excel at detail-oriented tasks, while
|> the computer science people tend to be better at big picture and
|> abstract concepts.

I.e. producing ridiculously unrealistic designs and leaving all the real work to someone else.

Regards,
Nick Maclaren.
Reply by ●February 11, 2005
Ed Beroset wrote:
> Terje Mathisen wrote:
>
>> I have noted previously here in c.a that most of the _really_ good
>> low-level/systems programmers I know seem to have an engineering
>> instead of computer science background.
>>
>> A coincidence?
>
> I have also noticed that the programmers from a computer science
> background tend to be much better at working out a system architecture
> and planning first.
>
> My hypothesis: the more detail-oriented people tend to gravitate toward
> the engineering side, and tend to excel at detail-oriented tasks, while
> the computer science people tend to be better at big picture and
> abstract concepts.
>
> Just MHO.
>
> Ed

Those comp-sci geniuses are the ones that gave us a software paradigm that is susceptible to attacks as simple as buffer overruns, and that stores data in randomly scattered chunks linked by pointers. And put multiple unrelated locks in the same cache line? Are those the ones you are talking about?

Del Cecchi
Reply by ●February 11, 2005
In article <3723urF57etf3U1@individual.net>, elder.costa@terra.com.br says...
> del cecchi wrote:
> >
> > Like hardware doesn't have bugs or "errata". Go check Intel's web
> > site.
> >
> > Especially new hardware. sheesh
>
> First of all, I dare to say this kind of hardware is mostly if not
> totally software, as these processors are synthesized from (Verilog?
> VHDL?) building blocks. Therefore some or most of software engineering
> applies, I guess.

Not really. A lot of the processor may be synthesized from HDL, but much is in custom circuits, with perhaps an HDL model of the custom circuit for simulation. Either way, you can't reboot them from a new HDL file; the silicon's got to change.

> I think Ganssle meant much more the mindset than the knowledge/expertise
> area itself. Hardware design and development also carries its own set of
> "bugs" and bad practices though (I wonder how many engineers design
> based only on components' typical figures.)

Sure. Statistical models are used in the design process. Sometimes one designs to a number that's even better than the process mean (I suppose that's your definition of "typical"). If one designed for worst case, nothing would work, because it would never be built.

--
Keith
Reply by ●February 11, 2005
israel t wrote:
> "del cecchi" <dcecchi.nojunk@att.net> writes:
>
>> Like hardware doesn't have bugs or "errata". Go check Intel's
>> web site.
>
> Or for an example of hardware errors in a mature industry,
> consider the hundreds or thousands of car recalls that occur
> globally each year.

Reminds me of the last GM car I will ever own, which I bought about 25 years ago. Within the first 500 miles it had seized the front brakes (that took less than 10) and destroyed a fan belt pulley (with no replacements in the parts stream, it was jury-rigged with a weld; two months later they had a replacement pulley). Within 10,000 miles a front door had literally fallen off. By 60,000 miles the engine block was cracked due to a non-functional freeze plug (this was also due to a careless mechanic who changed the coolant to pure water in the summer while repairing the failed heater).

--
"If you want to post a followup via groups.google.com, don't use
the broken "Reply" link at the bottom of the article. Click on
"show options" at the top of the article, then click on the
"Reply" at the bottom of the article headers." - Keith Thompson
Reply by ●February 11, 2005
prep@prep.synonet.com wrote:
> The Therac problem was that no one considered what would happen if the
> operator did an ABA-type mode change without waiting for either step
> to complete.

Actually, they did consider it and concluded that it was impossible - and they were (sort of) right. The Therac had separate, well-tested code that ran the machine, and separate, not-so-well-tested code that ran the operator interface. As originally shipped, it was impossible to complete the data input that fast. Then they started getting complaints about having to re-enter the data in a bunch of fields every time, so they put in a feature where a tab would give you the same input as was used in the last run. Because it was in the operator interface code, it didn't get tested as well. What testing they did do failed to show the bug, because developers tend to watch the screen looking for odd behavior, while an actual operator hits the tab key as fast as he/she can in order to get to the next run.

I still think that the biggest error was taking out the microswitch: hardware that wouldn't let the machine operate if the mechanical moving parts had not arrived where they should be. Just sending the move command and waiting N seconds was an unacceptable system design decision, whether or not the code was buggy. The cryptic error messages, and the ability to keep trying to dose the same patient over and over in response to an error message, didn't help things.

I have worked on systems for aircraft where the software engineer was invited to write malicious code that would damage the hardware, with a reward of an extra week of vacation time for writing such code. Then the hardware engineer was invited to induce a single fault that would cause the real software to lock up, go crazy, etc., with the same reward offer.

--
Guy Macon <http://www.guymacon.com/>
Reply by ●February 11, 2005
Nick Maclaren wrote:
> !!!! My experience is that they are generally CATASTROPHIC at that;
> MUCH worse than even engineers :-(
>
> Oh, yes, they work out an 'architecture' and a 'plan', but it is
> usually based on a completely unrealistic view of the world, where
> nothing ever goes wrong and nobody ever makes a mistake. The worst
[snip]

Nonsense. You can't tar an entire class of people based on the worst examples. And I say that as an engineer who knows enough to let the computer science fellows do what they do best.