EmbeddedRelated.com
Forums

Self restarting property of RTOS-How it works?

Started by Unknown February 7, 2005
>> without receiving a packet.  What is the probability that no packets
>> will arrive in an interval of five seconds?

You might also want to know whether packet traffic is independent. Many
networks consist of a large population of clients that wind up being
synchronized by a few servers. For example, if a file server stalls for
several seconds, how likely are all the other nodes to fall silent?

-- mac the naïf
CBFalconer wrote:
> Ed Beroset wrote:
>> Del Cecchi wrote:
>>> Ed Beroset wrote:
>>>
>>>> I have also noticed that the programmers from a computer science
>>>> background tend to be much better at working out a system
>>>> architecture and planning first.
>>
>> [...]
>>>
>>> Those comp-sci geniuses are the ones that gave us a software
>>> paradigm that is susceptible to attacks as simple as buffer
>>> overruns, and store data in randomly scattered chunks linked by
>>> pointers.  And put multiple unrelated locks in the same cache
>>> line?  That the ones you are talking about?
>>
>> It's interesting to learn that no engineers were ever involved in
>> building such flaws.
>>
>> My background happens to be more in the engineering than the
>> computer science end of things, but I don't share your evident
>> contempt for the field.  Here's an example: An embedded
>> communication system receives packet-based messages of varying
>> lengths at an average rate of 100 packets per minute, but
>> asynchronously.  The system also checks its timing against the
>> recovered clock from the messages, which it can easily keep
>> synchronized within limits as long as it doesn't go too long
>> without receiving a packet.  What is the probability that no
>> packets will arrive in an interval of five seconds?
>>
>> I can answer that question easily because I've studied a little
>> computer science.  Can you?  If not, how can you properly
>> engineer the system?
>
> If its internal clock can't stay synchronized over 5 seconds, or
> even much longer, I think there is something wrong with the
> hardware design.  Of course you haven't defined synchronized.  I
> certainly couldn't answer it, but I would know enough to hunt up
> queueing theory, which is quite mature and predates computers.
> Whatever the synchronizing requires, I would attempt to put
> something in the transmitter system to ensure satisfaction.
> Statistics can always burn you.
>
> But you are asking the wrong question.  However, if you asked what
> is the probability that 10 packets will arrive in 5 seconds, you
> would have a good point.  Again, the place to look is queueing
> theory.  I do know that the design is going to require some sort of
> buffering, and if there is nothing else critical and resources are
> pre-established I will assign as much buffer space as possible
> (assuming no other similar requirements) and not bother with the
> details.
I think I misconstrued your 'synchronized clock'.  You are not talking
about time, but about a data clock, i.e. a strobe.  In this case I
consider the whole design flawed, because once more I don't want to
trust to statistics.  The transmitter should be emitting a preamble to
synchronize the clocks.

--
"If you want to post a followup via groups.google.com, don't use the
broken "Reply" link at the bottom of the article.  Click on "show
options" at the top of the article, then click on the "Reply" at the
bottom of the article headers." - Keith Thompson
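CBFalconer's counting question has a closed-form answer if you grant the
Poisson arrival model the rest of the thread argues over.  A minimal
sketch of the arithmetic in C, illustrative only and not from any post,
using the 100 packets/minute figure:

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double lambda = 100.0 / 60.0;   /* packets per second       */
    const double t = 5.0;                 /* window length in seconds */
    const double mu = lambda * t;         /* expected count, ~8.33    */

    /* Poisson pmf: P(N = k in t seconds) = mu^k * exp(-mu) / k!
     * (tgamma(k + 1) computes k! for integer k).                     */
    for (int k = 0; k <= 15; k += 5)
        printf("P(N = %2d in 5 s) = %.4f\n",
               k, pow(mu, k) * exp(-mu) / tgamma(k + 1.0));
    return 0;
}

The k = 10 case he raises comes out near 0.11, and the k = 0 case is
Ed's five-second silence, about 2.4e-4 per window.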
In article <110sib3f4693c5e@news.supernews.com>,
Bill Sommerfeld  <sommerfeld@hamachi.org> wrote:
> Terje Mathisen wrote:
>
>> The packet gaps will most probably follow a Poisson distribution,
>
> Poisson is unlikely to be a good model unless there are a large number
> of independent flows passing through the link.
>
> Given the problem's parameters I'd expect only a few flows to be present.
It can be better than might appear, even in such a case. But you (and others) are perfectly correct that it is not necessarily a good model - it depends.
> The distribution of inter-packet gaps will depend on the behavior of
> the source, the sink, and on other factors such as the end-to-end
> round trip delay between the systems.
Yes. And even more on interactions between the packets, whether in the sources, the sinks or the transport.
> citeseer found a lot of possible papers on the topic.  this one
> seems to be directly relevant:
>
> http://www.cercs.gatech.edu/tech-reports/tr2004/git-cercs-04-09.pdf
>
> and this looks like one of the first to say "hey, poisson doesn't fit":
>
> http://www.cse.ohio-state.edu/~jain/papers/train.htm
That's only because of ignorance of previous work.  Most of the serious
work in this field was done decades ago by statisticians working in the
telecommunications industry - there was a vast body of knowledge when I
did my diploma in statistics (c. 1970), with both a great deal of theory
and experimental data.

Regards,
Nick Maclaren.
Please note that the attributions got messed up below.  In particular I
didn't say the statement attributed to me that starts with "It's
interesting..." and continues with "my background...."

top posting seemed appropriate here.
"Terje Mathisen" <terje.mathisen@hda.hydro.com> wrote in message
news:culakb$cm5$1@osl016lin.hda.hydro.com...
> Ed Beroset wrote:
>> Del Cecchi wrote:
>>
>> It's interesting to learn that no engineers were ever involved in
>> building such flaws.
>
> Of course they were!  All professions make mistakes, only in some
> fields the consequences are more severe, and the cost of fixing it
> much higher.
>
>> My background happens to be more in the engineering than the computer
>> science end of things, but I don't share your evident contempt for
>> the field.  Here's an example: An embedded communication system
>> receives packet-based messages of varying lengths at an average rate
>> of 100 packets per minute, but asynchronously.  The system also
>> checks its timing against the recovered clock from the messages,
>> which it can easily keep synchronized within limits as long as it
>> doesn't go too long without receiving a packet.  What is the
>> probability that no packets will arrive in an interval of five
>> seconds?
>
> Not enough info!
>
> a) What is the packet arrival time distribution?  Should we make the
> assumptions as for many independent producers, or does the chance of
> generating a new packet increase as a function of the time since the
> last?
>
> b) What is the average packet length (in milliseconds)?  This is of
> course needed to be able to calculate the average and expected gap
> lengths.
>
>> I can answer that question easily because I've studied a little
>> computer science.  Can you?  If not, how can you properly engineer
>> the system?
>
> With my old telecomms classes 25+ years behind me, I still remember
> enough to say that this looks like a classic queueing theory question.
> Erlang to the rescue.  (But only if the same simplifications hold
> true!)
>
> The packet gaps will most probably follow a Poisson distribution; with
> just ~1.6 packets/second on average, the chance of a 5 second gap
> never happening seems pretty bad, i.e. this would be bad engineering.
>
> Anyway, this is most definitely an engineering problem (at least at my
> alma mater), not CS.
>
> Terje
>
> --
> - <Terje.Mathisen@hda.hydro.com>
> "almost all programming can be viewed as an exercise in caching"
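Taking Terje's Poisson assumption at face value, the bad news he is
gesturing at can be worked through directly.  A minimal sketch,
illustrative only (the per-hour gap count is a rough figure, not from
any post):

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double lambda = 100.0 / 60.0;   /* packets per second       */
    double p_empty = exp(-lambda * 5.0);  /* P(one 5 s window empty)  */

    /* ~1.67 pkt/s means roughly 6000 inter-packet gaps per hour, so
     * the chance of never seeing a 5 s gap decays fast over time.    */
    double p_hour = 1.0 - pow(1.0 - p_empty, 6000);

    printf("P(empty 5 s window)        = %.6f\n", p_empty);
    printf("P(some 5 s gap in an hour) = %.3f\n", p_hour);
    return 0;
}

Around 2.4e-4 per window sounds safe, but over an hour the gap shows up
with probability near 0.76; designing as if it never happens would
indeed be bad engineering.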
Ed Beroset <beroset@mindspring.com> writes:

> Del Cecchi wrote:
>> Ed Beroset wrote:
>>
>>> I have also noticed that the programmers from a computer science
>>> background tend to be much better at working out a system
>>> architecture and planning first.
> [...]
>>>
>> Those comp-sci geniuses are the ones that gave us a software paradigm
>> that is susceptible to attacks as simple as buffer overruns, and
>> store data in randomly scattered chunks linked by pointers.  And put
>> multiple unrelated locks in the same cache line?  That the ones you
>> are talking about?

> It's interesting to learn that no engineers were ever involved in
> building such flaws.

I think it's extremely unfair to blame comp-sci for linked lists and
buffer overflows; most of that stuff was invented before comp-sci was
being taught.

> I can answer that question easily because I've studied a little
> computer science.  Can you?  If not, how can you properly engineer
> the system?

My favourite example: people without comp-sci hardly ever get floating
point comparison right.

Casper
--
Expressed in this posting are my opinions.  They are in no way related
to opinions held by my employer, Sun Microsystems.  Statements on Sun
products included here are not gospel and may be fiction rather than
truth.
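For concreteness, here is the kind of comparison Casper means, as a
minimal sketch; the helper and its tolerances are illustrative choices,
not anything from his post.  Naive == fails after rounding; a tolerance
scaled to the operands' magnitude, with an absolute floor near zero, is
the usual fix.

#include <stdio.h>
#include <math.h>
#include <float.h>

/* Approximate equality: relative tolerance for large magnitudes,
 * absolute tolerance so comparisons near zero still behave.          */
static int nearly_equal(double a, double b, double rel_tol, double abs_tol)
{
    double diff  = fabs(a - b);
    double scale = fmax(fabs(a), fabs(b));
    return diff <= fmax(rel_tol * scale, abs_tol);
}

int main(void)
{
    double x = 0.1 + 0.2;     /* 0.30000000000000004 in binary64 */

    printf("x == 0.3          -> %d\n", x == 0.3);           /* 0 */
    printf("nearly_equal(...) -> %d\n",
           nearly_equal(x, 0.3, 1e-9, DBL_EPSILON));         /* 1 */
    return 0;
}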
"Casper H.S. Dik" <Casper.Dik@Sun.COM> wrote in message
news:420e7916$0$28988$e4fe514c@news.xs4all.nl...
> Ed Beroset <beroset@mindspring.com> writes:
>
>> Del Cecchi wrote:
>>> Ed Beroset wrote:
>>>
>>>> I have also noticed that the programmers from a computer science
>>>> background tend to be much better at working out a system
>>>> architecture and planning first.
>> [...]
>>>>
>>> Those comp-sci geniuses are the ones that gave us a software
>>> paradigm that is susceptible to attacks as simple as buffer
>>> overruns, and store data in randomly scattered chunks linked by
>>> pointers.  And put multiple unrelated locks in the same cache
>>> line?  That the ones you are talking about?
>
>> It's interesting to learn that no engineers were ever involved in
>> building such flaws.
>
> I think it's extremely unfair to blame comp-sci for linked lists and
> buffer overflows; most of that stuff was invented before comp-sci
> was being taught.

But who was it that put linked lists and fixed size buffers with
undefined size inputs in widely used software?  Engineers?

I can't imagine just reading until end of record with no control or
checking on the input.  A program that dies due to the ping of death?
Who wrote this stuff?  Did they test it?  And years later we still
have it?

I'm not trying to insult folks or start a flame war but some of these
things boggle the mind.

del cecchi
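The contrast del cecchi is complaining about fits in a few lines.  A
minimal sketch, illustrative and not taken from any of the software he
has in mind: the unbounded read is the classic overrun, while the
bounded one cannot overflow no matter what arrives.

#include <stdio.h>
#include <string.h>

#define RECORD_MAX 128

/* Unsafe: gets() copies until end of record with no bound at all --
 * the classic buffer overrun (removed from the language in C11):
 *
 *     char rec[RECORD_MAX];
 *     gets(rec);
 *
 * Safe: fgets() never writes more than the size passed in, so an
 * oversized record is truncated instead of smashing the stack.       */
int main(void)
{
    char rec[RECORD_MAX];

    if (fgets(rec, sizeof rec, stdin) == NULL)
        return 1;                        /* EOF or read error   */

    rec[strcspn(rec, "\n")] = '\0';      /* strip the newline   */
    printf("record: %s\n", rec);
    return 0;
}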
In article <377hvaF59pmgoU1@individual.net>,
del cecchi <dcecchi.nojunk@att.net> wrote:
> >"Casper H.S. Dik" <Casper.Dik@Sun.COM> wrote in message >news:420e7916$0$28988$e4fe514c@news.xs4all.nl... >> >> It think it's extremely unfair to blame comp-sci for linked lists and >> and buffer overflows; most of that stuff was invented before comp-sci >> was being taught. > >But who was it that put linked lists and fixed size buffers with >undefined size inputs in widely used software? Engineers?
Firstly, please separate the two.  There is nothing wrong with using
linked lists appropriately.  They should NOT be used when a necessary
primitive is to index by entry number, and you need to take care to
avoid fragmentation, but that is about all.  The latter was widespread
in the 1960s in commercial systems written in various assemblers.
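A tiny sketch of the tradeoff Nick names (purely illustrative):
indexing entry n in an array is one address computation, while a linked
list has to walk n nodes, so a list is the wrong structure when
index-by-number is a required primitive.

#include <stdio.h>
#include <stddef.h>

struct node {
    int val;
    struct node *next;
};

/* O(1): one address computation. */
static int array_nth(const int *a, size_t n)
{
    return a[n];
}

/* O(n): chases n pointers, touching up to n cache lines on the way. */
static int list_nth(const struct node *head, size_t n)
{
    while (n-- > 0)
        head = head->next;
    return head->val;
}

int main(void)
{
    int a[4] = { 10, 20, 30, 40 };
    struct node n3 = { 40, NULL }, n2 = { 30, &n3 },
                n1 = { 20, &n2 }, n0 = { 10, &n1 };

    printf("%d %d\n", array_nth(a, 2), list_nth(&n0, 2)); /* 30 30 */
    return 0;
}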
> I can't imagine just reading until end of record with no control or
> checking on the input.  A program that dies due to the ping of death?
The latter is provably insoluble, though it is possible to write code that is resistant to it.
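What such resistant code looks like, as a minimal sketch; the two-byte
length header is a hypothetical layout, not any real protocol.  The
rule is to never trust a claimed length further than the bytes actually
in hand, which is the check the ping-of-death reassembly code lacked.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define PAYLOAD_MAX 1500u   /* what this receiver is sized to accept */

struct packet {
    uint16_t claimed_len;              /* length field from the header */
    uint8_t  payload[PAYLOAD_MAX];
};

/* Returns 0 and copies nothing unless every bound checks out. */
static int parse_packet(const uint8_t *buf, size_t buf_len,
                        struct packet *out)
{
    if (buf_len < 2)                         /* too short for a header */
        return 0;

    uint16_t claimed = (uint16_t)(buf[0] << 8 | buf[1]);

    if (claimed > buf_len - 2)               /* lies about its length  */
        return 0;
    if (claimed > PAYLOAD_MAX)               /* more than we can hold  */
        return 0;

    out->claimed_len = claimed;
    memcpy(out->payload, buf + 2, claimed);  /* bounded by both checks */
    return 1;
}

int main(void)
{
    const uint8_t ok[]  = { 0x00, 0x03, 'a', 'b', 'c' };  /* honest     */
    const uint8_t bad[] = { 0xff, 0xff, 'a' };            /* claims 64K */
    struct packet p;

    printf("ok:  %d\n", parse_packet(ok,  sizeof ok,  &p));  /* 1 */
    printf("bad: %d\n", parse_packet(bad, sizeof bad, &p));  /* 0 */
    return 0;
}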
> Who wrote this stuff?  Did they test it?  And years later we still
> have it?
Computer scientists. Students. Employees of software houses. etc. Generally, they didn't test it. And, as with the MVT Linkage Editor, we had it 20 years after it should have been buried with a stake through the heart of the last listing.
> I'm not trying to insult folks or start a flame war but some of these
> things boggle the mind.
They do, indeed.  But your viewpoint of what happened and who was
responsible is a bit simplistic.  It is very messy, and only SOME of
the blame should be assigned to computer scientists.  Here is a very
rough summary of one viewpoint on it:

Back around 1970, most computer scientists damned Fortran for being
unreliable (i.e. uncheckable), and supported Pascal, Lisp etc. (to
taste).  Now, they were unfair to blame Fortran, as it WAS checkable,
but had a point about the programming styles.

In parallel, AT&T Bell Labs produced a semi-portable assembler and
computer scientists' experimental bench (C and Unix), with the full
knowledge and conscious decision that diagnostics, robustness and so on
were largely omitted both for clarity and to allow the experimenters
maximum flexibility.

In the 1970s, the first generation of people who had been trained as
computer scientists became professors etc., and regrettably many of
them took the attitude that it was someone else's business to turn
their leading-edge ideas and demonstrations into real products.  The
"someone else" was assumed to be computing service staff, vendors'
engineers etc.  Those people brought C and Unix into the mainstream.

Round about 1980, mainly in the USA and UK, the governments started
demolishing central computing services and (in the USA) giving almost
unlimited budgets to leading-edge computer science departments for
industrial collaboration (culminating in things like Project Athena).
The theory was that industry would behave as in the previous paragraph.

What is now called Thatcherism in the UK (though it predated her), and
can be described as dogmatic divide-and-conquer monetarism, meant that
many of the traditional links and controls (NOT just university
computing services) were emasculated or destroyed.  In subsequent
years, this affected standards organisations, government quality
control agencies and so on.  And industry often did the same with their
internal equivalents - which caused the FDIV bug, at least one of IBM's
disk fiascos, and many other failures.

Monetarism in turn gave a major boost to marketing over engineering,
which was synergistic with things like the IBM PC, leading to a wide
acceptance that it is better to have leading-edge gimmicks than
products that actually work.  People may forget that a single wayward
program (and EDLIN was one that could do it) would not just crash the
system but could trash the whole filing system, irrecoverably.  But
that was OK.  That was the point at which I refused to move with the
times, and suffered as you might expect.

The whole of this came together, with the result that the traditional
enemy camps (Fortran, Cobol, Pascal, Lisp, compiler-generated checking
and other aspects of software engineering) got shoved into a ghetto and
deprecated as obsolete.  Most computer science work on software
engineering is on methodologies (often completely unrealistic) and on
largely irrelevant tools (i.e. they tackle needs that experienced
programmers don't have).  But EXACTLY the same is true of vendors'
products, because it is the zeitgeist that has changed.

Note that this has now reached even standards organisations, where the
misguided but traditional (i.e. precise and consistent) POSIX was taken
over by the woolly and inconsistent so-called Single Unix Standard.
And it has reached hardware, where many vendors now design their
firmware to reduce the visibility of failures rather than make their
products more robust.

Who comes out of this with credit?  Damn few organisations and people.
Regards, Nick Maclaren.
Ed Beroset <beroset@mindspring.com> writes:

> Here's an example: An embedded communication system receives
> packet-based messages of varying lengths at an average rate of 100
> packets per minute, but asynchronously.  The system also checks its
> timing against the recovered clock from the messages, which it can
> easily keep synchronized within limits as long as it doesn't go too
> long without receiving a packet.  What is the probability that no
> packets will arrive in an interval of five seconds?
>
> I can answer that question easily because I've studied a little
> computer science.  Can you?  If not, how can you properly engineer
> the system?
We can only punt an answer to that if the packets are independent.
They'd be the first ones I've come across if they are.  The traffic
patterns are the required info, plus the allowable latencies.  Neither
Poisson nor Gaussian stats do a good job in most cases, or if they do,
the load is so low you need not bother at all.

--
Paul Repacholi                     1 Crescent Rd.,
+61 (08) 9257-1001                 Kalamunda.
                                   West Australia 6076
comp.os.vms,- The Older, Grumpier Slashdot
Raw, Cooked or Well-done, it's all half baked.
EPIC, The Architecture of the future, always has been, always will be.
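Paul's objection is easy to see numerically.  A Monte-Carlo sketch,
with the caveat that the on/off burst model and all of its parameters
are invented for illustration; only the 100 packets/minute average
comes from the thread.  It compares the chance of a 5-second silence
under independent (Poisson) arrivals against a bursty source with the
same average rate:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define RATE    (100.0 / 60.0)   /* packets per second          */
#define HORIZON 3600.0           /* simulated seconds per trial */
#define TRIALS  1000

/* Exponential variate with mean 1/lambda (inverse-transform method). */
static double exp_rand(double lambda)
{
    double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0); /* u in (0,1) */
    return -log(u) / lambda;
}

/* Fraction of one-hour trials containing a gap >= 5 s when
 * inter-arrival times are exponential (Poisson arrivals).            */
static double poisson_gap_freq(void)
{
    int hits = 0;
    for (int i = 0; i < TRIALS; i++) {
        for (double t = 0.0; t < HORIZON; ) {
            double gap = exp_rand(RATE);
            if (gap >= 5.0) { hits++; break; }
            t += gap;
        }
    }
    return (double)hits / TRIALS;
}

/* Same experiment with a crude on/off source: bursts of 20 packets
 * at 10 pkt/s (about 2 s), then idle with mean 10 s, so each ~12 s
 * cycle still averages 100 packets/minute.                           */
static double bursty_gap_freq(void)
{
    int hits = 0;
    for (int i = 0; i < TRIALS; i++) {
        for (double t = 0.0; t < HORIZON; ) {
            for (int p = 0; p < 20; p++)
                t += exp_rand(10.0);             /* in-burst gaps     */
            double idle = exp_rand(0.1);         /* mean 10 s silence */
            if (idle >= 5.0) { hits++; break; }
            t += idle;
        }
    }
    return (double)hits / TRIALS;
}

int main(void)
{
    srand(1);
    printf("Poisson arrivals: %.3f of hours had a 5 s gap\n",
           poisson_gap_freq());
    printf("Bursty arrivals:  %.3f of hours had a 5 s gap\n",
           bursty_gap_freq());
    return 0;
}

Even the memoryless model shows a 5 s gap in roughly three quarters of
the simulated hours; give the same average rate any burstiness and the
gap becomes a near certainty, which is why the latency budget has to
come from the measured traffic pattern, not a convenient distribution.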


Nick Maclaren wrote:
> Ed Beroset wrote:
>>
>>> I.e. producing ridiculously unrealistic designs and leaving all
>>> the real work to someone else.
>>
>> By implying that design work is not "real work" you have just proved
>> my point.
>
> If you had been following this group for even a short while, you
> would realise how foolish that makes you look.  I don't believe that
> anyone could believe that I would post such an implication, though
> trolls would claim it - though I am NOT claiming you are a troll,
> merely mistaken.
I don't think it makes him look foolish at all; I read the same meaning into your words. The fact that your reply borders on being a personal attack rather than striving to correct any misunderstanding adds weight to Ed's interpretation. You exhibited the same behavior towards me when you wrote
> but are YOU prepared to justify the design of the X Windowing
> System, to take one prime example of what I was referring to?
Note that this was in a thread posted to comp.arch.embedded with the
phrase "RTOS" in the subject line - off-topic as well as being overly
confrontational.

I advise you to examine your posting style.  It is not conducive to a
civil and reasoned technical discussion of the subject at hand.

--
"A little rudeness and disrespect can elevate a meaningless interaction
into a battle of wills and add drama to an otherwise dull day."
   -Calvin discovers Usenet