Reply by Albert van der Horst, October 12, 2005
In article <dhp6mi$7u3$1@gemini.csx.cam.ac.uk>,
Nick Maclaren <nmm1@cus.cam.ac.uk> wrote:
>In article <5pCdnWbp1rsoKaLeRVnyuw@pipex.net>,
>Steve at fivetrees <steve@NOSPAMTAfivetrees.com> wrote:
>>"Nick Maclaren" <nmm1@cus.cam.ac.uk> wrote in message
>>news:dhobjl$e2v$1@gemini.csx.cam.ac.uk...
>>> < extreme_pedant_mode=ON >
>>>
>>> Not so. In Fortran 77, it was possible to allocate all memory statically,
>>> but it isn't generally possible. It can't be done in Fortran 90 or C.
>>
>><even_more_extreme_pedant_mode=ON>
>>
>>It *can* be done in C - by avoiding malloc ;).
>>
>><even_more_extreme_pedant_mode=OFF>
>
>I will skip the mode nesting, as things are getting ridiculous :-)
>
>Problem one: in C, there is no memory management beyond stack scoped
>except by using malloc. Any algorithm that requires more than that
>can't be done if you don't use malloc.
That is why so many serious real-time applications do their own heap
management. If the heap usage is simple, programming it yourself may not
even be hard.
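As a rough illustration (not from the original post), a statically
allocated pool of fixed-size blocks is about the simplest such private
heap; the names and sizes below are made up, and the sketch ignores
locking:

#include <stddef.h>

#define POOL_BLOCKS 64
#define BLOCK_SIZE  128              /* bytes per block, chosen per application */

static union block {
    union block *next;               /* free-list link while the block is unused */
    unsigned char payload[BLOCK_SIZE];
} pool[POOL_BLOCKS];                 /* all storage is static - no malloc anywhere */

static union block *free_list;

void pool_init(void)
{
    size_t i;

    free_list = NULL;
    for (i = 0; i < POOL_BLOCKS; i++) {
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

void *pool_alloc(void)               /* O(1), no system calls, no fragmentation */
{
    union block *b = free_list;

    if (b != NULL)
        free_list = b->next;
    return b;
}

void pool_free(void *p)
{
    union block *b = p;

    if (b != NULL) {
        b->next = free_list;
        free_list = b;
    }
}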
>Problem two: there are many library facilities that use malloc either
>explicitly or implicitly - including almost all I/O - you would have
>to avoid them, too.
Yeah, you have to avoid printf and friends. They mostly belong to the user
interface, though. Most real-time applications are split into a truly
real-time part and a user interface that is not. If you adopt this split,
you can use printf freely in the user interface and the problem all but
disappears.
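A sketch of that split (again invented for illustration, not taken from
any real application): the real-time part writes fixed-size records into
a statically allocated ring buffer and never touches stdio, while the
user-interface part drains the buffer and is the only place printf
appears. A production version would need the platform's memory-ordering
guarantees for the head/tail indices; plain volatile is only enough on
simple single-core targets.

#include <stdio.h>

#define RING_SIZE 256

struct log_record { int code; long value; };

static struct log_record ring[RING_SIZE];       /* static storage, no malloc */
static volatile unsigned head, tail;            /* single producer, single consumer */

/* Called from the real-time part: no I/O, no allocation, never blocks. */
int rt_log(int code, long value)
{
    unsigned next = (head + 1) % RING_SIZE;

    if (next == tail)
        return -1;                              /* buffer full: drop the record */
    ring[head].code = code;
    ring[head].value = value;
    head = next;
    return 0;
}

/* Called from the user-interface part, where printf (and malloc) are harmless. */
void ui_drain(void)
{
    while (tail != head) {
        printf("event %d: %ld\n", ring[tail].code, ring[tail].value);
        tail = (tail + 1) % RING_SIZE;
    }
}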
>
>
>Regards,
>Nick Maclaren.
--
Albert van der Horst, UTRECHT, THE NETHERLANDS
Economic growth -- like all pyramid schemes -- ultimately falters.
albert@spenarnc.xs4all.nl http://home.hccnet.nl/a.w.m.van.der.horst
Reply by David Gay, October 5, 2005
Paul Keinanen <keinanen@sci.fi> writes:

> On 05 Oct 2005 08:42:46 -0700,
> dgay@barnowl.research.intel-research.net wrote:
>
> >"FredK" <fred.nospam@nospam.dec.com> writes:
>
> >> I've yet to find UNIX code that sticks to a convention, and that is
> >> remotely safe from boneheaded mistakes. Probably 80% or more
> >> of the code I've seen doesn't use C prototypes, which would
> >> have at least pointed out they wanted Delete() instead of delete() -
> >> each of which had different parameters (for an example).
>
> >I'm curious where such code comes from - can you say? Most (nearly all?) of
> >the code I see, i.e., mostly open source code, has prototypes.
>
> Are you talking about Linux or Unix in general ?
I'm talking about the open source available in the late 80s onwards time
frame, so mostly Unix in general rather than Linux in particular. And some
amount was more cross-platform than that (e.g., emacs).

--
David Gay
dgay@acm.org
Reply by Nick Maclaren, October 5, 2005
In article <79fyrfbwzw.fsf@barnowl.research.intel-research.net>,
 <dgay@barnowl.research.intel-research.net> wrote:
>
>However, gcc was essentially always available for Solaris (there may have
>been a short delay after the first Solaris release, but I didn't notice
>it at least).
The problem was in the libraries. Solaris didn't sort out even the basics until 1995/6, didn't enable ISO C + POSIX codes until 1998 (Solaris 7) and didn't sort out all of the chaos until Solaris 9. Still, it did better than a certain big blue company :-(
>I don't think X11 is typical of "most codes", though. Isn't it the one
>which gives different signatures to the same function when viewed from
>different modules?
Probably. It has included every other sin, crime and idiocy, so why not
complete the set?

Regards,
Nick Maclaren.
Reply by Paul Keinanen, October 5, 2005
On 05 Oct 2005 08:42:46 -0700,
dgay@barnowl.research.intel-research.net wrote:

>"FredK" <fred.nospam@nospam.dec.com> writes:
>> I've yet to find UNIX code that sticks to a convention, and that is
>> remotely safe from boneheaded mistakes. Probably 80% or more
>> of the code I've seen doesn't use C prototypes, which would
>> have at least pointed out they wanted Delete() instead of delete() -
>> each of which had different parameters (for an example).
>
>I'm curious where such code comes from - can you say? Most (nearly all?) of
>the code I see, i.e., mostly open source code, has prototypes.
Are you talking about Linux or Unix in general?

At least I had to convert some sample client code for a communication
package to K&R so that the primitive compiler intended mainly for
compiling the Unix kernel could compile it. On the systems where the
client was intended to run, the customer did no program development but
only production, so no "modern" compiler would be available.

Paul
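For anyone who never had to do it, the conversion Paul describes usually
looked something like this (function and parameter names invented for the
example): the declaration is guarded by __STDC__ so an ANSI compiler still
sees a prototype, while the definition uses the old K&R form that a
pre-ANSI kernel compiler accepts.

/* header, usable by both old and new compilers */
#ifdef __STDC__
extern int send_request(char *host, int port);
#else
extern int send_request();
#endif

/* definition in the old (K&R) style: parameter types follow the list */
int send_request(host, port)
    char *host;
    int port;
{
    /* ... open the connection and send the request ... */
    return 0;
}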
Reply by David Gay, October 5, 2005
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
> In article <79k6grc3fj.fsf@barnowl.research.intel-research.net>,
>  <dgay@barnowl.research.intel-research.net> wrote:
> >
> >> |>
> >> |> I'm curious where such code comes from - can you say? Most (nearly all?) of
> >> |> the code I see, i.e., mostly open source code, has prototypes.
> >>
> >> Older code, mainly. Most open source code has been using ISO C
> >> (or what it thinks is ISO C) for only 5-10 years - before 1995,
> >> there were a LOT of systems which didn't have an even remotely
> >> conforming compiler.
> >
> >Well yes, but we are 10 years after that. And most of the code I saw in the
> >early 90s (again, mostly open source) used, at least, conditionally
> >compiled prototypes. gcc was available for most platforms in that time
> >frame. And conversion of existing code bases using tools like protoize
> >was fairly straightforward...
>
> Hang on. That is ONLY 10 years - most software has a much longer life
> than that - even source unchanged!
>
> Firstly, gcc wasn't available for most platforms before 1995 - it was
> available for the platforms used by most users - not the same thing,
> at all, at all. And one of the most important platforms (Solaris)
> didn't support ISO C until 1998 and not fully until a couple of years
> back.
However, gcc was essentially always available for Solaris (there may have
been a short delay after the first Solaris release, but I didn't notice
it at least).
> Secondly, conversion of clean codes using protoize was possible, but
> why bother unless you were starting an update cycle?  ISO C89 supports
> the older form of function definition.
However, if you're not updating it, why is anyone looking at it?
> Thirdly, conversion of some codes (mostly, but not all, unclean) wasn't
> possible because protoize couldn't (and probably can't) handle anything
> other than simple code.  I wrote some incredible preprocessor hacks
> to make X11.3 usable (don't ask), which used prototypes and conformed
> to the draft standard, and protoize (really don't ask) took one look
> and collapsed in a heap.
I don't think X11 is typical of "most codes", though. Isn't it the one
which gives different signatures to the same function when viewed from
different modules?

--
David Gay
dgay@acm.org
Reply by Nick Maclaren, October 5, 2005
In article <79k6grc3fj.fsf@barnowl.research.intel-research.net>,
 <dgay@barnowl.research.intel-research.net> wrote:
>
>> |>
>> |> I'm curious where such code comes from - can you say? Most (nearly all?) of
>> |> the code I see, i.e., mostly open source code, has prototypes.
>>
>> Older code, mainly. Most open source code has been using ISO C
>> (or what it thinks is ISO C) for only 5-10 years - before 1995,
>> there were a LOT of systems which didn't have an even remotely
>> conforming compiler.
>
>Well yes, but we are 10 years after that. And most of the code I saw in the
>early 90s (again, mostly open source) used, at least, conditionally
>compiled prototypes. gcc was available for most platforms in that time
>frame. And conversion of existing code bases using tools like protoize
>was fairly straightforward...
Hang on. That is ONLY 10 years - most software has a much longer life
than that - even source unchanged!

Firstly, gcc wasn't available for most platforms before 1995 - it was
available for the platforms used by most users - not the same thing,
at all, at all. And one of the most important platforms (Solaris)
didn't support ISO C until 1998 and not fully until a couple of years
back.

Secondly, conversion of clean codes using protoize was possible, but
why bother unless you were starting an update cycle? ISO C89 supports
the older form of function definition.

Thirdly, conversion of some codes (mostly, but not all, unclean) wasn't
possible because protoize couldn't (and probably can't) handle anything
other than simple code. I wrote some incredible preprocessor hacks
to make X11.3 usable (don't ask), which used prototypes and conformed
to the draft standard, and protoize (really don't ask) took one look
and collapsed in a heap.

Regards,
Nick Maclaren.
Reply by David Gay, October 5, 2005
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:

> In article <79oe64as21.fsf@barnowl.research.intel-research.net>,
> dgay@barnowl.research.intel-research.net writes:
> |> "FredK" <fred.nospam@nospam.dec.com> writes:
> |> >
> |> > I've yet to find UNIX code that sticks to a convention, and that is
> |> > remotely safe from boneheaded mistakes.  Probably 80% or more
> |> > of the code I've seen doesn't use C prototypes, which would
> |> > have at least pointed out they wanted Delete() instead of delete() -
> |> > each of which had different parameters (for an example).
> |>
> |> I'm curious where such code comes from - can you say? Most (nearly all?) of
> |> the code I see, i.e., mostly open source code, has prototypes.
>
> Older code, mainly.  Most open source code has been using ISO C
> (or what it thinks is ISO C) for only 5-10 years - before 1995,
> there were a LOT of systems which didn't have an even remotely
> conforming compiler.
Well yes, but we are 10 years after that. And most of the code I saw in the
early 90s (again, mostly open source) used, at least, conditionally
compiled prototypes. gcc was available for most platforms in that time
frame. And conversion of existing code bases using tools like protoize
was fairly straightforward...

--
David Gay
dgay@acm.org
Reply by Nick Maclaren, October 5, 2005
In article <79oe64as21.fsf@barnowl.research.intel-research.net>,
dgay@barnowl.research.intel-research.net writes:
|> "FredK" <fred.nospam@nospam.dec.com> writes:
|> > 
|> > I've yet to find UNIX code that sticks to a convention, and that is
|> > remotely safe from boneheaded mistakes.  Probably 80% or more
|> > of the code I've seen doesn't use C prototypes, which would
|> > have at least pointed out they wanted Delete() instead of delete() -
|> > each of which had different parameters (for an example).
|> 
|> I'm curious where such code comes from - can you say? Most (nearly all?) of
|> the code I see, i.e., mostly open source code, has prototypes.

Older code, mainly.  Most open source code has been using ISO C
(or what it thinks is ISO C) for only 5-10 years - before 1995,
there were a LOT of systems which didn't have an even remotely
conforming compiler.


Regards,
Nick Maclaren.
Reply by David Gay, October 5, 2005
"FredK" <fred.nospam@nospam.dec.com> writes:
> "Casper H.S. Dik" <Casper.Dik@Sun.COM> wrote in message > > That's not just an issue with case sensitivity but a similar > > situations can arise with other naming schemes. (And this > > one source of confusion is easily recmoved using a coding > > style which specifies how to use case. > > > > I've yet to find UNIX code that sticks to a convention, and that is > remotely safe from boneheaded mistakes. Probably 80% or more > of the code I've seen doesn't use C prototypes, which would > have at least pointed out they wanted Delete() instead of delete() - > each of which had different parameters (for an example).
I'm curious where such code comes from - can you say? Most (nearly all?) of
the code I see, i.e., mostly open source code, has prototypes.

--
David Gay
dgay@acm.org
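To make the Delete()/delete() point concrete (both names are FredK's
hypothetical example, not a real API): with prototypes in scope the
compiler checks every call's argument count and types, so typing the
lower-case name with the wrong arguments becomes a compile-time
diagnostic rather than a run-time surprise.

int Delete(char *name, int flags);   /* removes a named object */
int delete(int handle);              /* releases a handle - different parameters */

int tidy_up(char *name, int handle)
{
    /* Both calls below are checked against the prototypes above.  Writing
       delete(name, 0) by mistake would be rejected at compile time.  With
       only old-style declarations - int Delete(); int delete(); - the same
       mistake compiles silently and fails at run time. */
    int r = Delete(name, 0);

    return r != 0 ? r : delete(handle);
}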
Reply by Drazen Kacar, October 5, 2005
Nick Maclaren wrote:

> Sigh. This is getting tedious, so this will be my last response.
If you wish. I was under the impression that the problem was "finding out
which library and which symbol in that library is used to satisfy another
symbol in another library is not so easy". In present tense.
> Most of these 'solutions' work, in that they do something that MAY
> help.
I wasn't offering a 'solution'. I was simply pointing you to the debugging facility.
> But the precise behaviour of the system and those facilities is rarely,
> if ever, documented
What do you want? Something that can be run through a formal verification process?
> For example, that facility doesn't provide all of the information
> that may be needed to answer the nastier questions I have described
> and, anyway, is data dependent.
So try with "LD_DEBUG=bindings,detail". You'll get a few more numbers.
> So that you have to check it every time you run your program to be
> certain it is doing what you expect.  That is ridiculous.
That's how ELF works. If you change the run-time environment, your ELF objects might change the behavior. Even formal verification can't help with that.
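One concrete (if hackneyed) illustration of that, assuming a Solaris- or
Linux-style ELF run-time linker; the file and function names are made up.
Build the snippet below as a shared object and preload it, and an
unmodified program's calls to getpid() bind to this definition instead of
libc's - nothing in the binary itself has changed.

/* fake_getpid.c -- compile with:  cc -fPIC -shared -o fake_getpid.so fake_getpid.c
   run with:                       LD_PRELOAD=./fake_getpid.so some_program        */
#include <sys/types.h>
#include <unistd.h>

pid_t getpid(void)
{
    return 42;   /* every caller now sees this, courtesy of run-time interposition */
}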
> For example, consider:
>
> franklin-2$LD_DEBUG=bindings ls /dev/null 2>&1 | egrep 'calling|open'
> 13010: calling .init (from sorted order): /usr/lib/libc.so.1
> 13010: calling .init (done): /usr/lib/libc.so.1
> 13010: calling .fini: /usr/lib/libc.so.1
> franklin-2$
>
> versus:
>
> franklin-2$LD_DEBUG=bindings ls -l /dev/null 2>&1 | egrep 'calling|open'
> 12852: calling .init (from sorted order): /usr/lib/libc.so.1
> 12852: calling .init (done): /usr/lib/libc.so.1
> 12852: binding file=/usr/lib/libc.so.1 to file=/usr/lib/libc.so.1: symbol `_open64'
> 12852: binding file=/usr/lib/libc.so.1 to file=/usr/lib/libc.so.1: symbol `_open'
> 12852: calling .fini: /usr/lib/libc.so.1
> franklin-2$
So? I don't see a problem.
> To the best of my knowledge, there is no decently encapsulated tool
> for modern Unices that will tell you the dependencies of an executable
> in enough detail to predict exactly how the binding will work.
Not even on AIX? IIRC, AIX performs bindings at link time, so it should be deterministic. ELF systems perform bindings at run-time, so it depends on the run-time configuration. Some ELF systems have a way to perform bindings at link time, but I'm not going to advise anyone to use that.
> So, if you need to check that an executable is safe when used on
> another system (or even by someone else!), you are stuffed.  You would
> clearly be surprised how often I see binding bugs in vendors' own
> software run on vendors' own systems, caused by a 'transparent'
> upgrade, and the main reason that it is so common is that it can't
> practically be checked.
No, I wouldn't be surprised. I never said I liked ELF. Why do you think I
would?

--
 .-.   .-.    Yes, I am an agent of Satan, but my duties are largely
(_  \ /  _)   ceremonial.
     |
     |        dave@fly.srk.fer.hr