EmbeddedRelated.com
Forums

Source code static analysis tool recommendations

Started by John Speth February 2, 2018
On 07/02/18 03:34, Richard Damon wrote:
> On 2/6/18 4:53 PM, David Brown wrote:
>> On 06/02/18 19:25, Hans-Bernhard Bröker wrote:
>>> Am 06.02.2018 um 09:34 schrieb David Brown:
>>>> On 05/02/18 23:18, Hans-Bernhard Bröker wrote:
>>>>> Am 05.02.2018 um 18:48 schrieb Stefan Reuther:
>>>>>
>>>>> Where on earth did you get that idea? UINT32_C does not even _appear_
>>>>> in the C99 standard.
>>>>
>>>> 7.20.4 "Macros for integer constants" (assuming the section number is
>>>> the same for C99 and C11, which is where I looked).
>>>
>>> C99 7.18.4. Ah, there it is. Sorry, my bad for not finding that (or
>>> knowing about it off-hand).
>>
>> There aren't many in this newsgroup who could quote that section -
>> remember, this is the /practical/ group, not comp.lang.c! And you are
>> technically right - "UINT32_C" does not appear anywhere, so you won't
>> find it by searching. (For anyone who is interested enough to still
>> be reading, but not interested enough to look at the standards, the
>> paragraph talks about UINT/N/_C in general rather than UINT32_C in
>> particular.)
>>
>>> But then Stefan's next statement is wrong. There's no way this:
>>>
>>> > expands to nothing on a 32-bit architecture...
>>>
>>> can be correct. At the very least, UINT32_C(1) has to change the
>>> signedness of its constant argument, i.e. it has to expand to either
>>> of the following, or an equivalent:
>>>
>>> (uint_least32_t)1
>>> 1u
>>>
>>> And in both cases, both MISRA and PC-Lint _will_ accept that shift.
>>> Well, assuming the tool was configured correctly, that is.
>>
>> <stdint.h> on my Linux system has:
>>
>> /* Unsigned. */
>> # define UINT8_C(c) c
>> # define UINT16_C(c) c
>> # define UINT32_C(c) c ## U
>> # if __WORDSIZE == 64
>> # define UINT64_C(c) c ## UL
>> # else
>> # define UINT64_C(c) c ## ULL
>> # endif
>>
>> I would say that on a 32-bit system, UINT32_C has to include an
>> "unsigned" indication (like a cast, or adding a U suffix). But since
>> there is no such thing in C as an integer constant of a type smaller
>> than int or unsigned int, I can't decide whether UINT16_C here should
>> have the U suffix or not. I'm open to ideas either way - or we could
>> punt it to comp.lang.c for the opinions (at length) of people who sleep
>> with the C standards under the pillow, rather than just having a
>> shortcut on their desktop (like me).
>
> My thought is that since a uint8_t value, or a uint16_t where int is 32
> bits, will promote to a signed int value, not making UINT8 or UINT16
> constants unsigned would be correct.
Yes, I agree with that logic. Then it will be "an integer constant expression corresponding to the type uint_leastN_t", in that you will get the same behaviour for the expression "u + x" for any x, when u is either:

    uint8_t u = 1;

or:

    #define u UINT8_C(1)

But you could also argue that by adding a U, you turn the constant into type "unsigned int" rather than "int", so that it will not be promoted to a signed int. I don't know if the standards are clear enough here to require one interpretation or the other.

The gcc implementations I have looked at (some with glibc for Linux, some with newlib for embedded ARM) all follow the first version, with no U for sizes less than an "int". An older CodeWarrior with its EWL library seems to take the other approach and add a U. Based purely on "which of these do I trust more to understand the standards", my vote goes with gcc/glibc/newlib - the CodeWarrior C99 support and library has never impressed me with its quality.
On 07/02/18 02:02, Hans-Bernhard Bröker wrote:
> Am 06.02.2018 um 23:02 schrieb David Brown:
>> On 06/02/18 19:05, Stefan Reuther wrote:
>>
>>> MISRA defines a subset of C, and a program must follow the rules of that
>>> subset. Of course that can be its own type system.
>>
>> No, it cannot - it /has/ to use the C type system when you are
>> programming in C.
>
> If only. At least in the 2004 edition, they pulled an entire
> hypothetical integer type system named "underlying type" out of thin air
> and based some of their rules on that, totally mucking up any clear
> picture of the actual C integer semantics anyone might still have had in
> their mind.
And then they renamed it to "essential type" in MISRA 2012 - perhaps because when doing MISRA C++, they discovered that C++ already uses the term "underlying type". And yes, I fully agree that it messes up C code and how people understand it.
> And yes, that utter fantasy of theirs does indeed claim the existence
> of integer constants of a type smaller than int. Such a thing has never
> existed in the entire history of at least roughly standard-conforming C
> compilers, and it flat-out contradicts the actual semantics of C integer
> constants and conversions. Didn't stop them, though.
But it will not affect the meaning of the code to a C compiler, which only deals with C types. As you say, it can affect C checkers - but not a C compiler, at least not when running in standard modes.
>
> What it means is that a MISRA 2004 checker will indeed pretend that the
> type of 1U is uint8_t(!), totally defeating the purpose of the UINT32_C()
> macro if the compiler/libc happens to implement that by suffixing a 'U'.
>
> So in a MISRA 2004 environment, one had better forget those macros even
> exist --- I obviously did :-). You'll really have to write 1UL,
> (uint32_t)1 or equivalent, instead.
>
> If there's method to that madness, it eludes me.
I have seen bugs in a certain massively popular, expensive but nameless (to protect the guilty) 8051 compiler in the way it handled integer promotions. I don't remember the details (it was a colleague that used the tool), but it is conceivable that the compiler was running in some sort of "MISRA mode" with different semantics for types than standard C.
Am 06.02.2018 um 23:02 schrieb David Brown:
> On 06/02/18 19:05, Stefan Reuther wrote:
>> Problem is, we're actually doing C++, where MISRA imposes the same rules
>> for numbers, but would actually require us to write
>>
>> uint32_t MASK = static_cast<uint32_t>(1) << 17;
>>
>> which is pretty much noise for my taste.
>
> That's MISRA for you - rules for rules' sake, no matter how ugly and
> incomprehensible your code ends up. In situations like this, it totally
> misses the point.
Exactly. Now tell that to the people who want cool downward-pointing charts in their PowerPoints to justify the money spent on the checker...
>> I like to use this actual definition from the Windows API as a "how not
>> to" example for cast addicts:
>>
>> #define MAKELONG(a, b) ((LONG) (((WORD) (((DWORD_PTR) (a)) & \
>> 0xffff)) | ((DWORD) ((WORD) (((DWORD_PTR) (b)) & 0xffff))) << 16))
>
> I doubt if anyone will hold up the Windows API as a shining example of
> good design and clear programming!
Actually, the Windows API isn't too bad. The combination of rules that built 16-bit Windows is borderline genius (e.g. how do you do virtual memory on an 8086 that has no hardware support for it?). The Win32 API has async I/O, waiting on multiple semaphores, etc., all of which was bolted onto Linux only years later.

But having a C API (i.e. no inline functions), and evolving it over the years, will lead to monstrosities like the above. I would have written

    inline LONG makeLong(WORD a, WORD b) { return ((LONG)a << 16) | b; }

and left the truncating to the compiler.
>>>>> In MISRA C, the literal 1 is a char not an int?
>>>>
>>>> Yes.
>>>
>>> No. You are mixing MISRA rules with language rules.
>>>
>>> MISRA provides rules on how to write C - it most certainly does not
>>> /change/ C. In C, the literal 1 is a decimal integer constant of type
>>> "int" (section 6.4.1.1).
>>>
>>> As a coding standard, MISRA /can/ have rules to say that you are not
>>> allowed to /use/ a literal here - but it cannot change its meaning in
>>> the language.
>>
>> MISRA defines a subset of C, and a program must follow the rules of that
>> subset. Of course that can be its own type system.
>
> No, it cannot - it /has/ to use the C type system when you are
> programming in C.
It can, as long as its own type system defines a subset. A coding standard cannot define a '+' operator that adds two strings. But it can surely say "although the C standard allows you to add an 8-bit unsigned and a 16-bit signed to get a 32-bit signed, I won't allow that".
>> The most important takeaways I have from MISRA are: think about what you
>> do and why you do it; and: you need a deviation procedure that allows
>> you to break every rule. That deviation procedure is what makes MISRA
>> work, because it rules out laziness as a reason for not doing something.
>> I won't write a deviation request to be allowed to leave out the {}'s,
>> but if my design requires me to have malloc and recursion, I want to
>> have that!
>
> I have nothing against thinking! But I /do/ have something against
> having to write a deviation report in order to avoid a cast monstrosity
> just because the rules say that "1" is not an int.
Exactly.

(Just today I had another discussion because Klocwork spat out new warnings. One "remove this error handling because I think this error does not occur", one "remove this variable initialisation because I think this is a dead store", two "this variable isn't used [but I can see it being used on the next line]". This surely justifies dropping everything to go pet the tool.)

Stefan
On 07/02/18 20:04, Stefan Reuther wrote:
> Am 06.02.2018 um 23:02 schrieb David Brown:
>> On 06/02/18 19:05, Stefan Reuther wrote:
>>> Problem is, we're actually doing C++, where MISRA imposes the same rules
>>> for numbers, but would actually require us to write
>>>
>>> uint32_t MASK = static_cast<uint32_t>(1) << 17;
>>>
>>> which is pretty much noise for my taste.
>>
>> That's MISRA for you - rules for rules sake, no matter how ugly and
>> incomprehensible your code ends up. In situations like this, it totally
>> misses the point.
>
> Exactly. Now tell that the people who want cool downward pointing charts
> on their PowerPoints to justify the money spent on the checker...
>
>>> I like to use this actual definition from Windows APIs as a "how not"
>>> example for cast-addicts:
>>>
>>> #define MAKELONG(a, b) ((LONG) (((WORD) (((DWORD_PTR) (a)) & \
>>> 0xffff)) | ((DWORD) ((WORD) (((DWORD_PTR) (b)) & 0xffff))) << 16))
>>
>> I doubt if anyone will hold up the Windows API as a shining example of
>> good design and clear programming!
>
> Actually, the Windows API isn't too bad. The combination of rules that
> built 16-bit Windows are borderline genius (e.g. how do you do virtual
> memory on an 8086 that has not any hardware support for that?). The
> Win32 API has async I/O, wait for multiple semaphores, etc., all of
> which was bolted-on in Linux years later.
That is not what I am talking about. There have been plenty of useful (and some imaginative) features in the Windows API, no doubt there. (The async I/O functions needed to be in Windows because it was originally pretty much single-tasking and single-threaded. Linux didn't have nearly as much need of it because you could quickly and cheaply create new processes, and because of the "select" call. Async I/O was essential to Windows, and merely "nice to have" on other systems.)

The mess of the Windows API is the 16 different calling conventions, the 23.5 different non-standard types duplicating existing functionality (while still impressively managing to mix up "long" and pointers, leading to it being the only 64-bit system with a 32-bit "long"), function calls with the same name doing significantly different things in different versions of the OS (compare the Win32s, Win9x and WinNT APIs), and beauties such as the "CreateFile" call that could handle every kind of resource except files.
> > But having a C API (i.e. no inline functions),
C has had inline functions for two decades. The lack of inline functions in the Windows C world is purely a matter of laziness on MS's part in not providing C99 support in their tools. It is an absurd and unjustifiable gap.
> and evolving that through > years, will lead to monstrosities like the above. I would have written > > inline LONG makeLong(WORD a, WORD b) { return ((LONG)a << 16) | b; } > > and leave the truncating to the compiler.
However it is written, there should be no pointer types in there! But yes, there are entirely understandable "historical reasons" for at least some of the baggage in the WinAPI. The design started out as a complete mess from an overgrown toy system rushed to market without any thought beyond causing trouble for competitors. The Windows and API design has been a lot better, and a lot more professional, since the days of NT - but it takes a long time to deal with the leftovers. There has never been a possibility of clearing out the bad parts and starting again.
>
>>>>>> In MISRA C, the literal 1 is a char not an int?
>>>>>
>>>>> Yes.
>>>>
>>>> No. You are mixing MISRA rules with language rules.
>>>>
>>>> MISRA provides rules on how to write C - it most certainly does not
>>>> /change/ C. In C, the literal 1 is a decimal integer constant of type
>>>> "int" (section 6.4.1.1).
>>>>
>>>> As a coding standard, MISRA /can/ have rules to say that you are not
>>>> allowed to /use/ a literal here - but it cannot change its meaning in
>>>> the language.
>>>
>>> MISRA defines a subset of C, and a program must follow the rules of that
>>> subset. Of course that can be its own type system.
>>
>> No, it cannot - it /has/ to use the C type system when you are
>> programming in C.
>
> Not as long as its own type system defines a subset. A coding standard
> cannot define a '+' operator that adds two strings. But it can surely
> say "although the C standard allows you to add an 8-bit unsigned and a
> 16-bit signed to get 32-bit signed, I won't allow that".
Yes, that's true. But what it /cannot/ do is say that if you add an 8-bit signed and a 16-bit signed you will get a 16-bit signed (on a 32-bit system). It can't change the rules of C, or the way the types work - all it can do is add extra restrictions.
>
>>> The most important takeaways I have from MISRA is: think about what you
>>> do and why you do it, and: you need a deviation procedure that allows
>>> you to break every rule. That deviation procedure is what makes MISRA
>>> work because it rules out laziness as a reason for not doing something.
>>> I won't write a deviation request to be allowed to leave out the {}'s,
>>> but if my design requires me to have malloc and recursion, I want to
>>> have that!
>>
>> I have nothing against thinking! But I /do/ have something against
>> having to write a deviation report in order to avoid a cast monstrosity
>> just because the rules say that "1" is not an int.
>
> Exactly.
>
> (Just today had another discussion because Klocwork spit out new
> warnings. One "remove this error handling because I think this error
> does not occur", one "remove this variable initialisation because I
> think this is a dead store", two "this variable isn't used [but I see it
> being used the next line]". This surely justifies leaving everything
> behind and pet the tool.)
I have had more luck using gcc, and its increasingly sophisticated static warnings. At least then I know the static error checking and the compiler agree with each other!
Am 07.02.2018 um 09:21 schrieb David Brown:

> I have seen bugs in a certain massively popular, expensive but nameless > (to protect the guilty) 8051 compiler in the way it handled integer > promotions.
I'm quite sure I'm talking about that very same compiler when I say: no, those weren't bugs. They were the amply documented effect of that compiler being run in an explicitly _not_ standard compliant setting.
> I don't remember the details (it was a colleague that used > the tool), but it is conceivable that the compiler was running in some > sort of "MISRA mode" with different semantics for types than standard C.
No. It was running in a "let's not even pretend this micro is actually big enough to efficiently implement C without seriously breaking a large fraction of the rules" mode. Many, maybe all the C compilers for small 8-bitters have such a mode, and often that's their default mode because, frankly, the standard-conforming mode would be nearly useless for most serious work. OTOH, those micros, and the projects they're used in, are generally small enough that you don't really have to go all MISRA on them.
On 08/02/18 00:57, Hans-Bernhard Bröker wrote:
> Am 07.02.2018 um 09:21 schrieb David Brown:
>
>> I have seen bugs in a certain massively popular, expensive but nameless
>> (to protect the guilty) 8051 compiler in the way it handled integer
>> promotions.
>
> I'm quite sure I'm talking about that very same compiler when I say: no,
> those weren't bugs. They were the amply documented effect of that
> compiler being run in an explicitly _not_ standard compliant setting.
That is entirely possible. As I say, it was a colleague that asked for help, wondering if his C code was wrong. The C was correct, the compiler was generating object code that did not match the C. But it could well have been as you say, and the compiler was running in a significantly non-standard mode.
>> I don't remember the details (it was a colleague that used
>> the tool), but it is conceivable that the compiler was running in some
>> sort of "MISRA mode" with different semantics for types than standard C.
>
> No. It was running in a "let's not even pretend this micro is actually
> big enough to efficiently implement C without seriously breaking a large
> fraction of the rules" mode.
>
> Many, maybe all the C compilers for small 8-bitters have such a mode,
> and often that's their default mode because, frankly, the
> standard-conforming mode would be nearly useless for most serious work.
> OTOH, those micros, and the projects they're used in, are generally
> small enough that you don't really have to go all MISRA on them.
Certainly there are some things that standard C requires that would be very painful to handle on these brain-dead devices. A prime example is re-entrant or recursive functions. Without a decent data stack (and, in particular, SP+x addressing modes), it is very inefficient to keep local variables on a stack on things like an 8051. So most compilers for these kinds of chips will put the local variables at fixed addresses in RAM. Functions cannot then be used recursively.

However, dealing with this efficiently does not need a change to the language supported - the compiler can analyse the source code and call paths, see that all or most functions are /not/ used recursively, and generate code taking advantage of that fact. The lazy way to handle it is to say that any recursive functions need to be specially marked (with a pragma, attribute, or whatever). This will be used so rarely in such code that it is not a problem.

In the case I saw here, I think (IIRC) the problem was that the compiler was doing arithmetic on 8-bit types as 8-bit arithmetic - it was not promoting to 16-bit ints. IMHO there is no justification for this non-standard behaviour. It is simple enough for the compiler to give the correct C logical behaviour while optimising to 8-bit generated code in cases where the high byte is not needed.

I have no problem with compilers requiring the use of extensions or extra features to get good code from these devices. Having a "flash" keyword, or distinguishing between short and long pointers - that's fine. Making "double" the same as "float" - less fine, but understandable. But changing the rules for integer promotions and the usual arithmetic conversions? No, that is not appropriate.
Other "helpful" ideas I have seen in compilers, and consider broken, are skipping the zeroing of uninitialised file-scope data (with a tiny note about it hidden deep in the manual), and making "const" work as a kind of "flash" keyword on a Harvard-architecture CPU, so that "const char *" and "char *" become completely incompatible.
Am 07.02.2018 um 20:54 schrieb David Brown:
> On 07/02/18 20:04, Stefan Reuther wrote:
>>>> I like to use this actual definition from Windows APIs as a "how not"
>>>> example for cast-addicts:
>>>>
>>>> #define MAKELONG(a, b) ((LONG) (((WORD) (((DWORD_PTR) (a)) & \
>>>> 0xffff)) | ((DWORD) ((WORD) (((DWORD_PTR) (b)) & 0xffff))) << 16))
>>>
>>> I doubt if anyone will hold up the Windows API as a shining example of
>>> good design and clear programming!
>>
>> Actually, the Windows API isn't too bad. The combination of rules that
>> built 16-bit Windows are borderline genius (e.g. how do you do virtual
>> memory on an 8086 that has not any hardware support for that?). The
>> Win32 API has async I/O, wait for multiple semaphores, etc., all of
>> which was bolted-on in Linux years later.
>
> That is not what I am talking about. There have been plenty of useful
> (and some imaginative) features in Windows API, no doubts there. (The
> async I/O functions needed to be in Windows because it was originally
> pretty much single-tasking and single-threaded. Linux didn't have
> nearly as much need of it because you could quickly and cheaply create
> new processes, and because of the "select" call. Async I/O was
> essential to Windows, and merely "nice to have" on other systems.)
Being able to create processes does not help if you actually need threads that can communicate efficiently, e.g. a GUI thread and a background thread. 'select' does not allow you to wait on semaphores, meaning you have to emulate that with a self-pipe. Now that we have 'eventfd' and native threads - bolted on years later - it starts to make some sense.
> The mess of Windows API is the 16 different calling conventions, the > 23.5 different non-standard types duplicating existing functionality > (while still impressively managing to mix up "long" and pointers, > leading to it being the only 64-bit system with 32-bit "long"), function > calls with the same name doing significantly different things in > different versions of the OS (compare Win32s, Win9x and WinNT APIs), and > beauties such as the "CreateFile" call that could handle every kind of > resource except files.
This sounds vastly exaggerated. I haven't tried Win32s, but so far the only difference between the Win9x and WinNT APIs I have met was an occasional error return in the wrong format, INVALID_HANDLE instead of NULL or something like that. POSIX APIs give you beauties such as 'int' vs 'socklen_t', I/O functions that take a 'size_t' size but return a 'ssize_t' result, error codes such as EINTR, and the whole mess of 'lseek'/'lseek64' and 'off_t'/'off64_t'. Everyone got their dark corners.
>> But having a C API (i.e. no inline functions), > > C has had inline functions for two decades.
Windows has existed for two-and-a-half...
>>> I have nothing against thinking! But I /do/ have something against
>>> having to write a deviation report in order to avoid a cast monstrosity
>>> just because the rules say that "1" is not an int.
>>
>> Exactly.
>>
>> (Just today had another discussion because Klocwork spit out new
>> warnings. One "remove this error handling because I think this error
>> does not occur", one "remove this variable initialisation because I
>> think this is a dead store", two "this variable isn't used [but I see it
>> being used the next line]". This surely justifies leaving everything
>> behind and pet the tool.)
>
> I have had more luck using gcc, and its increasingly more sophisticated
> static warnings. At least then I know the static error checking and the
> compiler agree with each other!
In general, I agree, but unfortunately sometimes even gcc disappoints.

Last two weeks' disappointments: '1 << 31' is not a warning although it has unspecified behaviour. It accepts VLAs in C++ by default even with '-std=c++11 -ansi', and requires '-pedantic' to notice it.

This makes it harder to convince QA people that just using gcc is enough.

Stefan
On 08/02/18 18:58, Stefan Reuther wrote:
> Am 07.02.2018 um 20:54 schrieb David Brown:
<snip, as it's getting way off topic>
> Everyone got their dark corners.
That is a fair summary!
> >>> But having a C API (i.e. no inline functions), >> >> C has had inline functions for two decades. > > Windows has existed for two-and-a-half...
Yes, but the world has moved on since then, and the only reason MS did not gradually supersede the messiest parts of WinAPI with better versions from C99 (uint32_t instead of DWORD, inline instead of macros, etc.) is that they did not make a C99 compiler. Some design choices you have to live with because a clean break is not possible, some you get stuck with because a clean break is too expensive, but some we live with because no one bothered to make the change.
>
>>>> I have nothing against thinking! But I /do/ have something against
>>>> having to write a deviation report in order to avoid a cast monstrosity
>>>> just because the rules say that "1" is not an int.
>>>
>>> Exactly.
>>>
>>> (Just today had another discussion because Klocwork spit out new
>>> warnings. One "remove this error handling because I think this error
>>> does not occur", one "remove this variable initialisation because I
>>> think this is a dead store", two "this variable isn't used [but I see it
>>> being used the next line]". This surely justifies leaving everything
>>> behind and pet the tool.)
>>
>> I have had more luck using gcc, and its increasingly more sophisticated
>> static warnings. At least then I know the static error checking and the
>> compiler agree with each other!
>
> In general, I agree, but unfortunately sometimes even gcc disappoints.
Yes, indeed - gcc and its static error checking is excellent, but very far from perfect.
>
> Last two weeks' disappointments: '1 << 31' is not a warning although it
> has unspecified behaviour.
Left shift 1 << 31 (with 32-bit ints) is undefined behaviour, not unspecified behaviour (there is a big difference) in the C standards. However, gcc treats some additional aspects of signed << as defined behaviour. Unfortunately, the documentation does not give the details, but it is reasonable to suppose that it treats this as two's complement shifting. Being undefined in the C standards does not preclude a compiler from having well-defined behaviour for the expression.

(I'd have preferred a warning, personally. And the weak documentation here is definitely a flaw.)
> It accepts VLAs in C++ by default even with > '-std=c++11 -ansi', and requires '-pedantic' to notice it.
That is in accordance with the gcc documentation. "-ansi" merely selects the standard, and should arguably give an error when used with "-std=c++11", as "-ansi" is equivalent to "-std=c90" or "-std=c++98" depending on the language. It does not enable other diagnostics beyond that - it picks the basic standards and disables any gcc extensions that conflict with them. Thus it tells the compiler that it needs to be able to compile all C90 or C++98 programs, even if you happen to have functions called "asm" or "typeof". It does /not/ tell the compiler to reject programs that are not standard-conforming C90 or C++98 programs.

"-pedantic" tells the compiler to enable all diagnostics required by the standards. Note that this /still/ does not mean rejecting programs which use gcc extensions, or other code that does not have fully defined behaviour in the standards. It means providing diagnostics when the standards explicitly say so, even if gcc would normally give meaning to the code as an extension (such as using VLAs outside of C99/C11).

So no bugs there - just some misunderstandings about the gcc flags. You are not the first person to make such mixups, so arguably the choice of flag names or the documentation could be better.
> > This makes it harder to convince QA people that just using gcc is enough. >
That can be true. But I think using any tool for QA requires a careful understanding of the tool and its options. I would not say that gcc is a good static analysis tool - I would say that gcc used appropriately with a careful choice of flags and warnings is a good static analysis tool. And I would say that about other tools too - /no/ static analysis tool I have looked at works as I would like it without careful choices of flags.
On 08/02/18 06:04, Stefan Reuther wrote:
> Am 06.02.2018 um 23:02 schrieb David Brown:
>> On 06/02/18 19:05, Stefan Reuther wrote:
>>> Problem is, we're actually doing C++, where MISRA imposes the same rules
>>> for numbers, but would actually require us to write
>>>
>>> uint32_t MASK = static_cast<uint32_t>(1) << 17;
>>>
>>> which is pretty much noise for my taste.
>>
>> That's MISRA for you - rules for rules sake, no matter how ugly and
>> incomprehensible your code ends up. In situations like this, it totally
>> misses the point.
>
> Exactly. Now tell that the people who want cool downward pointing charts
> on their PowerPoints to justify the money spent on the checker...
>
>>> I like to use this actual definition from Windows APIs as a "how not"
>>> example for cast-addicts:
>>>
>>> #define MAKELONG(a, b) ((LONG) (((WORD) (((DWORD_PTR) (a)) & \
>>> 0xffff)) | ((DWORD) ((WORD) (((DWORD_PTR) (b)) & 0xffff))) << 16))
>>
>> I doubt if anyone will hold up the Windows API as a shining example of
>> good design and clear programming!
>
> Actually, the Windows API isn't too bad. The combination of rules that
> built 16-bit Windows are borderline genius (e.g. how do you do virtual
> memory on an 8086 that has not any hardware support for that?).
That part is a direct copy of the early Apple APIs.
On 09/02/18 19:17, David Brown wrote:
> On 08/02/18 18:58, Stefan Reuther wrote:
>> Am 07.02.2018 um 20:54 schrieb David Brown:
>>>> But having a C API (i.e. no inline functions),
>>> C has had inline functions for two decades.
>> Windows has existed for two-and-a-half...
>
> Yes, but the world has moved on since then, and the only reason MS did
> not gradually supersede the messiest parts of WinAPI with better
> versions from C99 (uint32_t instead of DWORD, inline instead of macros,
> etc.) is that they did not make a C99 compiler.
Or any compiler, in fact. There were three C compilers for 8086 at the time they bought one. The only two that were any good told Microsoft to bugger off, so MS bought the worst one available and proceeded to abuse it mercilessly, thereby holding back the industry for decades and producing mountains of unreliable rubbish that they foisted on their unwilling victims. Clifford Heath