> David Brown <david.brown@hesbynett.removethisbit.no> writes:
>
>> Ben Bacarisse wrote:
>>> David Brown <david.brown@hesbynett.removethisbit.no> writes:
> <snip>
>>>> Yes, embedded programmers know that "all" processors and C compilers
>>>> use twos-complement wrapping arithmetic (except when they use
>>>> saturating arithmetic...).
>>>
>>> I am a bit stumped by this. My sarcasm detector went off, but I can't
>>> see your point. If your compiler does not do standard (as in the C
>>> standard) arithmetic, then you are not writing in C but something a bit
>>> like it. Are such non-conforming compilers common in the embedded
>>> world?
>>
>> Sometimes saturating arithmetic is used in embedded processors,
>> especially in DSPs (or DSP extensions to conventional processors).
>
> Yes, I knew that was what you were saying but it does not answer my
> question. What do some/all/most C compilers do about this? They have
> lots of options but the least appealing (to me) would be to abandon
> conformance.
I think the point you're missing, and that a lot of people who don't work in
these kinds of environments miss, is this:
In certain types of programming, such as embedded programming, you are
exposed to components of the system (the example here is DSPs) which do not
believe in the C standard. Working with this level of device requires a
certain discipline in what you expect to happen and the way you treat
instructions & registers and that discipline naturally bleeds into all of
your coding, even when it's not strictly required by the C standard.
These rules that David has are very important when dealing with stuff that
is non-C native, such as various bits of low-level hardware. Strong
discipline in how you treat your types is crucial if you don't want that
stuff to bite you.
I basically came to the same way of thinking when I was working on system
interfaces in various OSes in the '80s with C. These OSes were on several
different architectures with different word/byte sizes and none of them were
C platform based. Register and system call argument sizes were important,
and lazy programming ("the compiler will fix it") was a guarantee of disaster
and many hours of debugging.
Bruce
Reply by Keith Thompson ● March 12, 2009
Phil Carmody <thefatphil_demunged@yahoo.co.uk> writes:
> Tim Rentsch <txr@alumnus.caltech.edu> writes:
>> Phil Carmody <thefatphil_demunged@yahoo.co.uk> writes:
>>> Mark Wooding <mdw@distorted.org.uk> writes:
[...]
>>> > UINT_MAX + UINT_MAX would be undefined behaviour rather than
>>> > UINT_MAX - 1.
>>>
>>> Wouldn't it be impy defined?
>>
>> Undefined behavior. If UINT_MAX == INT_MAX, then the addition is
>> done as (int), resulting in overflow, which is the canonical
>> example of undefined behavior.
>
> Thanks for the correction. Until about 10 minutes before I posted,
> I thought it was undefined, but alas I saw too many seemingly
> relevant words in the completely irrelevant 6.3.1.3 (3) and got
> confused. ("Otherwise, the new type is signed and the value cannot
> be represented in it; either the result is implementation-defined
> or an implementation-defined signal is raised.)
>
> Oops.
It's an easy enough mistake to make. For an operation yielding a
signed result, overflow invokes undefined behavior in most cases, but
an implementation-defined result (or an implementation-defined signal)
if the operation happens to be a conversion. It seems like an
arbitrary distinction.
--
Keith Thompson (The_Other_Keith) kst@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Reply by Phil Carmody ● March 12, 2009
Tim Rentsch <txr@alumnus.caltech.edu> writes:
> Phil Carmody <thefatphil_demunged@yahoo.co.uk> writes:
>
>> Mark Wooding <mdw@distorted.org.uk> writes:
>> > Phil Carmody <thefatphil_demunged@yahoo.co.uk> writes:
>> >
>> >> Oh my ${DEITY} - unsigned int is promoted to int everywhere in such
>> >> cases! Did the standards committee really want the 'or equal to'?
>> >> It seems to make unsigned arithmetic as such practically impossible,
>> >> all attempts would end up being just work on ints that would get
>> >> implicitly converted to unsigned at the end. I suspect this might lead
>> >> to some unintuitive expression evaluations, but can't think of any off
>> >> the top of my head.
>> >
>> > UINT_MAX + UINT_MAX would be undefined behaviour rather than UINT_MAX - 1.
>>
>> Wouldn't it be impy defined?
>
> Undefined behavior. If UINT_MAX == INT_MAX, then the addition is
> done as (int), resulting in overflow, which is the canonical
> example of undefined behavior.
Thanks for the correction. Until about 10 minutes before I posted,
I thought it was undefined, but alas I saw too many seemingly
relevant words in the completely irrelevant 6.3.1.3 (3) and got
confused. ("Otherwise, the new type is signed and the value cannot
be represented in it; either the result is implementation-defined
or an implementation-defined signal is raised.")
Oops.
Phil
--
I tried the Vista speech recognition by running the tutorial. I was
amazed, it was awesome, recognised every word I said. Then I said the
wrong word ... and it typed the right one. It was actually just
detecting a sound and printing the expected word! -- pbhj on /.
Reply by Tim Rentsch ● March 12, 2009
Gil Hamilton <gil_hamilton@hotmail.com> writes:
> Tim Rentsch <txr@alumnus.caltech.edu> wrote in
> news:kfnd4cvlv0q.fsf@alumnus.caltech.edu:
>
> > Gil Hamilton <gil_hamilton@hotmail.com> writes:
> >
> >> Tim Rentsch <txr@alumnus.caltech.edu> wrote in
> >> news:kfntz6gg3vu.fsf@alumnus.caltech.edu:
> >>
> >> > First simple example:
> >> >
> >> > x = a + b + c + v;
> >> >
> >> > The first rule for volatile is captured in a single sentence
> >> > in 6.7.3p6:
> >> >
> >> > Therefore any expression referring to such an object [i.e.,
> >> > with volatile-qualified type] shall be evaluated strictly
> >> > according to the rules of the abstract machine, as described
> >> > in 5.1.2.3.
> >> >
> >> > The full expression assigning to x is an expression referring to an
> >> > object with volatile-qualified type. Therefore that expression,
> >> > the /entire/ expression, must be evaluated strictly according to
> >> > the rules of the abstract machine. The sums must be formed in the
> >> > right order; even though addition for (unsigned int) commutes,
> >> > the additions must be done as (((a + b) + c) + v), and not, for
> >> > example, as ((a + (b + c)) + v). Furthermore, the sum (a+b)
> >> > must be performed, even if that value happens to be lying around
> >> > conveniently in a register somewhere. These consequences follow
> >> > because of the requirement that any volatile-referring expression
> >> > be evaluated /strictly/ according to the rules of the abstract
> >> > machine.
> >>
> >> I disagree with this analysis. I think you're ascribing too pandemic
> >> a meaning to the phrase 'any expression referring to such an
> >> object...'.
> >>
> >> As you say, the language syntax requires that the interpretation of
> >> the expression is "(((a + b) + c) + v)". However, decomposing that
> >> further shows that in the outermost expression, 'v' is being added to
> >> the result of another expression '((a + b) + c)'. This latter
> >> (sub-)expression references no volatile object and hence could be
> >> commuted as "(a + (b + c))" or could use an already computed
> >> sub-expression like (a + b). Once the '((a + b) + c)' is evaluated,
> >> the outermost expression (which *does* reference a volatile object)
> >> can then be evaluated 'strictly according to the rules of the
> >> abstract machine'.
> [snip]
> >> But the
> >> term 'full expression' is explicitly defined in the standard (6.8p4):
> >> "A full expression is an expression that is not part of another
> >> expression or declarator. [...]" And it seems to have exactly the
> >> meaning you are arguing for here. But if 'full expression' was
> >> indeed what was intended in 6.7.3p6 as you argue, then wouldn't the
> >> well-defined term have been used there?
> >
> > There are three problems with this argument.
> >
> > One, the most obvious and the most natural reading for "any
> > expression" is... any expression. For it to be anything
> > other than the entire containing expression, there should
> > be some other text somewhere in the Standard that suggests
> > 'any expression' be given a different reading. But there
> > doesn't seem to be any such text.
>
> I'll make one more attempt at explaining why I don't think that's a
> natural reading of the text. [snip long explanation based on
> how the input is parsed.]
>
> I believe this interpretation is completely consistent with the
> standard's text "any expression referring to..."
Seen from this perspective, it's not unreasonable to say the
expression 'x = <additive-expression> + v' is an expression referring
to a volatile object. However, it's just as reasonable to say that
the expression 'x = a + b + c + v' is an expression referring to a
volatile object. The Standard says /any/ expression, and the larger
expression qualifies, so the larger expression is covered under the
"strictly according" clause.
> [snip]
>
>
> > Three, using 'full expression' instead of 'any expression' would
> > clearly do the wrong thing. For example, consider this declaration of
> > a variable length array (again 'v' is volatile):
> >
> > int foo[v];
> >
> > No full expressions in sight. Yet we certainly want the expression
> > 'v' that specifies the array size evaluated according to the rules
> > of volatile-referring expressions.
>
> I concede that would not be the right thing (since "full expression" is
> explicitly defined as *not* being part of a declarator). However, I
> wasn't arguing that they *should have* used the term "full expression";
> my real point was that the breadth implied by "full expression" cannot
> be assumed.
Yes, I understood your position. However, the argument you gave
depended on the term "full expression" being a convenient substitute
term for "any expression". Since "full expression" doesn't fill the
bill, that significantly weakens the argument.
Reply by Tim Rentsch ● March 12, 2009
Mark Wooding <mdw@distorted.org.uk> writes:
> [snip]
>
> However, in n1256, we get the extra text
>
> : -- An object or expression with an integer type whose integer
> : conversion rank is less than *or equal to* the rank of |int| and
> : |unsigned int|.
>
> I'm now rather interested to know where this change came from, and what
> it's for. Silently degrading from signed to unsigned arithmetic, i.e.,
I assume you meant to say from unsigned to signed, which is what
happens if UINT_MAX == INT_MAX.
> from an arithmetic with well-specified and predictable behaviour to an
> arithmetic with implementation-defined[1] aspects, is a pretty serious
> change to make without a very good reason.
>
> [1] Not undefined: thank you, Phil Carmody, for the correction.
Actually undefined. If INT_MAX == UINT_MAX, then doing "unsigned"
additions as (int) can result in overflow; 3.4.3p3:
EXAMPLE An example of undefined behavior is the behavior on integer overflow.
Reply by Tim Rentsch ● March 12, 2009
Phil Carmody <thefatphil_demunged@yahoo.co.uk> writes:
> Mark Wooding <mdw@distorted.org.uk> writes:
> > Phil Carmody <thefatphil_demunged@yahoo.co.uk> writes:
> >
> >> Oh my ${DEITY} - unsigned int is promoted to int everywhere in such
> >> cases! Did the standards committee really want the 'or equal to'?
> >> It seems to make unsigned arithmetic as such practically impossible,
> >> all attempts would end up being just work on ints that would get
> >> implicitly converted to unsigned at the end. I suspect this might lead
> >> to some unintuitive expression evaluations, but can't think of any off
> >> the top of my head.
> >
> > UINT_MAX + UINT_MAX would be undefined behaviour rather than UINT_MAX - 1.
>
> Wouldn't it be impy defined?
Undefined behavior. If UINT_MAX == INT_MAX, then the addition is
done as (int), resulting in overflow, which is the canonical
example of undefined behavior.
Reply by Keith Thompson ● March 8, 2009
raltbos@xs4all.nl (Richard Bos) writes:
> Mark Wooding <mdw@distorted.org.uk> wrote:
[...]
>> However, in n1256, we get the extra text
>>
>> : -- An object or expression with an integer type whose integer
>> : conversion rank is less than *or equal to* the rank of |int| and
>> : |unsigned int|.
>>
>> I'm now rather interested to know where this change came from, and what
>> it's for. Silently degrading from signed to unsigned arithmetic, i.e.,
>> from an arithmetic with well-specified and predictable behaviour to an
>> arithmetic with implementation-defined[1] aspects, is a pretty serious
>> change to make without a very good reason.
>
> I think someone should post to comp.std.c about this. It seems a very
> strange addition, and one wonders whether the Committee spotted all
> implications of it when they introduced it.
[snip]
I posted to comp.std.c yesterday. See the thread "Unintended side
effect of DR 230?".
--
Keith Thompson (The_Other_Keith) kst@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
Reply by Richard Bos ● March 8, 2009
Mark Wooding <mdw@distorted.org.uk> wrote:
> Ben Bacarisse <ben.usenet@bsb.me.uk> writes:
>
> > Mark Wooding <mdw@distorted.org.uk> writes:
> >> UINT_MAX + UINT_MAX would be undefined behaviour rather than UINT_MAX - 1.
> >> This is devastatingly awful! I think we're only safe because there
> >> aren't any such implementations.
> >>
> >> I don't have my copy of C90[1] handy, so I can't tell whether this bug
> >> was actually in C90 as well.
> >
> > I will only say that C90 has the same semantics in these cases.
>
> No, it doesn't. 6.2.1.1:
>
> : A |char|, a |short int|, or an |int| bit field, or their signed or
> : unsigned varieties, or an enumeration type, may be used in an
> : expression wherever an |int| or |unsigned int| may be used. If an
> : |int| can represent all values of the original type, the value is
> : converted to an |int|; otherwise, it is converted to an |unsigned
> : int|. These are called the /integral promotions/.[27] All other
> : arithmetic types are unchanged by the integral promotions.
>
> (Copy-typed by hand.) An |unsigned int| is not a |char|, or a |short
> int|, and is certainly not an |int| bit field; therefore it falls under
> `other arithmetic types' and is unchanged by the integral promotions.
>
> The corresponding text from C99 is 6.3.1.1p2:
>
> : The following may be used in an expression wherever an |int| or
> : |unsigned int| may be used:
> :
> : -- An object or expression with an integer type whose integer
> : conversion rank is less than the rank of |int| and |unsigned int|.
> :
> : -- A bit-field of type |_Bool|, |int|, |signed int|, or |unsigned
> : int|.
> :
> : If an |int| can represent all values of the original type, the value is
> : converted to an |int|; otherwise, it is converted to an |unsigned
> : int|. These are called the integer promotions.[48] All other types are
> : unchanged by the integer promotions.
>
> Which is still as it should be: the conversion rank of |unsigned int| is
> certainly not less than the conversion rank of |unsigned int|.
>
> However, in n1256, we get the extra text
>
> : -- An object or expression with an integer type whose integer
> : conversion rank is less than *or equal to* the rank of |int| and
> : |unsigned int|.
>
> I'm now rather interested to know where this change came from, and what
> it's for. Silently degrading from signed to unsigned arithmetic, i.e.,
> from an arithmetic with well-specified and predictable behaviour to an
> arithmetic with implementation-defined[1] aspects, is a pretty serious
> change to make without a very good reason.
I think someone should post to comp.std.c about this. It seems a very
strange addition, and one wonders whether the Committee spotted all
implications of it when they introduced it.
More specifically, the only integer types with a rank equal to int or
unsigned int are int and unsigned int themselves, AFAICT. Applying the
rule in question to any extended type, or to any short or char types
which happen to have the same width as int, is perfectly reasonable, and
is what the rule originally said.
The only thing the addition does is to pull int and unsigned int into
the rule, with weird and IMO undesirable results when unsigned int has
the same width as int. True, that's unusual, and one should expect
unusual things to happen on such systems; even so, one would still
expect unsignedness to be conserved, which it no longer is. This is a
clear bug - again, IMO.
Ok, never mind the "someone should". I'll take the blame if they _did_
think of this - I'll crosspost it myself. And follow-ups set, as well.
Richard
Reply by ● March 8, 2009
On 6 Mar, 10:25, Mark Wooding <m...@distorted.org.uk> wrote:
> nick_keighley_nos...@hotmail.com writes:
> > because error takes a sequence of pre-processor tokens not
> > a string literal. I was being pedantic.
>
> Is a string literal not a preprocessing token, then? 6.4p1 would
> disagree.
>
> I can be pedantic too.
Fair point. I think it belongs in the same class as
return (0);
where (0) is an expression of type int.
Reply by Keith Thompson ● March 7, 2009
Mark Wooding <mdw@distorted.org.uk> writes:
[...]
> However, in n1256, we get the extra text
>
> : -- An object or expression with an integer type whose integer
> : conversion rank is less than *or equal to* the rank of |int| and
> : |unsigned int|.
>
> I'm now rather interested to know where this change came from, and what
> it's for. Silently degrading from signed to unsigned arithmetic, i.e.,
> from an arithmetic with well-specified and predictable behaviour to an
> arithmetic with implementation-defined[1] aspects, is a pretty serious
> change to make without a very good reason.
>
> [1] Not undefined: thank you, Phil Carmody, for the correction.
The change appeared in TC 2, in response to DR #230
<http://www.open-std.org/jtc1/sc22/wg14/www/docs/dr_230.htm>. It was
intended to deal with enumerated types with a rank equal to that of
int. It appears that the effect on unsigned int (on systems where the
range of int includes all values of type unsigned int) was unintended.
I'll bring this up in comp.std.c.
--
Keith Thompson (The_Other_Keith) kst@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"