• Re: What is your opinion about unsigned int u = -2 ?

    From James Kuyper@21:1/5 to Thiago Adams on Wed Jul 31 17:17:15 2024
    Thiago Adams <thiago.adams@gmail.com> writes:

    What is your opinion about this:

    unsigned int u1 = -1;

    Generally -1 is used to get the maximum value.

    Is this guaranteed to work?

    Yes.
    "... the value is converted by repeatedly adding or subtracting
    one more than the maximum value that can be represented in the new type
    until the value is in the range of the new type." (6.3.1.3p2).

    In practice, there's better ways than repeated addition, but that's how
    the conversion is defined.

    How about this one?

    unsigned int u2 = -2;

    That is guaranteed to produce 1 less than the maximum value.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to Thiago Adams on Wed Jul 31 20:29:48 2024
    Thiago Adams <thiago.adams@gmail.com> writes:

    What is your opinion about this:

    unsigned int u1 = -1;

    Generally -1 is used to get the maximum value.

    Yes, that's a common usage, though I prefer either -1u or ~0u.

    Is this guaranteed to work?

    How about this one?

    unsigned int u2 = -2;
    Does it make sense? Maybe a warning here?

    Warnings are almost always good, especially if they can be configured.
    For example you can ask gcc to warn about converting -1 to unsigned
    while leaving -1u and ~0u alone.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Blue-Maned_Hawk@21:1/5 to Thiago Adams on Thu Aug 1 06:34:21 2024
    Thiago Adams wrote:

    What is your opinion about this:

    unsigned int u1 = -1;

    Generally -1 is used to get the maximum value.
    Is this guaranteed to work?

    Whether or not it is, i would prefer to use the UINT_MAX macro to make the
    code clearer.

    How about this one?

    unsigned int u2 = -2;
    Does it make sense? Maybe a warning here?

    I cannot think of any situations where that would make sense, but i also
    cannot guarantee that there are not any.



    --
    Blue-Maned_Hawk│shortens to Hawk│/blu.mɛin.dʰak/│he/him/his/himself/Mr. blue-maned_hawk.srht.site
    I was so amused with it that i did it twenty-three more times.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to bluemanedhawk@invalid.invalid on Thu Aug 1 12:02:24 2024
    Blue-Maned_Hawk <bluemanedhawk@invalid.invalid> writes:

    Thiago Adams wrote:
    ...
    How about this one?

    unsigned int u2 = -2;
    Does it make sense? Maybe a warning here?

    I cannot think of any situations where that would make sense, but i also cannot guarantee that there are not any.

    Some of the multi-byte conversion functions (like mbrtowc) return either (size_t)-1 or (size_t)-2 to indicate different kinds of failure so it's
    not inconceivable that someone might write

    size_t processed_but_incomplete = -2;

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Bart@21:1/5 to Kenny McCormack on Fri Aug 2 15:48:15 2024
    On 02/08/2024 15:33, Kenny McCormack wrote:
    In article <v8inds$2qpqh$1@dont-email.me>,
    Thiago Adams <thiago.adams@gmail.com> wrote:
    ...
    So it seems that anything is ok for unsigned but not for signed.
    Maybe because all computers gave the same answer for unsigned but this is
    not true for signed?

    I think it is because it wants to (still) support representations other
    than 2s complement. I think POSIX requires 2s complement, and I expect the
    C standard to (eventually) follow suit.


    C23 assumes 2s complement. However overflow on signed integers will
    still be considered UB: too many compilers depend on it.

    But even if well-defined (eg. that UB was removed so that overflow just
    wraps as it does with unsigned), some here, whose initials may or may
    not be DB, consider such overflow Wrong and a bug in a program.

    However they don't consider overflow of unsigned values wrong at all,
    simply because C allows that behaviour.

    But I don't get it. If my calculation gives the wrong results because
    I've chosen a u32 type instead of u64, that's just as much a bug as
    using i32 instead of i64.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kenny McCormack@21:1/5 to thiago.adams@gmail.com on Fri Aug 2 14:33:31 2024
    In article <v8inds$2qpqh$1@dont-email.me>,
    Thiago Adams <thiago.adams@gmail.com> wrote:
    ...
    So it seems that anything is ok for unsigned but not for signed.
    Maybe because all computers gave the same answer for unsigned but this is
    not true for signed?

    I think it is because it wants to (still) support representations other
    than 2s complement. I think POSIX requires 2s complement, and I expect the
    C standard to (eventually) follow suit.

    --
    Res ipsa loquitur.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ben Bacarisse@21:1/5 to Bart on Fri Aug 2 16:17:29 2024
    Bart <bc@freeuk.com> writes:

    On 02/08/2024 15:33, Kenny McCormack wrote:
    In article <v8inds$2qpqh$1@dont-email.me>,
    Thiago Adams <thiago.adams@gmail.com> wrote:
    ...
    So it seems that anything is ok for unsigned but not for signed.
    Maybe because all computers gave the same answer for unsigned but this is not true for signed?
    I think it is because it wants to (still) support representations other
    than 2s complement. I think POSIX requires 2s complement, and I expect the C standard to (eventually) follow suit.


    C23 assumes 2s complement. However overflow on signed integers will still
    be considered UB: too many compilers depend on it.

    But even if well-defined (eg. that UB was removed so that overflow just
    wraps as it does with unsigned), some here, whose initials may or may not
    be DB, consider such overflow Wrong and a bug in a program.

    However they don't consider overflow of unsigned values wrong at all,
    simply because C allows that behaviour.

    But I don't get it. If my calculation gives the wrong results because I've chosen a u32 type instead of u64, that's just as much a bug as using i32 instead of i64.

    I don't think *anyone* considers a program that produces the wrong
    result to be any less buggy simply because of the types used. You
    are ascribing to others views that I have never seen anyone express.

    --
    Ben.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Keith Thompson on Fri Aug 2 15:21:09 2024
    On 8/2/24 14:48, Keith Thompson wrote:
    Bart <bc@freeuk.com> writes:
    [...]
    C23 assumes 2s complement. However overflow on signed integers will
    still be considered UB: too many compilers depend on it.

    But even if well-defined (eg. that UB was removed so that overflow
    just wraps as it does with unsigned), some here, whose initials may or
    may not be DB, consider such overflow Wrong and a bug in a program.

    However they don't consider overflow of unsigned values wrong at all,
    simply because C allows that behaviour.

    But I don't get it. If my calculation gives the wrong results because
    I've chosen a u32 type instead of u64, that's just as much a bug as
    using i32 instead of i64.

    There is a difference in that unsigned "overflow" might give
    (consistent) results you didn't want, but signed overflow has undefined behavior.

    When David was expressing the opinion Bart is talking about above, he
    was talking about whether it was desirable for unsigned overflow to have undefined behavior, not about the fact that, in C, it does have
    undefined behavior. He argued that signed overflow almost always is the
    result of a logical error, and the typical behavior when it does
    overflow, is seldom the desired way of handling those cases. Also, he
    pointed out that making it undefined behavior enables some convenient optimizations.

    For instance, the expression (num*2)/2 always has the same value as
    'num' itself, except when the multiplication overflows. If overflow has undefined behavior, the cases where it does overflow can be ignored,
    permitting (num*2)/2 to be optimized to simply num.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to James Kuyper on Sat Aug 3 19:34:54 2024
    On 02/08/2024 21:21, James Kuyper wrote:
    On 8/2/24 14:48, Keith Thompson wrote:
    Bart <bc@freeuk.com> writes:
    [...]
    C23 assumes 2s complement. However overflow on signed integers will
    still be considered UB: too many compilers depend on it.

    But even if well-defined (eg. that UB was removed so that overflow
    just wraps as it does with unsigned), some here, whose initials may or
    may not be DB, consider such overflow Wrong and a bug in a program.

    However they don't consider overflow of unsigned values wrong at all,
    simply because C allows that behaviour.

    But I don't get it. If my calculation gives the wrong results because
    I've chosen a u32 type instead of u64, that's just as much a bug as
    using i32 instead of i64.

    There is a difference in that unsigned "overflow" might give
    (consistent) results you didn't want, but signed overflow has undefined
    behavior.

    When David was expressing the opinion Bart is talking about above, he
    was talking about whether it was desirable for unsigned overflow to have undefined behavior, not about the fact that, in C, it does have
    undefined behavior. He argued that signed overflow almost always is the result of a logical error, and the typical behavior when it does
    overflow, is seldom the desired way of handling those cases. Also, he
    pointed out that making it undefined behavior enables some convenient optimizations.

    For instance, the expression (num*2)/2 always has the same value as
    'num' itself, except when the multiplication overflows. If overflow has undefined behavior, the cases where it does overflow can be ignored, permitting (num*2)/2 to be optimized to simply num.


    Yes, that is all correct.

    IMHO - and I realise it is an opinion not shared by everyone - I think
    it would be best for a language of the level and aims of C to leave all
    integer overflows as undefined behaviour. It is helpful for
    implementations to have debug or sanitizing modes that generate run-time
    checks and run-time errors for overflows, to aid in debugging. (clang
    and gcc both provide such features - no doubt other compilers do too.)

    And you do need additional features to get modulo effects on the
    occasions when these are needed. I think you could come a long way with
    the ckd_ macros from C23 :

    #include <stdckdint.h>
    bool ckd_add(type1 *result, type2 a, type3 b);
    bool ckd_sub(type1 *result, type2 a, type3 b);
    bool ckd_mul(type1 *result, type2 a, type3 b);


    (Of course, C is the way it is, for many reasons - and I am not
    suggesting it be changed!)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Bart on Sat Aug 3 19:17:19 2024
    On 02/08/2024 16:48, Bart wrote:
    On 02/08/2024 15:33, Kenny McCormack wrote:
    In article <v8inds$2qpqh$1@dont-email.me>,
    Thiago Adams  <thiago.adams@gmail.com> wrote:
    ...
    So it seems that anything is ok for unsigned but not for signed.
    Maybe because all computers gave the same answer for unsigned but this is not true for signed?

    I think it is because it wants to (still) support representations other
    than 2s complement.  I think POSIX requires 2s complement, and I
    expect the
    C standard to (eventually) follow suit.


    C23 assumes 2s complement. However overflow on signed integers will
    still be considered UB: too many compilers depend on it.

    But even if well-defined (eg. that UB was removed so that overflow just
    wraps as it does with unsigned), some here, whose initials may or may
    not be DB, consider such overflow Wrong and a bug in a program.

    However they don't consider overflow of unsigned values wrong at all,
    simply because C allows that behaviour.

    But I don't get it. If my calculation gives the wrong results because
    I've chosen a u32 type instead of u64, that's just as much a bug as
    using i32 instead of i64.



    You don't get it because you never pay attention to what I write - you'd
    rather jump to conclusions without reading.

    In almost all cases, wrapping signed integer overflow would give the
    incorrect (in terms of what makes sense for the code) result even if it
    is fully defined by the compiler.

    In almost all cases, wrapping unsigned integer overflow would give the incorrect (in terms of what makes sense for the code) result regardless
    of the fact that C gives a definition for the behaviour.

    There are, of course, exceptions - situations where you really do want
    modulo arithmetic. But in most cases you want integer types that model
    real mathematical integers to the extent possible with efficient
    practical implementations. Using 16-bit ints because the numbers are
    easier to write out: if you have 65535 apples in a pile and you add an apple, you do
    not expect to have 0 apples. That would be almost as silly as having
    32767 apples, adding one more, and having -32768 apples.

    Outside of the occasional rare case, code that relies on overflow
    behaviour of integers - signed or unsigned, defined by the language/implementation or not - is logically incorrect code.

    That is in addition to some cases (signed integer overflow for C) being undefined in the language.

    It's helpful that the language provides a way to get modulo arithmetic
    for the cases that need it. But just because C defines the behaviour of
    unsigned integer overflow does not mean it makes sense in code.
    Defined incorrect results are just as wrong as undefined incorrect results.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Keith Thompson on Sat Aug 3 19:43:14 2024
    On 03/08/2024 04:40, Keith Thompson wrote:
    Thiago Adams <thiago.adams@gmail.com> writes:
    [...]
    Here a sample with signed int that has a overflow warning.


    #include <stdio.h>

    int main()
    {
        constexpr int a = 2147483647;
        constexpr int b = 1;
        constexpr int c = a + b;
    }

    https://godbolt.org/z/ca31r8EMK

    It's reasonable to warn about a+b, since it has undefined behavior.
    In fact gcc warns about the expression a+b, since it has undefined
    behavior, and issues a fatal error message about its use in a context requiring a constant expression, since that's a constraint violation.

    I think both cases (overflow and wraparound) should have warnings.

    You're free to think that, of course, but wraparound behavior is well
    defined and unambiguous. I wouldn't mind an *optional* warning, but
    plenty of programmers might deliberately write something like

    const unsigned int max = -1;

    with the reasonable expectation that it will set max to INT_MAX.

    Comparing with __builtin_add_overflow it also reports wraparound.

    #include <stdio.h>

    int main()
    {
        unsigned int r;
        if (__builtin_add_overflow(0, -1, &r) != 0)
        {
            printf("fail");
        }
    }

    Of course __builtin_add_overflow is a non-standard gcc extension. The documentation says:

    -- Built-in Function: bool __builtin_add_overflow (TYPE1 a, TYPE2 b,
    TYPE3 *res)
    ...
    These built-in functions promote the first two operands into
    infinite precision signed type and perform addition on those
    promoted operands. The result is then cast to the type the third
    pointer argument points to and stored there. If the stored result
    is equal to the infinite precision result, the built-in functions
    return 'false', otherwise they return 'true'. As the addition is
    performed in infinite signed precision, these built-in functions
    have fully defined behavior for all argument values.

    It returns true if the result is equal to what would be computed in
    infinite signed precision, so it treats both signed overflow and
    unsigned wraparound as "overflow". It looks like a useful function, and
    if you use it with an unsigned target, it's because you *want* to detect wraparound.


    C23 provides ckd_add() that is identical to __builtin_add_overflow()
    except for the order of the operands.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Wed Aug 7 23:48:55 2024
    On Sun, 4 Aug 2024 20:29:21 +0200, Bonita Montero wrote:

    Since the mid-70s all new machines have worked with 2s complement.
    There will never be computers with different notations, since 2s complement makes the circuit design easier.

    This may be hard to believe, but I think in the early days 2s-complement arithmetic was seen as something exotic, like advanced mathematics or something. To some, sign-magnitude seemed more “intuitive”.

    As for ones-complement ... I don’t know how to explain that.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Lawrence D'Oliveiro on Thu Aug 8 19:47:14 2024
    On 08/08/2024 01:48, Lawrence D'Oliveiro wrote:
    On Sun, 4 Aug 2024 20:29:21 +0200, Bonita Montero wrote:

    Since the mid-70s all new machines have worked with 2s complement.
    There will never be computers with different notations, since 2s
    complement makes the circuit design easier.

    This may be hard to believe, but I think in the early days 2s-complement arithmetic was seen as something exotic, like advanced mathematics or something. To some, sign-magnitude seemed more “intuitive”.

    As for ones-complement ... I don’t know how to explain that.

    Think about negating a value. For two's complement, that means
    inverting each bit and then adding 1. For sign-magnitude, you invert
    the sign bit. For ones' complement, you invert each bit.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Bonita Montero on Fri Aug 9 20:19:59 2024
    On 09/08/2024 20:08, Bonita Montero wrote:
    Am 08.08.2024 um 19:47 schrieb David Brown:

    Think about negating a value.  For two's complement, that means
    inverting each bit and then adding 1.  For sign-magnitude, you
    invert  the sign bit. For ones' complement, you invert each bit.

    But with one's complement you have the same circuits for adding
    and subtracting as with unsigned values.

    If you are trying to say that for two's complement, "a + b" and "a - b"
    use the same circuits regardless of whether you are doing signed or
    unsigned arithmetic, then that is correct. It is one of the reasons why
    two's complement became the dominant format.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Bonita Montero on Fri Aug 9 21:40:06 2024
    On 09/08/2024 21:18, Bonita Montero wrote:
    Am 09.08.2024 um 20:19 schrieb David Brown:
    On 09/08/2024 20:08, Bonita Montero wrote:
    Am 08.08.2024 um 19:47 schrieb David Brown:

    Think about negating a value.  For two's complement, that means
    inverting each bit and then adding 1.  For sign-magnitude, you
    invert  the sign bit. For ones' complement, you invert each bit.

    But with one's complement you have the same circuits for adding
    and subtracting as with unsigned values.

    If you are trying to say that for two's complement, "a + b" and "a -
    b" use the same circuits regardless of whether you are doing signed or
    unsigned arithmetic, then that is correct.  It is one of the reasons
    why two's complement became the dominant format.


    ... and you've got one more value since there's no negative and
    positive zero.


    It's rarely significant that there's an extra value for signed integers
    - it's just one more out of 4 billion for 32-bit ints. (It's vital that
    you have all possible patterns for unsigned data.)

    It is, I would say, nice that you don't have two different zero representations.

    But sometimes it would be nice if INT_MIN were equal to -INT_MAX, and
    that there was an extra pattern available for an invalid or special
    value. It could provide a very compact representation for a type that
    is either a valid integer or a "No result" indicator, much like a NaN
    for floating point. Of course, it's possible to use two's complement
    and reserve 0x8000'0000 (or whatever number of zeros are needed) for the purpose.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to David Brown on Fri Aug 9 21:16:19 2024
    On 8/9/24 2:19 PM, David Brown wrote:
    On 09/08/2024 20:08, Bonita Montero wrote:
    Am 08.08.2024 um 19:47 schrieb David Brown:

    Think about negating a value.  For two's complement, that means
    inverting each bit and then adding 1.  For sign-magnitude, you
    invert  the sign bit. For ones' complement, you invert each bit.

    But with one's complement you have the same circuits for adding
    and subtracting as with unsigned values.

    If you are trying to say that for two's complement, "a + b" and "a - b"
    use the same circuits regardless of whether you are doing signed or
    unsigned arithmetic, then that is correct.  It is one of the reasons why two's complement became the dominant format.


    No, a two's complement subtractor needs to invert the second operand,
    and inject a carry into the bottom bit. A one's complement subtractor
    just needs to invert the second operand.

    The fact that you want that carry input in the logic anyway, so that
    add-with-carry can build bigger additions, means that this doesn't really
    cost anything.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Ben Bacarisse on Sun Aug 11 12:33:11 2024
    Ben Bacarisse <ben@bsb.me.uk> writes:

    Thiago Adams <thiago.adams@gmail.com> writes:

    What is your opinion about this:

    unsigned int u1 = -1;

    Generally -1 is used to get the maximum value.

    Yes, that's a common usage, though I prefer either -1u or ~0u.

    Is this guaranteed to work?

    How about this one?

    unsigned int u2 = -2;
    Does it make sense? Maybe a warning here?

    Warnings are almost always good, especially if they can be configured.
    For example you can ask gcc to warn about converting -1 to unsigned
    while leaving -1u and ~0u alone.

    Ick. That choice is exactly backwards IMO. Converting -1 to
    an unsigned type always sets all the bits. Converting -1u to
    an unsigned type can easily do the wrong thing, depending
    on the target type.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vir Campestris@21:1/5 to Tim Rentsch on Sun Aug 11 21:08:45 2024
    On 11/08/2024 20:33, Tim Rentsch wrote:
    Ick. That choice is exactly backwards IMO. Converting -1 to
    an unsigned type always sets all the bits. Converting -1u to
    an unsigned type can easily do the wrong thing, depending
    on the target type.

    "Converting -1 to an unsigned type always sets all the bits"

    In any normal twos complement architecture that's the case. But there
    are a few oddballs out there where -1 is +1, except that the dedicated
    sign bit is set.

    Andy

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Damon@21:1/5 to Vir Campestris on Sun Aug 11 16:45:12 2024
    On 8/11/24 4:08 PM, Vir Campestris wrote:
    On 11/08/2024 20:33, Tim Rentsch wrote:
    Ick.  That choice is exactly backwards IMO.  Converting -1 to
    an unsigned type always sets all the bits.  Converting -1u to
    an unsigned type can easily do the wrong thing, depending
    on the target type.

    "Converting -1 to an unsigned type always sets all the bits"

    In any normal twos complement architecture that's the case. But there
    are a few oddballs out there where -1 is +1, except that the dedicated
    sign bit is set.

    Andy

    But, when that -1 value is converted to an unsigned type, that VALUE
    will be adjusted modulo the appropriate power of two.

    signed to unsigned conversion works on VALUE, not bit pattern, so is
    invariant with the representation of the negative values.

    Yes, in a union with a signed and unsigned, the type punning will let
    you see the representation of the types, but assignment works on values.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Vir Campestris on Sun Aug 11 16:53:20 2024
    On 8/11/24 16:08, Vir Campestris wrote:
    ...
    "Converting -1 to an unsigned type always sets all the bits"

    In any normal twos complement architecture that's the case. But there
    are a few oddballs out there where -1 is +1, except that the dedicated
    sign bit is set.

    There may be hardware where that is true, but a conforming
    implementation of C targeting that hardware cannot use the hardware's
    result. It must fix up the result produced by the hardware to match the
    result required by the C standard.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Keith Thompson on Sun Aug 11 13:57:07 2024
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    Thiago Adams <thiago.adams@gmail.com> writes:

    More samples...
    max uint64 + 1 is signed 128 bits in gcc and unsigned long long in clang
    #ifdef __clang__
    static_assert(TYPE_IS(9223372036854775808, unsigned long long ));
    #else
    static_assert(TYPE_IS(9223372036854775808, __int128));
    #endif

    https://godbolt.org/z/hveY44ov4

    9223372036854775808 is 2**63, or INT64_MAX-1, not UINT64_MAX-1.

    Of course you meant INT64_MAX + 1 (and presumably UINT64_MAX + 1).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Vir Campestris on Sun Aug 11 23:07:32 2024
    Vir Campestris <vir.campestris@invalid.invalid> writes:

    On 11/08/2024 20:33, Tim Rentsch wrote:

    Ick. That choice is exactly backwards IMO. Converting -1 to
    an unsigned type always sets all the bits. Converting -1u to
    an unsigned type can easily do the wrong thing, depending
    on the target type.

    "Converting -1 to an unsigned type always sets all the bits"

    In any normal twos complement architecture that's the case. But there
    are a few oddballs out there where -1 is +1, except that the dedicated
    sign bit is set.

    What you say is right if the transformation occurs by means of
    type punning, as for example if we had a union with both a
    signed int member and an unsigned int member. Reading the
    unsigned int member after having assigned to the signed int
    member depends on whether signed int uses ones' complement,
    two's complement, or signed magnitude.

    My comment though is about conversion, not about type punning.
    The rules for conversion (both explicit conversion, when a cast
    operator is used, and implicit conversion, such as when an
    assignment is performed (and lots of other places)) are defined
    in terms of values, not representations. Thus, in this code

    signed char minus_one = -1;
    unsigned u = minus_one;
    unsigned long bigu = minus_one;
    unsigned long long biggeru = minus_one;
    printf( " printing unsigned : %u\n", (unsigned) minus_one );

    in every case we get unsigned values (of several lengths) with
    all value bits set, because of the rules for how values are
    converted between a signed type (such as signed char) and an
    unsigned type.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Vir Campestris@21:1/5 to James Kuyper on Mon Aug 12 11:47:11 2024
    On 11/08/2024 21:53, James Kuyper wrote:
    On 8/11/24 16:08, Vir Campestris wrote:
    ...
    "Converting -1 to an unsigned type always sets all the bits"

    In any normal twos complement architecture that's the case. But there
    are a few oddballs out there where -1 is +1, except that the dedicated
    sign bit is set.

    There may be hardware where that is true, but a conforming
    implementation of C targeting that hardware cannot use the hardware's
    result. It must fix up the result produced by the hardware to match the result required by the C standard.


    On 11/08/2024 21:45, Richard Damon wrote:

    But, when that -1 value is converted to an unsigned type, that VALUE
    will be adjusted modulo the appropriate power of two.

    signed to unsigned conversion works on VALUE, not bit pattern, so is invariant with the representation of the negative values.

    Yes, in a union with a signed and unsigned, the type punning will let
    you see the representation of the types, but assignment works on values.

    Ah, thank you both. It's academic interest only of course!

    Andy

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From dave_thompson_2@comcast.net@21:1/5 to Keith.S.Thompson+u@gmail.com on Sun Aug 25 16:52:30 2024
    On Fri, 02 Aug 2024 19:40:32 -0700, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Thiago Adams <thiago.adams@gmail.com> writes:

    I think both cases (overflow and wraparound) should have warnings.

    You're free to think that, of course, but wraparound behavior is well
    defined and unambiguous. I wouldn't mind an *optional* warning, but
    plenty of programmers might deliberately write something like

    const unsigned int max = -1;

    with the reasonable expectation that it will set max to INT_MAX.

    (cough) UINT_MAX (cough)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)