What is your opinion about this:
unsigned int u1 = -1;
Generally -1 is used to get the maximum value.
Is this guaranteed to work?
How about this one?
unsigned int u2 = -2;
Does it make sense? Maybe a warning here?
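For reference, a minimal sketch of what these initializers are guaranteed to produce: conversion of a negative value to an unsigned type is defined as reduction modulo UINT_MAX+1, so the results below hold on every conforming implementation.

#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned int u1 = -1;   /* -1 mod (UINT_MAX+1) == UINT_MAX     */
    unsigned int u2 = -2;   /* -2 mod (UINT_MAX+1) == UINT_MAX - 1 */
    printf("%u %u\n", u1, u2);
    printf("%u %u\n", UINT_MAX, UINT_MAX - 1);   /* same two values */
    return 0;
}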
Thiago Adams wrote:...
How about this one?
unsigned int u2 = -2;
Does it make sense? Maybe a warning here?
I cannot think of any situations where that would make sense, but I also cannot guarantee that there are not any.
In article <v8inds$2qpqh$1@dont-email.me>,
Thiago Adams <thiago.adams@gmail.com> wrote:
...
So it seems that anything is OK for unsigned but not for signed.
Maybe because all computers give the same answer for unsigned, but this is not true for signed?
I think it is because it wants to (still) support representations other
than 2s complement. I think POSIX requires 2s complement, and I expect the
C standard to (eventually) follow suit.
On 02/08/2024 15:33, Kenny McCormack wrote:
In article <v8inds$2qpqh$1@dont-email.me>,
Thiago Adams <thiago.adams@gmail.com> wrote:
...
So it seems that anything is OK for unsigned but not for signed.
Maybe because all computers give the same answer for unsigned, but this is not true for signed?
I think it is because it wants to (still) support representations other than 2s complement. I think POSIX requires 2s complement, and I expect the C standard to (eventually) follow suit.
C23 assumes 2s complement. However overflow on signed integers will still
be considered UB: too many compilers depend on it.
But even if well-defined (eg. that UB was removed so that overflow just
wraps as it does with unsigned), some here, whose initials may or may not
be DB, consider such overflow Wrong and a bug in a program.
However they don't consider overflow of unsigned values wrong at all,
simply because C allows that behaviour.
But I don't get it. If my calculation gives the wrong results because I've chosen a u32 type instead of u64, that's just as much a bug as using i32 instead of i64.
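A concrete sketch of that kind of width bug; the scenario, names and numbers here are made up for illustration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical size calculation: 5 million KiB, expressed in bytes. */
    uint32_t kib   = 5000000;
    uint32_t wrong = kib * 1024;             /* wraps modulo 2**32: 825032704 */
    uint64_t right = (uint64_t)kib * 1024;   /* 5120000000, as intended */
    printf("%u vs %llu\n", (unsigned)wrong, (unsigned long long)right);
    return 0;
}

Both versions embody the same logical error; only the unsigned one happens to be well defined.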
Bart <bc@freeuk.com> writes:
[...]
C23 assumes 2s complement. However overflow on signed integers will
still be considered UB: too many compilers depend on it.
But even if well-defined (eg. that UB was removed so that overflow
just wraps as it does with unsigned), some here, whose initials may or
may not be DB, consider such overflow Wrong and a bug in a program.
However they don't consider overflow of unsigned values wrong at all,
simply because C allows that behaviour.
But I don't get it. If my calculation gives the wrong results because
I've chosen a u32 type instead of u64, that's just as much a bug as
using i32 instead of i64.
There is a difference in that unsigned "overflow" might give
(consistent) results you didn't want, but signed overflow has undefined behavior.
On 8/2/24 14:48, Keith Thompson wrote:
Bart <bc@freeuk.com> writes:
[...]
C23 assumes 2s complement. However overflow on signed integers will
still be considered UB: too many compilers depend on it.
But even if well-defined (eg. that UB was removed so that overflow
just wraps as it does with unsigned), some here, whose initials may or
may not be DB, consider such overflow Wrong and a bug in a program.
However they don't consider overflow of unsigned values wrong at all,
simply because C allows that behaviour.
But I don't get it. If my calculation gives the wrong results because
I've chosen a u32 type instead of u64, that's just as much a bug as
using i32 instead of i64.
There is a difference in that unsigned "overflow" might give
(consistent) results you didn't want, but signed overflow has undefined
behavior.
When David was expressing the opinion Bart is talking about above, he was talking about whether it was desirable for signed overflow to have undefined behavior, not about the fact that, in C, it does have undefined behavior. He argued that signed overflow is almost always the result of a logical error, and the typical behavior when it does overflow is seldom the desired way of handling those cases. Also, he pointed out that making it undefined behavior enables some convenient optimizations.
For instance, the expression (num*2)/2 always has the same value as 'num' itself, except when the multiplication overflows. If overflow has undefined behavior, the cases where it does overflow can be ignored, permitting (num*2)/2 to be optimized to simply num.
Thiago Adams <thiago.adams@gmail.com> writes:
[...]
Here's a sample with signed int that has an overflow warning.
#include <stdio.h>

int main()
{
    constexpr int a = 2147483647;
    constexpr int b = 1;
    constexpr int c = a+b;
}
https://godbolt.org/z/ca31r8EMK
It's reasonable to warn about a+b, since it has undefined behavior. In fact gcc warns about the expression a+b, and issues a fatal error message about its use in a context requiring a constant expression, since that's a constraint violation.
I think both cases (overflow and wraparound) should have warnings.
You're free to think that, of course, but wraparound behavior is well
defined and unambiguous. I wouldn't mind an *optional* warning, but
plenty of programmers might deliberately write something like
const unsigned int max = -1;
with the reasonable expectation that it will set max to UINT_MAX.
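That expectation can even be checked at compile time; a one-line sketch (static_assert as a bare keyword is C23, older C spells it _Static_assert):

#include <limits.h>

/* -1 converted to an unsigned type is reduced modulo TYPE_MAX+1,
   so for unsigned int the result is exactly UINT_MAX. */
static_assert((unsigned int)-1 == UINT_MAX, "conversion is modulo UINT_MAX+1");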
Comparing with __builtin_add_overflow it also reports wraparound.
#include <stdio.h>

int main()
{
    unsigned int r;
    /* 0 + (-1) == -1 does not fit in an unsigned int,
       so the builtin reports it as overflow */
    if (__builtin_add_overflow(0, -1, &r) != 0)
    {
        printf("fail");
    }
}
Of course __builtin_add_overflow is a non-standard gcc extension. The documentation says:
-- Built-in Function: bool __builtin_add_overflow (TYPE1 a, TYPE2 b,
TYPE3 *res)
...
These built-in functions promote the first two operands into
infinite precision signed type and perform addition on those
promoted operands. The result is then cast to the type the third
pointer argument points to and stored there. If the stored result
is equal to the infinite precision result, the built-in functions
return 'false', otherwise they return 'true'. As the addition is
performed in infinite signed precision, these built-in functions
have fully defined behavior for all argument values.
It returns true if the result is equal to what would be computed in
infinite signed precision, so it treats both signed overflow and
unsigned wraparound as "overflow". It looks like a useful function, and
if you use it with an unsigned target, it's because you *want* to detect wraparound.
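A short sketch exercising both cases, using only the documented behavior quoted above:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    int si;
    unsigned int ui;

    /* INT_MAX + 1 does not fit in an int, so this reports overflow. */
    if (__builtin_add_overflow(INT_MAX, 1, &si))
        printf("signed overflow detected\n");

    /* UINT_MAX + 1 does not fit in an unsigned int, so wraparound is
       reported the same way. */
    if (__builtin_add_overflow(UINT_MAX, 1u, &ui))
        printf("unsigned wraparound detected\n");

    return 0;
}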
Since the mid-70s, all new machines have worked with 2s complement.
There will never be computers with different representations, since 2s complement makes the circuit design easier.
On Sun, 4 Aug 2024 20:29:21 +0200, Bonita Montero wrote:
Since the mid-70s, all new machines have worked with 2s complement.
There will never be computers with different representations, since 2s complement makes the circuit design easier.
This may be hard to believe, but I think in the early days 2s-complement arithmetic was seen as something exotic, like advanced mathematics or something. To some, sign-magnitude seemed more “intuitive”.
As for ones-complement ... I don’t know how to explain that.
On 08/08/2024 19:47, David Brown wrote:
Think about negating a value. For two's complement, that means
inverting each bit and then adding 1. For sign-magnitude, you
invert the sign bit. For ones' complement, you invert each bit.
But with one's complement you have the same circuits for adding
and subtracting as with unsigned values.
On 09/08/2024 20:19, David Brown wrote:
On 09/08/2024 20:08, Bonita Montero wrote:
On 08/08/2024 19:47, David Brown wrote:
Think about negating a value. For two's complement, that means
inverting each bit and then adding 1. For sign-magnitude, you
invert the sign bit. For ones' complement, you invert each bit.
But with one's complement you have the same circuits for adding
and subtracting as with unsigned values.
If you are trying to say that for two's complement, "a + b" and "a -
b" use the same circuits regardless of whether you are doing signed or
unsigned arithmetic, then that is correct. It is one of the reasons
why two's complement became the dominant format.
... and you've got one more value, since there's no distinct negative and
positive zero.
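A small demonstration of that shared-adder property (the values are arbitrary; memcpy reinterprets the bit patterns without invoking undefined behavior):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The same 32-bit patterns, viewed as unsigned and as signed. */
    uint32_t ua = 0xFFFFFFFEu, ub = 5;   /* as int32_t: -2 and 5 */
    uint32_t usum = ua + ub;             /* unsigned add wraps to 3 */

    int32_t sa, sb;
    memcpy(&sa, &ua, sizeof sa);         /* sa == -2 on two's complement */
    memcpy(&sb, &ub, sizeof sb);         /* sb == 5 */
    int32_t ssum = sa + sb;              /* signed add: 3, no overflow */

    uint32_t ssum_bits;
    memcpy(&ssum_bits, &ssum, sizeof ssum_bits);
    printf("%d\n", usum == ssum_bits);   /* prints 1: identical bit pattern */
    return 0;
}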
Thiago Adams <thiago.adams@gmail.com> writes:
What is your opinion about this:
unsigned int u1 = -1;
Generally -1 is used to get the maximum value.
Yes, that's a common usage, though I prefer either -1u or ~0u.
Is this guaranteed to work?
How about this one?
unsigned int u2 = -2;
Does it make sense? Maybe a warning here?
Warnings are almost always good, especially if they can be configured.
For example you can ask gcc to warn about converting -1 to unsigned
while leaving -1u and ~0u alone.
Ick. That choice is exactly backwards IMO. Converting -1 to
an unsigned type always sets all the bits. Converting -1u to
an unsigned type can easily do the wrong thing, depending
on the target type.
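A sketch of the failure mode Tim is describing, assuming a 32-bit int alongside a 64-bit target type:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t a = -1;    /* int -1 converted to uint64_t: all 64 bits set */
    uint64_t b = -1u;   /* -1u is already unsigned int, value UINT_MAX;
                           widening preserves that VALUE, so the upper
                           32 bits stay clear */
    printf("%llx\n%llx\n",
           (unsigned long long)a,   /* ffffffffffffffff */
           (unsigned long long)b);  /* ffffffff */
    return 0;
}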
On 11/08/2024 20:33, Tim Rentsch wrote:
Ick. That choice is exactly backwards IMO. Converting -1 to
an unsigned type always sets all the bits. Converting -1u to
an unsigned type can easily do the wrong thing, depending
on the target type.
"Converting -1 to an unsigned type always sets all the bits"
In any normal twos complement architecture that's the case. But there
are a few oddballs out there where -1 is +1, except that the dedicated
sign bit is set.
Andy
"Converting -1 to an unsigned type always sets all the bits"
In any normal twos complement architecture that's the case. But there
are a few oddballs out there where -1 is +1, except that the dedicated
sign bit is set.
Thiago Adams <thiago.adams@gmail.com> writes:
More samples..
max uint64 + 1 is signed 128 bits in gcc and unsigned long long in clang

#ifdef __clang__
static_assert(TYPE_IS(9223372036854775808, unsigned long long));
#else
static_assert(TYPE_IS(9223372036854775808, __int128));
#endif
https://godbolt.org/z/hveY44ov4
9223372036854775808 is 2**63, or INT64_MAX+1, not UINT64_MAX+1.
On 8/11/24 16:08, Vir Campestris wrote:
...
"Converting -1 to an unsigned type always sets all the bits"
In any normal twos complement architecture that's the case. But there
are a few oddballs out there where -1 is +1, except that the dedicated
sign bit is set.
There may be hardware where that is true, but a conforming
implementation of C targeting that hardware cannot use the hardware's
result. It must fix up the result produced by the hardware to match the result required by the C standard.
But, when that -1 value is converted to an unsigned type, that VALUE will be adjusted modulo the appropriate power of two. Signed-to-unsigned conversion works on the VALUE, not the bit pattern, so it is independent of how negative values are represented.
Yes, a union with a signed and an unsigned member will let type punning show you the representations, but assignment works on values.
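A sketch of that value-versus-representation distinction; on a two's complement machine the two outputs coincide, but on a sign-magnitude or ones' complement machine only the punned view would change:

#include <stdio.h>

union pun {
    int i;
    unsigned int u;
};

int main(void)
{
    /* Conversion is defined by VALUE: -1 reduced modulo UINT_MAX+1,
       on every conforming implementation. */
    unsigned int converted = (unsigned int)-1;   /* always UINT_MAX */

    /* Type punning shows the REPRESENTATION, whatever it happens to be. */
    union pun p;
    p.i = -1;
    printf("%u\n%u\n", converted, p.u);
    return 0;
}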