Thought people here might be interested in this image on Jens Gustedt's
blog, which translates section 6.2.5, "Types", of the C23 standard
into a graph of inclusions:
https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/
On 02.04.2025 07:59, Alexis wrote:
Thought people here might be interested in this image on Jens Gustedt's
blog, which translates section 6.2.5, "Types", of the C23 standard
into a graph of inclusions:
https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/
A nice overview. - I have questions on some of these types...
The _Decimal* types - are these just types with other implicit
encodings, say, BCD encoded, or some such?
The nullptr_t seems to be a special beast concerning the "NULL"
entity; what purpose does that type serve, where is it used?
I see the 'bool' but recently saw mention of some '_Bool' type.
The latter was probably chosen in that special syntax to avoid
conflicts during "C" language evolution?
How do regular "C" programmers handle that multitude of boolean
types, ranging from use of 'int', through home-grown "bool" types,
to '_Bool', and now 'bool'? Since it's a very basic type it looks
like you need hard transitions in the evolution of your "C" code?
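For what it's worth, the usual migration path is less dramatic than it
looks (a sketch, an illustration rather than anything from this thread):
'_Bool' was the underscore-spelled keyword C99 added precisely to avoid
breaking code that had defined its own 'bool', and <stdbool.h> has
papered over the difference ever since, so the same spelling carries
from C99 through C23, where bool, true and false finally became keywords.
#include <stdbool.h> /* defines bool/true/false in C99..C17; harmless in C23 */
bool flag = true;    /* compiles unchanged from C99 through C23 */
_Bool raw = 1;       /* the C99 keyword underneath; still valid in C23 */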
On 02/04/2025 06:59, Alexis wrote:
Thought people here might be interested in this image on Jens Gustedt's
blog, which translates section 6.2.5, "Types", of the C23 standard
into a graph of inclusions:
https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/
So much for C being a 'simple' language.
On Wed, 2 Apr 2025 10:57:29 +0100
bart <bc@freeuk.com> wibbled:
On 02/04/2025 06:59, Alexis wrote:
Thought people here might be interested in this image on Jens Gustedt's
blog, which translates section 6.2.5, "Types", of the C23 standard
into a graph of inclusions:
https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/
So much for C being a 'simple' language.
C should be left alone. It does what it needs to do for a systems language.
Almost no one uses it for applications any more, and sophisticated processing
using complex types, for example, is far better done in C++.
IMO, YMMV.
On 02/04/2025 12:14, Muttley@DastardlyHQ.org wrote:
On Wed, 2 Apr 2025 10:57:29 +0100
bart <bc@freeuk.com> wibbled:
On 02/04/2025 06:59, Alexis wrote:
blog, which translates section 6.2.5, "Types", of the C23 standard
into a graph of inclusions:
https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/
So much for C being a 'simple' language.
C should be left alone. It does what it needs to do for a systems language.
Almost no one uses it for applications any more, and sophisticated processing
using complex types, for example, is far better done in C++.
IMO, YMMV.
The C standards committee knows what C is used for. You can be quite
confident that they have heard plenty of people say that "C should be
left alone", as well as other people say "We would like feature X to be
standardised in C".
Changes and new features are not added to the C standards just for fun,
or just to annoy people - they are there because some people want them
and expect that they can write better / faster / clearer / safer /
easier code as a result.
On Wed, 2 Apr 2025 15:35:31 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 02/04/2025 12:14, Muttley@DastardlyHQ.org wrote:
On Wed, 2 Apr 2025 10:57:29 +0100
bart <bc@freeuk.com> wibbled:
On 02/04/2025 06:59, Alexis wrote:
blog, which translates section 6.2.5, "Types", of the C23 standard
into a graph of inclusions:
https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/
So much for C being a 'simple' language.
C should be left alone. It does what it needs to do for a systems language.
Almost no one uses it for applications any more, and sophisticated processing
using complex types, for example, is far better done in C++.
IMO, YMMV.
The C standards committee knows what C is used for. You can be quite
confident that they have heard plenty of people say that "C should be
left alone", as well as other people say "We would like feature X to be
standardised in C".
I suspect the people who are happy with C never have any correspondence with
anyone from the committee, so they get an entirely biased sample. Just like
it's usually only people who had a bad experience that fill in "How did we do"
surveys.
Changes and new features are not added to the C standards just for fun,
or just to annoy people - they are there because some people want them
and expect that they can write better / faster / clearer / safer /
easier code as a result.
And add complexity to compilers.
So what exactly is better / faster / clearer / safer in C23?
On 4/2/2025 11:05 AM, Muttley@DastardlyHQ.org wrote:
So what exactly is better / faster / clearer / safer in C23?
We already had some C23 topics here.
My list:
- #warning (better)
- typeof/auto (better only when strictly necessary)
- digit separator (better, safer)
- binary literal (useful)
- #elifdef (OK, not a problem, not complex)
- __has_include (useful)
- [[nodiscard]] (safer, although I think it could be better defined)
- static_assert with no message parameter (clearer)
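A short sketch exercising several of the items on this list, assuming a
C23 compiler (e.g. gcc 14 with -std=c23); the identifiers are invented
for illustration:
static_assert(sizeof(int) >= 4);   /* C23: message parameter now optional */
enum { MASK = 0b1010'0001 };       /* binary literal plus digit separator */
int main(void)
{
    typeof(MASK) m = MASK;         /* typeof is standard in C23 */
    auto n = 1'000'000;            /* auto deduces int from the initializer */
    return (m + n > 0) ? 0 : 1;
}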
Muttley@dastardlyhq.org wrote:
On Wed, 2 Apr 2025 10:57:29 +0100
bart <bc@freeuk.com> wibbled:
On 02/04/2025 06:59, Alexis wrote:
Thought people here might be interested in this image on Jens Gustedt's
blog, which translates section 6.2.5, "Types", of the C23 standard
into a graph of inclusions:
https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/
So much for C being a 'simple' language.
C should be left alone. It does what it needs to do for a systems language.
Almost no one uses it for applications any more, and sophisticated processing
using complex types, for example, is far better done in C++.
C99 has VMTs (variably modified types). Thanks to VMTs and complex types,
C99 can naturally do numeric computing that previously was done using
Fortran 77. Official C++ has no VMTs. C++ mechanisms look nicer,
but can be less efficient than using VMTs, so C has an advantage for
basic numeric "cores".
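A minimal sketch (an illustration, not from the post) of the variably
modified parameter types meant here: the dimensions are run-time values
and the compiler does the two-dimensional index arithmetic that would
otherwise be hand-written as i*cols + j:
void matmul(int n, int m, int p,
            const double a[n][m], const double b[m][p], double c[n][p])
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < p; j++) {
            double sum = 0.0;
            for (int k = 0; k < m; k++)
                sum += a[i][k] * b[k][j];   /* direct 2-D indexing */
            c[i][j] = sum;
        }
}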
On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
I suspect the people who are happy with C never have any correspondence with
anyone from the committee, so they get an entirely biased sample. Just like
it's usually only people who had a bad experience that fill in "How did we do"
surveys.
And I suspect that you haven't a clue who the C standards committee talk
to - and who those people in turn have asked.
11. nullptr for clarity and safety.
12. Some improvements to variadic macros.
18. "unreachable()" is now standard.
19. printf (and friends) support for things like "%w32i" as the format
specifier for int32_t, so that we no longer need the ugly PRIi32 style
of macro for portable code with fixed-size types.
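To make item 19 concrete, the two styles side by side (a sketch,
assuming a C library whose printf implements the C23 wN length modifiers):
#include <inttypes.h>
#include <stdio.h>
int main(void)
{
    int32_t x = 42;
    printf("x = %" PRIi32 "\n", x);   /* pre-C23 portable style */
    printf("x = %w32i\n", x);         /* C23 bit-width format specifier */
    return 0;
}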
On Wed, 2 Apr 2025 11:12:07 -0300
Thiago Adams <thiago.adams@gmail.com> wibbled:
On 4/2/2025 11:05 AM, Muttley@DastardlyHQ.org wrote:
So what exactly is better / faster / clearer / safer in C23?
We already had some C23 topics here.
My list
- #warning (better)
- typeof/auto (better only when strictly necessary)
Auto as per C++, where it's used as a substitute for unknown or long-winded
templated types, or in range-based loops? C doesn't have those so there's no
reason to have it. If you don't know what type you're dealing with in C then
you'll soon be up poo creek.
- digit separator (better, safer)
Meh.
- binary literal useful
We've had bitfields for years which cover most use cases.
On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
Meh.
What's the problem with it? Here, tell me at a glance the magnitude of
this number:
10000000000
We've had bitfields for years which cover most use cases.
Bitfields and binary literals are completely different things! A binary
literal looks like this (if I got the prefix right):
0b1_1101_1101 // the decimal value 477 or hex value 1DD
0b111011101 // same thing without the separators
On Wed, 2 Apr 2025 16:59:45 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
18. "unreachable()" is now standard.
Googled it - don't see the point.
On Wed, 2 Apr 2025 16:33:46 +0100
bart <bc@freeuk.com> gabbled:
On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
Meh.
What's the problem with it? Here, tell me at a glance the magnitude of
this number:
10000000000
And how often do you hard code values that large into a program? Almost
never I imagine, unless it's some hex value to set flags in a word.
On Wed, 2 Apr 2025 16:59:45 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
I suspect the people who are happy with C never have any correspondence with
anyone from the committee, so they get an entirely biased sample. Just like
it's usually only people who had a bad experience that fill in "How did we do"
surveys.
And I suspect that you haven't a clue who the C standards committee talk
to - and who those people in turn have asked.
By inference you do - so who are they?
11. nullptr for clarity and safety.
Never understood that in C++ never mind C. NULL has worked fine for 50 years.
12. Some improvements to variadic macros.
Might be useful. Would be nice to pass the "..." args directly through to
lower-level functions without having to convert them to a va_list first.
18. "unreachable()" is now standard.
Googled it - don't see the point. More syntactic noise.
19. printf (and friends) support for things like "%w32i" as the format
specifier for int32_t, so that we no longer need the ugly PRIi32 style
of macro for portable code with fixed-size types.
If you do a lot of cross platform code might be useful.
To be honest you can do most of what you posted already - just compile C
with a C++ compiler. Seems a case of catch-up me-too.
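For reference, the va_list dance being alluded to looks like this - the
wrapper has to materialise a va_list and call the v-variant of the
lower-level function (a sketch; log_msg is an invented name):
#include <stdarg.h>
#include <stdio.h>
void log_msg(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);           /* C23 also allows plain va_start(ap) */
    vfprintf(stderr, fmt, ap);   /* forward to the va_list variant */
    va_end(ap);
}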
On 02/04/2025 16:26, Muttley@DastardlyHQ.org wrote:
On Wed, 2 Apr 2025 16:59:45 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
I suspect the people who are happy with C never have any
correspondence with anyone from the committee so they get an
entirely biased sample. Just like it's usually only people who had
a bad experience that fill in "How did we do" surveys.
And I suspect that you haven't a clue who the C standards committee talk
to - and who those people in turn have asked.
By inference you do - so who are they?
11. nullptr for clarity and safety.
Never understood that in C++ never mind C. NULL has worked fine for 50
years.
And it's been a hack for 50 years. Especially when it is just:
#define NULL 0
You also need to include some header (which one?) in order to use it.
I'd hope you wouldn't need to do that for nullptr, but backwards
compatibility may require it (because of any forward-thinking
individuals who have already defined their own 'nullptr').
On 02/04/2025 17:26, Muttley@DastardlyHQ.org wrote:
On Wed, 2 Apr 2025 16:59:45 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
I suspect the people who are happy with C never have any correspondence with
anyone from the committee, so they get an entirely biased sample. Just like
it's usually only people who had a bad experience that fill in "How did we do"
surveys.
And I suspect that you haven't a clue who the C standards committee talk
to - and who those people in turn have asked.
By inference you do - so who are they?
That's an unwarranted inference. I assume that they talk with compiler
developers, library developers, and representatives of at least some
users (typically from large companies or major projects). And those
people will have contact with and feedback from their users and
developers.
On 02/04/2025 17:38, bart wrote:
On 02/04/2025 16:26, Muttley@DastardlyHQ.org wrote:
Never understood that in C++ never mind C. NULL has worked fine for
50 years.
And it's been a hack for 50 years. Especially when it is just:
#define NULL 0
The common definition in C is:
#define NULL ((void*) 0)
Some compilers might have an extension, such as gcc's "__null", that is
used instead to allow better static error checking.
(In C++, it is often defined to 0, because the rules for implicit
conversions from void* are different in C++.)
You also need to include some header (which one?) in order to use it.
<stddef.h>, as pretty much any C programmer will know.
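A concrete illustration (not David's words) of why the ((void*) 0)
definition is the safer one: in a variadic call there is no pointer
context to convert a plain 0, so where int is narrower than void* a
plain-0 NULL passes the wrong number of bytes:
#include <stdio.h>
int main(void)
{
    printf("%p\n", NULL);         /* fine when NULL is ((void*) 0);
                                     undefined if NULL is a bare 0 and
                                     int is narrower than a pointer   */
    printf("%p\n", (void *) 0);   /* always portable                  */
    return 0;
}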
On 02/04/2025 18:29, David Brown wrote:
On 02/04/2025 17:38, bart wrote:
On 02/04/2025 16:26, Muttley@DastardlyHQ.org wrote:
Never understood that in C++ never mind C. NULL has worked fine for
50 years.
And it's been a hack for 50 years. Especially when it is just:
#define NULL 0
The common definition in C is :
#define NULL ((void*) 0)
Some compilers might have an extension, such as gcc's "__null", that is
used instead to allow better static error checking.
(In C++, it is often defined to 0, because the rules for implicit
conversions from void* are different in C++.)
You also need to include some header (which one?) in order to use it.
<stddef.h>, as pretty much any C programmer will know.
This program:
void* p = NULL;
reports that NULL is undefined, but that can be fixed by including any
of stdio.h, stdlib.h or string.h. Those are the first three I tried;
there may be others.
So it is not true that you need to include stddef.h, nor obvious that that
is where NULL is defined, if you are used to having it available indirectly.
Muttley@dastardlyhq.com writes:
On Wed, 2 Apr 2025 16:33:46 +0100
bart <bc@freeuk.com> gabbled:
On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
Meh.
What's the problem with it? Here, tell me at a glance the magnitude of
this number:
10000000000
And how often do you hard code values that large into a program? Almost
never I imagine unless its some hex value to set flags in a word.
Every day, several times a day. 16 hex digit constants are very
common in my work. The digit separator really helps with readability,
although I would have preferred '_' over "'".
[...]
From the next version beyond C23, so far there is :
1. Declarations in "if" and "switch" statements, like those in "for"
loops, help keep local variable scopes small and neat.
2. Ranges in case labels - that speaks for itself (though again I used
it already as a gcc extension).
[...]
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
[...]
Obviously a question of opinion depending on where one comes from.
Verilog uses _ as a digit separator.
On 2025-04-02, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
On 02.04.2025 07:59, Alexis wrote:
Thought people here might be interested in this image on Jens Gustedt's
blog, which translates section 6.2.5, "Types", of the C23 standard
into a graph of inclusions:
https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/
A nice overview. - I have questions on some of these types...
The _Decimal* types - are these just types with other implicit
encodings, say, BCD encoded, or some such?
IEEE 754 defines decimal floating point types now, so that's what
that is about. The spec allows for the significand to be encoded
using Binary Integer Decimal, or to use Densely Packed Decimal.
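A small sketch of the practical difference, assuming a compiler with
decimal floating-point support (gcc on x86-64, for instance); dd/DD is
the _Decimal64 literal suffix:
int main(void)
{
    _Decimal64 a = 0.1DD, b = 0.2DD;   /* decimal significand: exact  */
    double     x = 0.1,   y = 0.2;     /* binary significand: rounded */
    return (a + b == 0.3DD)            /* holds in decimal FP         */
        && (x + y != 0.3);             /* the binary rounding gap     */
}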
bart <bc@freeuk.com> writes:
[...]
So it is not true that you need include stddef.h, nor obvious
that that is where NULL is defined, if you are used to having it
available indirectly.
Indeed, and it is well documented.
For example, in the POSIX description for the string functions
you'll find the following statement:
[CX] Inclusion of the <string.h> header may also make visible
all symbols from <stddef.h>. [Option End]
This is true for a number of POSIX headers, including those you
enumerate above.
[CX] marks a POSIX extension to ISO C.
On Wed, 2 Apr 2025 14:05:17 -0000 (UTC)
Muttley@DastardlyHQ.org wrote:
So what exactly is better / faster / clearer / safer in C23?
Are you banned in Wikipedia?!
On 4/2/25 14:02, Kaz Kylheku wrote:
...
When a thing exists, the job of the standard is to standardize what
exists, and not invent some caricature of it.
In the process of standardization, the committee is supposed to exercise
its judgement, and if that judgement says that there's a better way to
do something than the way for which there is existing practice, they have
an obligation to correct the design of that feature accordingly.
On 02.04.2025 09:32, Kaz Kylheku wrote:
On 2025-04-02, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
On 02.04.2025 07:59, Alexis wrote:
Thought people here might be interested in this image on Jens Gustedt's
blog, which translates section 6.2.5, "Types", of the C23 standard
into a graph of inclusions:
https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/
A nice overview. - I have questions on some of these types...
The _Decimal* types - are these just types with other implicit
encodings, say, BCD encoded, or some such?
IEEE 754 defines decimal floating point types now, so that's what
that is about. The spec allows for the significand to be encoded
using Binary Integer Decimal, or to use Densely Packed Decimal.
Thanks for the hint and keywords. It seems my BCD guess was not far
from what these two IEEE formats actually are.
Does that now mean that every conforming C23 compiler must support
yet more numeric types, including multiple implementations of all
the necessary arithmetic functions and operators?
I wonder why these variants had been introduced.
In many other languages you have abstractions of numeric types, not
every implicit encoding variant revealed at the programming level.
On 02/04/2025 17:20, Scott Lurndal wrote:
Muttley@dastardlyhq.com writes:
On Wed, 2 Apr 2025 16:33:46 +0100
bart <bc@freeuk.com> gabbled:
On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
Meh.
What's the problem with it? Here, tell me at a glance the magnitude of
this number:
10000000000
And how often do you hard code values that large into a program? Almost
never I imagine unless its some hex value to set flags in a word.
Every day, several times a day. 16 hex digit constants are very
common in my work. The digit separator really helps with readability,
although I would have preferred '_' over "'".
Oh, I thought C23 used '_', since Python uses that. I prefer single
quote as that is not shifted on my keyboard. (My language projects just
allow both!)
The fact that it is not widespread is a problem however, so I can't use
either without restricting the compilers that can be used.
For example gcc 14.x on Windows accepts it with -std=c23 only; gcc on
WSL doesn't; tcc doesn't.
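For anyone wanting to test their own toolchain, the construct in
question is just (a sketch; compile with e.g. gcc -std=c23):
long long big  = 10'000'000'000;   /* C23 digit separator */
int       mask = 0b1101'1101;      /* and in a binary literal */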
On 02.04.2025 18:20, Scott Lurndal wrote:
Muttley@dastardlyhq.com writes:
On Wed, 2 Apr 2025 16:33:46 +0100
bart <bc@freeuk.com> gabbled:
On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
Meh.
What's the problem with it? Here, tell me at a glance the magnitude of
this number:
10000000000
And how often do you hard code values that large into a program? Almost
never I imagine unless its some hex value to set flags in a word.
I can't tell generally; it certainly depends on the application
contexts.
And of course for bases lower than 10 the numeric literals grow
in length, so its usefulness is probably most obvious in binary
literals. But why restrict a readability feature to binary only?
It's useful and it doesn't hurt (WRT compatibility).
Every day, several times a day. 16 hex digit constants are very
common in my work. The digit separator really helps with readability,
although I would have preferred '_' over "'".
Obviously a question of opinion depending on where one comes from.
I see a couple options for the group separator. Spaces (as used in
Algol 68) are probably most readable, but maybe a no-go in "C".
Locale-specific separators (dot and comma vs. comma and dot in
fractional numbers) are problematic, and commas carry their own semantics.
The single quote is actually what I found well suited in the past;
it stems (I think) from the convention used in Switzerland. The
underscore you mention didn't occur to me as an option, but it's not
bad either.
On 03/04/2025 05:43, Janis Papanagnou wrote:
In many other languages you have abstractions of numeric types, not
every implicit encoding variant revealed at the programming level.
That's often fine within a program, but sometimes you need to exchange
data with other programs. In particular, C is the standard language for
common libraries - being able to reliably and consistently exchange data
with other languages and other machines is thus very important.
scott@slp53.sl.home (Scott Lurndal) writes:
bart <bc@freeuk.com> writes:
[...]
On 02/04/2025 18:29, David Brown wrote:
On 02/04/2025 17:38, bart wrote:
You also need to include some header (which one?) in order to use it.
<stddef.h>, as pretty much any C programmer will know.
This program:
void* p = NULL;
reports that NULL is undefined, but that can be fixed by including any
of stdio.h, stdlib.h or string.h. Those are the first three I tried;
there may be others.
So it is not true that you need to include stddef.h, nor obvious that that
is where NULL is defined, if you are used to having it available indirectly.
Indeed, and it is well documented.
For example, in the POSIX description for the string functions you'll
find the following statement:
[CX] Inclusion of the <string.h> header may also make visible all
symbols from <stddef.h>. [Option End]
This is true for a number of POSIX headers, including those you enumerate
above.
[CX] marks a POSIX extension to ISO C.
Interesting. The C standard says that <string.h> defines NULL and
size_t, both of which are also defined in <stddef.h>. A number of other
symbols from <stddef.h> are also defined in other headers. A conforming
implementation may not make any other declarations from <stddef.h>
visible as a result of including <string.h>. I wonder why POSIX has
that "extension".
Muttley@DastardlyHQ.org writes:
On Wed, 2 Apr 2025 16:59:45 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
18. "unreachable()" is now standard.
Googled it - don't see the point.
That's a defect in your understanding, not a defect in the standard.
I've found the gcc equivalent useful often in standalone
applications (OS, Hypervisor, standalone utilities, etc).
On 2025-04-02, bart <bc@freeuk.com> wrote:
So it is not true that you need to include stddef.h, nor obvious that that
is where NULL is defined, if you are used to having it available indirectly.
It's documented as the canonical source of NULL.
In C90, now 35 years ago, it was written up like this:
7.1.6 Common definitions <stddef.h>
The following types and macros are defined in the standard header
<stddef.h>. Some are also defined in other headers, as noted in their
respective subclauses.
...
The macros are
NULL
which expands to an implementation-defined null pointer constant: and
offsetof(type, member-designator)
... etc
There is no other easy way to find that out. An implementation could
directly stick #define NULL into every header that is either allowed or
required to reveal that macro, and so from that you would not know which
headers are required to provide it.
Many things are not going to be "obvious" if you don't use documentation.
(In my opinion, things would be better if headers were not allowed to
behave as if they include other headers, or provide identifiers also
given in other headers. Not in ISO C, and not in POSIX. Every identifier
should be declared in exactly one home header, and no other header
should provide that definition. Programs ported from one Unix to another
sometimes break for no other reason than this! On the original platform,
a header provided a certain identifier which it is not required to; on
the new platform that same header doesn't do that.)
On 02/04/2025 17:26, Muttley@DastardlyHQ.org wrote:
On Wed, 2 Apr 2025 16:59:45 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
I suspect the people who are happy with C never have any correspondence with
anyone from the committee, so they get an entirely biased sample. Just like
it's usually only people who had a bad experience that fill in "How did we do"
surveys.
And I suspect that you haven't a clue who the C standards committee talk
to - and who those people in turn have asked.
By inference you do - so who are they?
That's an unwarranted inference. I assume that they talk with compiler
developers [...]
11. nullptr for clarity and safety.
Never understood that in C++ never mind C. NULL has worked fine for 50 years.
If ignorance really is bliss, you must be the happiest person around.
Or you can read one of my other posts pointing out the advantages of
nullptr.
A number of these changes did come over from C++, yes. That does not
mean they are not useful or wanted in C - it means the C world is happy
to let C++ go first, then copy what has been shown to be useful. I
think that is a good strategy.
Some people (including me) will choose to use C++, but others prefer to
(or are required to) use C.
On 2025-04-03, bart <bc@freeuk.com> wrote:
Oh, I thought C23 used '_', since Python uses that. I prefer single
quote as that is not shifted on my keyboard. (My language projects just
allow both!)
I made , (comma) the digit separator in TXR Lisp. Nobody uses _ in the
real world.
On 4/2/2025 8:16 AM, Muttley@DastardlyHQ.org wrote:
On Wed, 2 Apr 2025 14:12:18 -0000 (UTC)
antispam@fricas.org (Waldek Hebisch) wibbled:
Muttley@dastardlyhq.org wrote:
On Wed, 2 Apr 2025 10:57:29 +0100
bart <bc@freeuk.com> wibbled:
On 02/04/2025 06:59, Alexis wrote:
Thought people here might be interested in this image on Jens Gustedt's
blog, which translates section 6.2.5, "Types", of the C23 standard
into a graph of inclusions:
https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/
So much for C being a 'simple' language.
C should be left alone. It does what it needs to do for a systems language.
Almost no one uses it for applications any more, and sophisticated processing
using complex types, for example, is far better done in C++.
C99 has VMTs (variably modified types). Thanks to VMTs and complex types,
C99 can naturally do numeric computing that previously was done using
Fortran 77. Official C++ has no VMTs. C++ mechanisms look nicer,
Officially no, but I've never come across a C++ compiler that didn't support
them given they're all C compilers too.
All C++ compilers are also C compilers?
On Wed, 2 Apr 2025 16:38:03 +0100
bart <bc@freeuk.com> wrote:
On 02/04/2025 16:26, Muttley@DastardlyHQ.org wrote:
On Wed, 2 Apr 2025 16:59:45 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
I suspect the people who are happy with C never have any
correspondence with anyone from the committee so they get an
entirely biased sample. Just like it's usually only people who had
a bad experience that fill in "How did we do"
surveys.
And I suspect that you haven't a clue who the C standards
committee talk to - and who those people in turn have asked.
By inference you do - so who are they?
11. nullptr for clarity and safety.
Never understood that in C++ never mind C. NULL has worked fine for
50 years.
And it's been a hack for 50 years. Especially when it is just:
#define NULL 0
You also need to include some header (which one?) in order to use it.
I'd hope you wouldn't need to do that for nullptr, but backwards
compatibility may require it (because of any forward-thinking
individuals who have already defined their own 'nullptr').
C23 is rather bold in that regard, adding non-underscored keywords as
if there was no yesterday. IMHO, for no good reasons.
On 02.04.2025 16:59, David Brown wrote:
[...]
From the next version beyond C23, so far there is:
1. Declarations in "if" and "switch" statements, like those in "for"
loops, help keep local variable scopes small and neat.
Oh, I thought that would already be supported in some existing "C"
version for the 'if'; I probably confused that with C++.
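The form under discussion looks like this in C++ today, and the
proposal David mentions would give post-C23 C the same shape (a sketch;
do_work and handle_error are invented names):
if (int rc = do_work(); rc != 0) {
    handle_error(rc);   /* rc is scoped to this if statement only */
}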
On Wed, 02 Apr 2025 16:16:27 GMT
scott@slp53.sl.home (Scott Lurndal) wibbled:
Muttley@DastardlyHQ.org writes:
On Wed, 2 Apr 2025 16:59:45 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
18. "unreachable()" is now standard.
Googled it - don't see the point.
That's a defect in your understanding, not a defect in the standard.
I've found the gcc equivelent useful often in standalone
applications (OS, Hypervisor, standalone utilities, etc).
Enlighten me then.
On 03/04/2025 10:45, Muttley@DastardlyHQ.org wrote:
On Wed, 02 Apr 2025 16:16:27 GMT
scott@slp53.sl.home (Scott Lurndal) wibbled:
Muttley@DastardlyHQ.org writes:
On Wed, 2 Apr 2025 16:59:45 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
18. "unreachable()" is now standard.
Googled it - don't see the point.
That's a defect in your understanding, not a defect in the
standard.
I've found the gcc equivalent useful often in standalone
applications (OS, Hypervisor, standalone utilities, etc).
Enlighten me then.
I can't tell you what Scott uses it for, but I have used gcc's
__builtin_unreachable() a fair number of times in my coding. I use
it to inform both the compiler and human readers that a path is
unreachable:
switch (x) {
case 1 : ...
case 2 : ...
case 3 : ...
default : __builtin_unreachable();
}
I can also use it to inform the compiler about data :
if ((x < 0) || (x > 10)) __builtin_unreachable();
// x must be 1 .. 10
Mostly I have it wrapped in macros that let me conveniently have
run-time checking during testing or debugging, and extra efficiency
in the code when I am confident it is bug-free.
Good use of __builtin_unreachable() can result in smaller and faster
code, and possibly improved static error checking. It is related to
the C++23 "assume" attribute (which is also available as a gcc
extension in any C and C++ version).
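A sketch of the macro wrapper described above (ASSUME is an invented
name): a checked assertion in debug builds, a pure optimisation hint in
release builds:
#include <assert.h>
#ifdef NDEBUG
  #define ASSUME(cond) ((cond) ? (void) 0 : __builtin_unreachable())
#else
  #define ASSUME(cond) assert(cond)
#endif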
All C++ compilers are also C compilers?
Name a current one (i.e. not a cross compiler from the 90s) that isn't.
On 02/04/2025 22:24, Michael S wrote:
On Wed, 2 Apr 2025 16:38:03 +0100
bart <bc@freeuk.com> wrote:
On 02/04/2025 16:26, Muttley@DastardlyHQ.org wrote:
On Wed, 2 Apr 2025 16:59:45 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
I suspect the people who are happy with C never have any
correspondence with anyone from the committee so they get an
entirely biased sample. Just like it's usually only people who had
a bad experience that fill in "How did we do"
surveys.
And I suspect that you haven't a clue who the C standards
committee talk to - and who those people in turn have asked.
By inference you do - so who are they?
11. nullptr for clarity and safety.
Never understood that in C++ never mind C. NULL has worked fine for
50 years.
And it's been a hack for 50 years. Especially when it is just:
#define NULL 0
You also need to include some header (which one?) in order to use it.
I'd hope you wouldn't need to do that for nullptr, but backwards
compatibility may require it (because of any forward-thinking
individuals who have already defined their own 'nullptr').
C23 is rather bold in that regard, adding non-underscored keywords as
if there was no yesterday. IMHO, for no good reasons.
It is bold, perhaps, but there are certainly good reasons.
This does mean that some pre-C23 code will be incompatible with C23.
On Wed, 2 Apr 2025 19:23:58 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 02/04/2025 17:26, Muttley@DastardlyHQ.org wrote:
On Wed, 2 Apr 2025 16:59:45 +0200
David Brown <david.brown@hesbynett.no> wibbled:
11. nullptr for clarity and safety.
Never understood that in C++ never mind C. NULL has worked fine for 50 years.
If ignorance really is bliss, you must be the happiest person around.
Or you can read one of my other posts pointing out the advantages of
nullptr.
Compile them into a book and publish it. In the meantime I have better
things to do than trawl back through god knows how many posts to find them.
A number of these changes did come over from C++, yes. That does not
mean they are not useful or wanted in C - it means the C world is happy
to let C++ go first, then copy what has been shown to be useful. I
think that is a good strategy.
Some people (including me) will choose to use C++, but others prefer to
(or are required to) use C.
I can't imagine many situations outside of maybe specialist hardware scenarios
where the C compiler isn't also a C++ compiler.
On 03/04/2025 10:49, Muttley@DastardlyHQ.org wrote:
If ignorance really is bliss, you must be the happiest person around.
Or you can read one of my other posts pointing out the advantages of
nullptr.
Compile them into a book and publish it. In the meantime I have better things
to do than trawl back through god knows how many posts to find them.
It was in this thread!
I can't imagine many situations outside of maybe specialist hardware
scenarios where the C compiler isn't also a C++ compiler.
It is fair to say that the most used C compilers - gcc and clang - are
usually combined with C++ compilers. (The other big C++ compiler, MSVC,
doesn't have decent modern C support.) But that does /not/ mean that
people who want a bit more than older C standards support will want to
compile their C code with a C++ compiler! There are countless reasons
why that is an unrealistic idea.
On 03/04/2025 10:51, Muttley@DastardlyHQ.org wrote:
All C++ compilers are also C compilers?
Name a current one (ie not a cross compiler from the 90s) that isn't.
Most compilers handling both C and C++ sure have a common code base, but
why does it matter? C and C++ are two different languages with a
different standard and quite a few different behaviors and even accepted
syntax. C has not been a "subset" of C++ for a very long time, although
this is something still said on a regular basis. It was maybe true in
the early days of C++ but hasn't been in ages.
You're probably referring to the C++ front-end of GCC and Clang (which
strives to support the same things as GCC to be a drop-in replacement),
which supports compiler-specific extensions for both C and C++, some of
them borrowing from one another (like C getting some features that were
only available in C++, and conversely). But that's not standard C or
C++, so that point is kind of moot. If you want to write
standard-compliant code only, most of what's been added in C since C99
is not available in C++. For instance, if I'm not mistaken, designated
initializers, which are very handy and have been available in C since
C99 (25 years ago) have appeared only in C++20, about 20 years later.
"Interestingly", committees seem to differ largely on the topic: the C++
committee has been promoting making C a strict subset of C++ for years,
while the C committee is a lot less enthused by that idea. C does
occasionally and slowly borrow some features from C++ when they do bring
value without breaking C, but that's pretty much the extent of it. As of
2025, making C a strict standardized subset of C++ would benefit neither.
David Brown <david.brown@hesbynett.no> writes:
On 03/04/2025 02:41, Keith Thompson wrote:
[...]
scott@slp53.sl.home (Scott Lurndal) writes:
For example, in the POSIX description for the string functions you'll
find the following statement:
[CX] Inclusion of the <string.h> header may also make visible all
symbols from <stddef.h>. [Option End]
This is true for a number of POSIX headers, including those you enumerate
above.
[CX] marks a POSIX extension to ISO C.
Interesting. The C standard says that <string.h> defines NULL and
size_t, both of which are also defined in <stddef.h>. A number of other
symbols from <stddef.h> are also defined in other headers. A conforming
implementation may not make any other declarations from <stddef.h>
visible as a result of including <string.h>. I wonder why POSIX has
that "extension".
The documentation quoted by Scott says "may". To me, it seems pretty
obvious why they have this. It means that their definition of
<string.h> can start with
#include <stddef.h>
rather than go through the merry dance of conditional compilation,
defining and undefining these macros, "__null_is_defined" macros, and
the rest of it. This all gets particularly messy when some standard
headers (generally those that are part of "freestanding" C - including
<stddef.h>) often come with the compiler, while other parts (like
<string.h>) generally come with the library. On some systems, these
two parts are from entirely separate groups, and may use different
conventions for their various "__is_defined" tag macros.
Yes, implementers *may* be so lazy that they don't bother to define
their standard headers in the manner required by the C standard.
Building an implementation from separate parts can make things more
difficult. That's no excuse for getting things wrong.
Maybe you could have an implementation that conforms to POSIX without
attempting to conform to ISO C, but POSIX is based on ISO C.
On 03.04.2025 01:32, Scott Lurndal wrote:
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
[...]
Obviously a question of opinion depending on where one comes from.
Verilog uses _ as a digit separator.
And Kornshell's 'printf' uses ',' for output formatting as in
$ printf "%,d\n" 1234567
1,234,567
Maybe it should be configurable?
On 03/04/2025 09:59, David Brown wrote:
It is bold, perhaps, but there are certainly good reasons.
Perhaps go bolder and drop the need to explicitly include those 30 or
so standard headers. It's ridiculous having to micro-manage the
availability of fundamental language features ('uint8_t' for example!)
in every module.
When I suggested this in the past, people were up in arms about the
overheads of having to compile all those headers (in 2017, they were
3-5K lines in all for gcc on Windows/Linux).
Yet the same people think nothing of using libraries like SDL2 (50K
lines of headers) or GTK2 (350K lines).
This does mean that some pre-C23 code will be incompatible with
C23.
This was also my view in the past, to draw a line under 'old' C and
to start using 'new' C.
I understand C23 mode will be enabled by a compiler option (-std=c23);
the same method could have been used to enable all std headers, and for
that to be the default.
Hello World then becomes this one-liner:
int main() {puts("Hello, World!");}
On 03/04/2025 02:41, Keith Thompson wrote:
scott@slp53.sl.home (Scott Lurndal) writes:
bart <bc@freeuk.com> writes:
[...]
On 02/04/2025 18:29, David Brown wrote:
On 02/04/2025 17:38, bart wrote:
You also need to include some header (which one?) in order to use it.
<stddef.h>, as pretty much any C programmer will know.
This program:
void* p = NULL;
reports that NULL is undefined, but that can be fixed by including any
of stdio.h, stdlib.h or string.h. Those are the first three I tried;
there may be others.
So it is not true that you need to include stddef.h, nor obvious that that
is where NULL is defined, if you are used to having it available indirectly.
Indeed, and it is well documented.
For example, in the POSIX description for the string functions you'll
find the following statement:
[CX] Inclusion of the <string.h> header may also make visible all
symbols from <stddef.h>. [Option End]
This is true for a number of POSIX headers, include those you enumerate
above.
[CX] marks a POSIX extension to ISO C.
Interesting. The C standard says that <string.h> defines NULL and
size_t, both of which are also defined in <stddef.h>. A number of other
symbols from <stddef.h> are also defined in other headers. A conforming
implementation may not make any other declarations from <stddef.h>
visible as a result of including <string.h>. I wonder why POSIX has
that "extension".
The documentation quoted by Scott says "may". To me, it seems pretty
obvious why they have this. It means that their definition of
<string.h> can start with
#include <stddef.h>
rather than go through the merry dance of conditional compilation,
defining and undefining these macros, "__null_is_defined" macros, and
the rest of it.
scott@slp53.sl.home (Scott Lurndal) writes:
bart <bc@freeuk.com> writes:
[...]
So it is not true that you need to include stddef.h, nor obvious
that that is where NULL is defined, if you are used to having it
available indirectly.
Indeed, and it is well documented.
For example, in the POSIX description for the string functions
you'll find the following statement:
[CX] Inclusion of the <string.h> header may also make visible
all symbols from <stddef.h>. [Option End]
This is true for a number of POSIX headers, including those you
enumerate above.
[CX] marks a POSIX extension to ISO C.
How strange. I don't know why anyone would ever want either to
rely on or to take advantage of this property.
On Thu, 3 Apr 2025 11:41:31 +0200
David Brown <david.brown@hesbynett.no> wrote:
On 03/04/2025 10:45, Muttley@DastardlyHQ.org wrote:
On Wed, 02 Apr 2025 16:16:27 GMT
scott@slp53.sl.home (Scott Lurndal) wibbled:
Muttley@DastardlyHQ.org writes:
On Wed, 2 Apr 2025 16:59:45 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
18. "unreachable()" is now standard.
Googled it - don't see the point.
That's a defect in your understanding, not a defect in the
standard.
I've found the gcc equivelent useful often in standalone
applications (OS, Hypervisor, standalone utilities, etc).
Enlighten me then.
I can't tell you what Scott uses it for, but I have used gcc's
__builtin_unreachable() a fair number of times in my coding. I use
it to inform both the compiler and human readers that a path is
unreachable:
switch (x) {
case 1 : ...
case 2 : ...
case 3 : ...
default : __builtin_unreachable();
}
I can also use it to inform the compiler about data :
if ((x < 0) || (x > 10)) __builtin_unreachable();
// x must be 1 .. 10
Mostly I have it wrapped in macros that let me conveniently have
run-time checking during testing or debugging, and extra efficiency
in the code when I am confident it is bug-free.
Good use of __builtin_unreachable() can result in smaller and faster
code, and possibly improved static error checking. It is related to
the C++23 "assume" attribute (which is also available as a gcc
extension in any C and C++ version).
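A minimal sketch of that macro wrapping, assuming gcc or clang (the name
ASSUME is invented here, not anything standard):

#include <assert.h>
#ifdef NDEBUG
/* release build: pure optimiser hint, no run-time check */
#define ASSUME(cond) ((cond) ? (void)0 : __builtin_unreachable())
#else
/* debug build: the condition is actually checked */
#define ASSUME(cond) assert(cond)
#endif
/* usage: ASSUME(x >= 1 && x <= 10); */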
In theory, compilers can use unreachable() to generate better code.
In practice, every single time I looked at compiler output, it made no difference.
On Thu, 3 Apr 2025 11:41:31 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 03/04/2025 10:45, Muttley@DastardlyHQ.org wrote:
On Wed, 02 Apr 2025 16:16:27 GMT
scott@slp53.sl.home (Scott Lurndal) wibbled:
Muttley@DastardlyHQ.org writes:
On Wed, 2 Apr 2025 16:59:45 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
18. "unreachable()" is now standard.
Googled it - don't see the point.
That's a defect in your understanding, not a defect in the standard.
I've found the gcc equivalent useful often in standalone
applications (OS, Hypervisor, standalone utilities, etc).
Enlighten me then.
I can't tell you what Scott uses it for, but I have used gcc's
__builtin_unreachable() a fair number of times in my coding. I use it
to inform both the compiler and human readers that a path is unreachable:
What for? The compiler doesn't care and a human reader would probably
prefer a meaningful comment if it's not obvious. If you're worried about
the code accidentally going there use an assert.
switch (x) {
case 1 : ...
case 2 : ...
case 3 : ...
default : __builtin_unreachable();
}
I can also use it to inform the compiler about data :
if ((x < 0) || (x > 10)) __builtin_unreachable();
// x must be 1 .. 10
And that'll do what? You want the compiler to compile in a hidden value check?
Good use of __builtin_unreachable() can result in smaller and faster
code, and possibly improved static error checking. It is related to the
Sorry, don't see how. If you think a piece of code is unreachable then don't put it in in the first place!
On Wed, 02 Apr 2025 16:20:05 GMT
scott@slp53.sl.home (Scott Lurndal) wibbled:
Muttley@dastardlyhq.com writes:
On Wed, 2 Apr 2025 16:33:46 +0100
bart <bc@freeuk.com> gabbled:
On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
Meh.
What's the problem with it? Here, tell me at a glance the magnitude of
this number:
10000000000
And how often do you hard code values that large into a program? Almost
never I imagine unless it's some hex value to set flags in a word.
Every day, several times a day. 16 hex digit constants are very
common in my work. The digit separator really helps with readability,
Oh really? What are you doing, hardcoding password hashes?
I think NULL should have been promoted to keyword, just like true and
false.
On Thu, 3 Apr 2025 13:49:48 +0100
bart <bc@freeuk.com> wrote:
I understand C23 mode will be enabled by a compiler option
(-std=c23);
In 2025.
My expectation is, however, that several years down the road it would
be the default. Then people would have to specify compiler options in
order to get an older standard. And at some point older standards will be
dropped. Not only K&R and C90; C99 will be dropped as well. Not that I
expect to live that long.
On Thu, 3 Apr 2025 13:49:48 +0100
bart <bc@freeuk.com> wrote:
On 03/04/2025 09:59, David Brown wrote:
It is bold, perhaps, but there are certainly good reasons.
Perhaps go bolder and drop the need to explicitly include those 30 or
so standard headers. It's ridiculous having to micro-manage the
availability of fundamental language features ('uint8_t' for example!)
in every module.
I don't find it ridiculous.
When I suggested this is the past, people were up in arms about the
overheads of having to compile all those headers (in 2017, they were
3-5K lines in all for gcc on Windows/Linux).
Overhead is a smaller concern. Name clashes are a bigger concern.
Yet the same people think nothing of using libraries like SDL2 (50K
lines of headers) or GTK2 (350K lines).
This does mean that some pre-C23 code will be incompatible with
C23.
This was also my view in the past, to draw a line under 'old' C and
to start using 'new' C.
I understand C23 mode will be enabled by a compiler option
(-std=c23);
In 2025.
My expectation is, however, that several years down the road it would
be the default. Then people would have to specify compiler options in
order to get an older standard. And at some point older standards will be
dropped. Not only K&R and C90; C99 will be dropped as well. Not that I
expect to live that long.
the same method could have been used to enable all std
headers, and for that to be the default.
Hello World then becomes this one-liner:
int main() {puts("Hello, World!");}
Somehow I don't feel excited by the prospect.
bart <bc@freeuk.com> writes:
On 03/04/2025 14:44, Michael S wrote:
Overhead is a smaller concern. Name clashes are a bigger concern.
Examples? Somebody would be foolhardy to use names like 'printf' or
'exit' for their own, unrelated functions. (Compilers will anyway warn
about that.)
I've written my own printf and exit implementations in the
past. Not all C code has a runtime that provides those names.
On 03/04/2025 14:44, Michael S wrote:
Overhead is a smaller concern. Name clashes are a bigger concern.
Examples? Somebody would be foolhardy to use names like 'printf' or
'exit' for their own, unrelated functions. (Compilers will anyway warn
about that.)
On 4/2/2025 11:06 PM, Tim Rentsch wrote:
Kaz Kylheku <643-408-1753@kylheku.com> writes:
[some symbols are defined in more than one header]
(In my opinion, things would be better if headers were not allowed
to behave as if they include other headers, or provide identifiers
also given in other headers. Not in ISO C, and not in POSIX.
Every identifier should be declared in exactly one home header,
and no other header should provide that definition. [...])
Not always practical. A good example is the type size_t. If a
function takes an argument of type size_t, then the symbol size_t
should be defined, no matter which header the function is being
declared in. Similarly for NULL for any function that has defined
behavior on some cases of arguments that include NULL. No doubt
there are other compelling examples.
Yes, basically true.
Headers including headers that have needed functionality makes sense.
At the other extreme, say we have a header, let's just call it
"windows.h", which then proceeds to include nearly everything in the
OS "core". No need to include different headers for the different OS subsystems, this header has got you covered.
But, then one proceeds to "#include" all of the other C files into a
single big translation unit, because it is faster to do everything all
at once than to deal with "windows.h" for each individually (because
even a moderate sized program is still smaller than all the stuff this
header pulls in).
But, then one has to care about relative order of headers, say:
If you want all this extra stuff, "windows.h" needs to be included
first, as the other headers will define _WIN32_LEAN_AND_MEAN (or
something to this effect) which then causes it to omit all of the
stuff that is less likely to be needed.
So, say:
#include <GL/gl.h>
#include <windows.h>
Will give different results from:
#include <windows.h>
#include <GL/gl.h>
...
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
On 03.04.2025 01:32, Scott Lurndal wrote:
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
[...]
Obviously a question of opinion depending on where one comes from.
Verilog uses _ as a digit separator.
And Kornshell's 'printf' uses ',' for output formatting as in
$ printf "%,d\n" 1234567
1,234,567
Maybe it should be configurable?
It is already configurable in ksh
$ LANG=en_US.utf8 printf "$%'10.2f\n" $(( ( 7540.0 * 118.70 ) + ( 2295.0 * 412.88 ) ))
$1,842,557.60
bart <bc@freeuk.com> writes:
[...]
I understand C23 mode will be enabled by a compiler option (-std=c23);
the same method could have been used to enable all std headers, and
for that to be the default.
The standard says exactly nothing about compiler options. "-std=c23"
is a convention used by *some* compilers (gcc and other compilers
designed to be compatible with it).
Hello World then becomes this one-liner:
int main() {puts("Hello, World!");}
A compiler could provide such an option as a non-conforming extension
with no change in the standard. I'm not aware that any compiler
has done so, or that there's been any demand for it. One reason
for the lack of demand might be that any code that depends on it
is not portable. (Older versions of MS Visual Studio create a
"stdafx.h" header, but newer versions appear to have dropped that.)
David Brown <david.brown@hesbynett.no> writes:
[...]
It is bold, perhaps, but there are certainly good reasons. As far as
I can see we have some keywords that have dropped their
underscore-capital form:
alignas
alignof
bool
static_assert
thread_local
The underscore-capital forms still exist as alternate spellings.
Dropping _Bool et al would have broken existing code.
And we have some new ones :
constexpr
false
nullptr
true
typeof
typeof_unequal
That last one is "typeof_unqual".
scott@slp53.sl.home (Scott Lurndal) writes:
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
scott@slp53.sl.home (Scott Lurndal) writes:
bart <bc@freeuk.com> writes:
[...]
So it is not true that you need to include stddef.h, nor obvious
that that is where NULL is defined, if you are used to having it
available indirectly.
Indeed, and it is well documented.
For example, in the POSIX description for the string functions
you'll find the following statement:
[CX] Inclusion of the <string.h> header may also make visible
all symbols from <stddef.h>. [Option End]
This is true for a number of POSIX headers, including those you
enumerate above.
[CX] marks a POSIX extension to ISO C.
How strange. I don't know why anyone would ever want either to
rely on or to take advantage of this property.
Some existing unix implementations at the time the standard was adopted
had that behavior and the committee was not willing to break existing
implementations.
You mean the POSIX standard, yes? The C standard does not permit
<string.h> to include <stddef.h>.
On Thu, 3 Apr 2025 15:05:59 +0200
Opus <ifonly@youknew.org> wrote:
For instance, if I'm not mistaken,
designated initializers, which are very handy and have been available
in C since C99 (25 years ago) have appeared only in C++20, about 20
years later.
AFAIK, even C++23 provides only a subset of C99 designated initializers.
The biggest difference is that in C++ initializers have to be
specified in the same order as declarations for respective fields.
On 4/3/2025 1:12 AM, Keith Thompson wrote:
Kaz Kylheku <643-408-1753@kylheku.com> writes:
On 2025-04-03, bart <bc@freeuk.com> wrote:
On 02/04/2025 17:20, Scott Lurndal wrote:
Muttley@dastardlyhq.com writes:
On Wed, 2 Apr 2025 16:33:46 +0100
bart <bc@freeuk.com> gabbled:
On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
Meh.
What's the problem with it? Here, tell me at a glance the magnitude of
this number:
10000000000
And how often do you hard code values that large into a program? Almost
never I imagine unless it's some hex value to set flags in a word.
Every day, several times a day. 16 hex digit constants are very
common in my work. The digit separator really helps with readability,
although I would have preferred '_' over "'".
Oh, I thought C23 used '_', since Python uses that. I prefer single
quote as that is not shifted on my keyboard. (My language projects just
allow both!)
I made , (comma) the digit separator in TXR Lisp. Nobody uses _ in the
real world.
I understand that in some countries, that is the decimal point. That is
not relevant in programming languages that use a period for that and are
not localized.
Comma means I can just copy and paste a figure from a financial document
or application, or any other document which uses that convention.
The comma couldn't be used in C without the possibility of breaking
existing code, since 123,456 is already a valid expression, and is
likely to occur in a context like `foo(123,456)`.
C23 borrowed 123'456 from C++ rather than 123_456 (which I would have
preferred). C++ chose 123'456 because C++ already used the
underscore for user-defined literals. Apparently some countries, such
as Switzerland, use the apostrophe as a digit separator.
In my compiler, I did both ' and _, ...
Personally though, I prefer using _ as a digit separator in these scenarios.
But, yeah, can't use comma without creating syntactic ambiguity.
So, extended features:
_UBitInt(5) cr, cg, cb;
_UBitInt(16) clr;
clr = (_UBitInt(16)) { 0b0u1, cr, cg, cb };
Composes an RGB555 value.
cg = clr[9:5]; //extract bits
clr[9:5] = cg; //assign bits
clr[15:10] = clr[9:5]; //copy bits from one place to another.
And:
(_UBitInt(16)) { 0b0u1, cr, cg, cb } = clr;
Decomposing it into components, any fixed-width constants being treated
as placeholders.
On 2025-04-03, BGB <cr88192@gmail.com> wrote:
In my compiler, I did both ' and _, ...
Personally though, I prefer using _ as a digit separator in these scenarios.
But, yeah, can't use comma without creating syntactic ambiguity.
False; you can't use comma because of an /existing/ ambiguity.
Comma separation causes problems when arguments can be empty!
In C preprocessing MAC() is actually a macro with one argument,
which is empty.
[...]
I know people can use pre-processor conditional compilation based on
__STDC_VERSION__ to complain if code is compiled with an unexpected or
unsupported standard, but few people outside of library header authors
actually do that. I'd really like :
#pragma STDC VERSION C17
to force the compiler to use the equivalent of "-std=c17
-pedantic-errors" in gcc.
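For reference, the conditional-compilation check mentioned above is
usually written something like this (C17 defines __STDC_VERSION__ as
201710L):

#if !defined(__STDC_VERSION__) || __STDC_VERSION__ < 201710L
#error "this code requires at least C17"
#endif

It can reject the wrong standard, but unlike the suggested #pragma it
cannot switch the compiler into the right one.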
Kaz Kylheku <643-408-1753@kylheku.com> writes:
[...]
One programming language that has comma separators is Fortran,
by the way. Fortran persisted in providing this feature in spite of
shooting itself in the foot with ambiguities.
When Fortran was being designed, people were naive in writing
compilers. They thought that it would simplify things if they
removed all spaces from the code before lexically scanning it and
parsing.
Thus "DO I = 1, 10" becomes "DOI=1,10" and "FO I = 1, 10"
becomes "FOI=1,10"
After that you have to figure out that "DOI=1,10" is the
header of a DO loop which steps I from 1 to 10,
whereas "FOI=1,10" assigns 110 to variable FOI.
I don't think that's correct. My quick experiments with gfortran
indicate that commas are *not* treated as digit separators.
The classic Fortran (or FORTRAN?) error was that:
DO 10 I = 1,100
(a loop with bounds 1 to 100) was written as:
DO 10 I = 1.100
(which assigns the value 1.100 to the variable DO10I).
An urban legend says that this error caused the loss of a spacecraft.
In fact the error was caught and corrected before launch.
Wow, consistency. And no dangling comma nonsense to deal with in
complex, variadic macros!
Would MAC("foo" "bar") have one argument or two?
On 02/04/2025 23:43, Janis Papanagnou wrote:
On 02.04.2025 16:59, David Brown wrote:
[...]
From the next version beyond C23, so far there is :
1. Declarations in "if" and "switch" statements, like those in "for"
loops, help keep local variable scopes small and neat.
Oh, I thought that would already be supported in some existing "C"
version for the 'if'; I probably confused that with C++.
C++17 has it.
I guess the C committee waited until C++17 had been common enough that
they could see if it was useful in real code, and if it led to any unexpected problems in code or compilers before copying it for C.
On 03/04/2025 20:37, Kaz Kylheku wrote:
On 2025-04-03, BGB <cr88192@gmail.com> wrote:
In my compiler, I did both ' and _, ...
Personally though, I prefer using _ as a digit separator in these scenarios.
But, yeah, can't use comma without creating syntactic ambiguity.
False; you can't use comma because of an /existing/ ambiguity.
Commas are overwhelmingly used to separate list elements in programming languages.
They only become possible for numeric separators if you abandon any sort
of normal syntax and use one based, for example, on Lisp.
Even then, someone looking at your language and seeing:
55,688
isn't going to see the number 55688, they will see two numbers, 55
and 688,
because that is what they expect from a typical
programming language.
Even when they normally use "," for decimal point, they're not going to
see 55.688 either, for the same reason.
In my view, comma is 100 times more valuable as a list separator, than
in being able to write 1,000,000 (which I can do as 1'000'000 or
1_000_000 or even 1 million).
On Wed, 2 Apr 2025 11:12:07 -0300 Thiago Adams <thiago.adams@gmail.com> wibbled:
- digit separator (better, safer)
Meh.
Muttley@DastardlyHQ.org writes:
On Wed, 02 Apr 2025 16:20:05 GMT
scott@slp53.sl.home (Scott Lurndal) wibbled:
Muttley@dastardlyhq.com writes:
On Wed, 2 Apr 2025 16:33:46 +0100
bart <bc@freeuk.com> gabbled:
On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
Meh.
What's the problem with it? Here, tell me at a glance the magnitude of
this number:
10000000000
And how often do you hard code values that large into a program? Almost
never I imagine unless it's some hex value to set flags in a word.
Every day, several times a day. 16 hex digit constants are very
common in my work. The digit separator really helps with readability,
Oh really? What are you doing, hardcoding password hashes?
Modeling a very complicated 64-bit system-on-chip.
On 4/2/2025 1:09 PM, Chris M. Thomasson wrote:
On 4/2/2025 8:16 AM, Muttley@DastardlyHQ.org wrote:
On Wed, 2 Apr 2025 14:12:18 -0000 (UTC)
antispam@fricas.org (Waldek Hebisch) wibbled:
Muttley@dastardlyhq.org wrote:
On Wed, 2 Apr 2025 10:57:29 +0100
bart <bc@freeuk.com> wibbled:
On 02/04/2025 06:59, Alexis wrote:
Thought people here might be interested in this image on Jens Gustedt's
blog, which translates section 6.2.5, "Types", of the C23 standard
into a graph of inclusions:
https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/
So much for C being a 'simple' language.
C should be left alone. It does what it needs to do for a systems
language.
Almost no one uses it for applications any more and sophisticated
processing using complex types for example is far better done in C++.
C99 has VMT (variably modified types). Thanks to VMT and complex types
C99 can naturally do numeric computing that previously was done using
Fortran 77. Official C++ has no VMT. C++ mechanisms look nicer,
Officially no, but I've never come across a C++ compiler that didn't
support
them given they're all C compilers too.
All C++ compilers are also C compilers?
To answer my own sarcastic question: No way. :^)
Human readers prefer clear code to comments. Comments get out of sync -
code does not.
Ignorance is curable - wilful ignorance is much more stubborn. But I
will try.
Let me give you an example, paraphrased from the C23 standard:
#include <stddef.h>
enum Colours { red, green, blue };
unsigned int colour_to_hex(enum Colours c) {
switch (c) {
case red : return 0xff'00'00;
case green : return 0x00'ff'00;
case blue : return 0x00'00'ff;
}
unreachable();
}
With "unreachable()", "gcc -std=c23 -O2 -Wall" gives :
colour_to_hex:
mov edi, edi
mov eax, DWORD PTR CSWTCH.1[0+rdi*4]
ret
Without it, it gives :
colour_to_hex:
cmp edi, 2
ja .L1
mov edi, edi
mov eax, DWORD PTR CSWTCH.1[0+rdi*4]
.L1:
ret
Neither "// This should never be reached" nor "assert(false);" is a
suitable alternative.
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
scott@slp53.sl.home (Scott Lurndal) writes:
bart <bc@freeuk.com> writes:
[...]
So it is not true that you need to include stddef.h, nor obvious
that that is where NULL is defined, if you are used to having it
available indirectly.
Indeed, and it is well documented.
For example, in the POSIX description for the string functions
you'll find the following statement:
[CX] Inclusion of the <string.h> header may also make visible
all symbols from <stddef.h>. [Option End]
This is true for a number of POSIX headers, including those you
enumerate above.
[CX] marks a POSIX extension to ISO C.
How strange. I don't know why anyone would ever want either to
rely on or to take advantage of this property.
Some existing unix implementations at the time the standard was
adopted had that behavior and the committee was not willing to
break existing implementations.
Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
scott@slp53.sl.home (Scott Lurndal) writes:
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
scott@slp53.sl.home (Scott Lurndal) writes:
bart <bc@freeuk.com> writes:
[...]
So it is not true that you need to include stddef.h, nor obvious
that that is where NULL is defined, if you are used to having
it available indirectly.
Indeed, and it is well documented.
For example, in the POSIX description for the string functions
you'll find the following statement:
[CX] Inclusion of the <string.h> header may also make
visible all symbols from <stddef.h>. [Option End]
This is true for a number of POSIX headers, including those you
enumerate above.
[CX] marks a POSIX extension to ISO C.
How strange. I don't know why anyone would ever want either to
rely on or to take advantage of this property.
Some existing unix implementations at the time the standard was
adopted had that behavior and the committee was not willing to
break existing implementations.
You mean the POSIX standard, yes? The C standard does not permit
<string.h> to include <stddef.h>.
Yes, and POSIX explicitly marks it as an extension to the C
standard.
On 4/4/2025 2:43 AM, Muttley@DastardlyHQ.org wrote:
On Thu, 3 Apr 2025 16:01:18 -0700
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
On 4/2/2025 1:09 PM, Chris M. Thomasson wrote:
On 4/2/2025 8:16 AM, Muttley@DastardlyHQ.org wrote:
On Wed, 2 Apr 2025 14:12:18 -0000 (UTC)
antispam@fricas.org (Waldek Hebisch) wibbled:
Muttley@dastardlyhq.org wrote:
On Wed, 2 Apr 2025 10:57:29 +0100
bart <bc@freeuk.com> wibbled:
On 02/04/2025 06:59, Alexis wrote:
Thought people here might be interested in this image on Jens Gustedt's
blog, which translates section 6.2.5, "Types", of the C23 standard
into a graph of inclusions:
https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/
So much for C being a 'simple' language.
C should be left alone. It does what it needs to do for a systems language.
Almost no one uses it for applications any more and sophisticated
processing using complex types for example is far better done in C++.
C99 has VMT (variably modified types). Thanks to VMT and complex types
C99 can naturally do numeric computing that previously was done using
Fortran 77. Official C++ has no VMT. C++ mechanisms look nicer,
Officially no, but I've never come across a C++ compiler that didn't
support them given they're all C compilers too.
All C++ compilers are also C compilers?
To answer my own sarcastic question: No way. :^)
So name one that isn't. Fairly simple way to prove your point.
Try to compile this in a C++ compiler:
_____________
#include <stdlib.h>
#include <stdio.h>
int main() {
void *p = malloc(sizeof(int));
int *ip = p;
free(p);
printf("done\n");
return 0;
}
_____________
What am I missing?
On 03.04.2025 16:58, David Brown wrote:
[...]
I know people can use pre-processor conditional compilation based on
__STDC_VERSION__ to complain if code is compiled with an unexpected or
unsupported standard, but few people outside of library header authors
actually do that. I'd really like :
#pragma STDC VERSION C17
to force the compiler to use the equivalent of "-std=c17
-pedantic-errors" in gcc.
(I understand the wish to have that #pragma supported.)
Can there be linking problems when different "C" modules have
been compiled with different '-std=cXX' or '#pragma STDC ...'
settings? - The question just occurred to me.
On 2025-04-03, BGB <cr88192@gmail.com> wrote:
On 4/3/2025 1:12 AM, Keith Thompson wrote:
Kaz Kylheku <643-408-1753@kylheku.com> writes:
On 2025-04-03, bart <bc@freeuk.com> wrote:
On 02/04/2025 17:20, Scott Lurndal wrote:
Muttley@dastardlyhq.com writes:
On Wed, 2 Apr 2025 16:33:46 +0100
But, yeah, can't use comma without creating syntactic ambiguity.
False; you can't use comma because of an /existing/ ambiguity.
(In fact you could still use a comma; the "only" problem is you would
break some programs. If this is your own language that nobody else
uses, that might not be a significant objection.)
When you've designed the language such that f(1,234.00) is a function
call with two arguments, equivalent to f(1, 234.00), that's where
you created the ambiguity.
Your rules for tokenizing and parsing may be unambiguous, but it's
visually ambiguous to a human.
You should have seen it coming when allowing comma punctuators to
separate arguments, without any surrounding whitespace being required.
Now you can't have nice things, like the comma digit separators that
everyone uses in the English speaking world that uses . for the
decimal separators.
By the way ...
One programming language that has comma separators is Fortran,
by the way. Fortran persisted in providing this feature in spite of
shooting itself in the foot with ambiguities.
When Fortran was being designed, people were naive in writing
compilers. They thought that it would simplify things if they
removed all spaces from the code before lexically scanning it and
parsing.
Thus "DO I = 1, 10" becomes "DOI=1,10" and "FO I = 1, 10"
becomes "FOI=1,10"
After that you have to figure out that "DOI=1,10" is the
header of a DO loop which steps I from 1 to 10,
whereas "FOI=1,10" assigns 110 to variable FOI.
Removing spaces before scanning anything is a bad idea.
Not requiring spaces between certain tokens is also a bad idea.
In the token sequence 3) we wouldn't want to require a space
between 3 and ).
But it's a good idea to require 1,3 to be 1, 3 (if two numeric
tokens separated by a comma are intended and not the
number 1,3).
Commas are "fluff punctuators". They could be removed without
making a difference to the abstract syntax.
Fun fact: early Lisp (when it was called LISP) had commas
in lists. They were optional. (1, 2, 3) or (1 2 3). Your
choice.
Comma separation causes problems when arguments can be empty!
In C preprocessing MAC() is actually a macro with one argument,
which is empty. MAC(,) is a macro with two empty arguments
and so on. You cannot write a macro call with zero arguments.
Now, if macros didn't use commas, there wouldn't be a problem
at all: MAC() -> zero args; MAC(abc) -> one arg;
MAC(abc 2) -> two args.
Wow, consistency. And no dangling comma nonsense to deal with in
complex, variadic macros!
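The empty-argument point is easy to demonstrate; stringizing makes the
argument count visible (macro names invented for the demo):

#include <stdio.h>
#define ONE(a)    printf("one arg: \"%s\"\n", #a)
#define TWO(a, b) printf("two args: \"%s\" \"%s\"\n", #a, #b)
int main(void) {
    ONE();    /* one empty argument:  prints  one arg: ""      */
    TWO(,);   /* two empty arguments: prints  two args: "" ""  */
    return 0;
}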
On 03/04/2025 16:26, Scott Lurndal wrote:
bart <bc@freeuk.com> writes:
On 03/04/2025 14:44, Michael S wrote:
Overhead is a smaller concern. Name clashes are a bigger concern.
Examples? Somebody would be foolhardy to use names like 'printf' or
'exit' for their own, unrelated functions. (Compilers will anyway warn
about that.)
I've written my own printf and exit implementations in the
past. Not all C code has a runtime that provides those names.
Then you have to specify, somehow, that you don't want those
automatically included.
On Thu, 3 Apr 2025 15:58:05 +0200
David Brown <david.brown@hesbynett.no> wibbled:
Human readers prefer clear code to comments. Comments get out of sync -
code does not.
That's not a reason for not using comments.
It's very easy to understand your
own code that you've just written - not so much for someone else or for
you years down the line.
Ignorance is curable - wilful ignorance is much more stubborn. But I
will try.
Guffaw! You should do standup.
Let me give you an example, paraphrased from the C23 standard:
#include <stddef.h>
enum Colours { red, green, blue };
unsigned int colour_to_hex(enum Colours c) {
switch (c) {
case red : return 0xff'00'00;
case green : return 0x00'ff'00;
case blue : return 0x00'00'ff;
}
unreachable();
}
With "unreachable()", "gcc -std=c23 -O2 -Wall" gives :
colour_to_hex:
mov edi, edi
mov eax, DWORD PTR CSWTCH.1[0+rdi*4]
ret
Without it, it gives :
colour_to_hex:
cmp edi, 2
ja .L1
mov edi, edi
mov eax, DWORD PTR CSWTCH.1[0+rdi*4]
.L1:
ret
Except it's not unreachable, is it?
There's nothing in C to prevent you
calling that function with a value other than those defined in the enum,
so what happens if there's a bug and it hits unreachable?
Oh that's right, it's
"undefined", i.e. a crash or hidden bug with bugger all info.
Neither "// This should never be reached" nor "assert(false);" is a
suitable alternative.
In your opinion. I would never use that example above, it's just asking for trouble down the line.
Also FWIW, putting separators in the hex values makes it less readable to me, not more.
antispam@fricas.org (Waldek Hebisch) writes:
[...]
People should know the language they use. The whole point of using [...]
a different language is because of some special features. So
one should know them.
Whitespace actually may be quite a good list separator. But using
commas in numbers is too confusing, there are too many conventions
used when printing numbers. My favorite is underscore for grouping,
1_000.005 has only one sensible meaning, while 1.000,005 and 1,000.005
can be easily confused.
People should know the language they use.
On 03.04.2025 11:03, David Brown wrote:
On 02/04/2025 23:43, Janis Papanagnou wrote:
On 02.04.2025 16:59, David Brown wrote:
[...]
From the next version beyond C23, so far there is :
1. Declarations in "if" and "switch" statements, like those in "for"
loops, help keep local variable scopes small and neat.
Oh, I thought that would already be supported in some existing "C"
version for the 'if'; I probably confused that with C++.
C++17 has it.
I guess the C committee waited until C++17 had been common enough that
they could see if it was useful in real code, and if it led to any
unexpected problems in code or compilers before copying it for C.
Really, that recent!? - I was positive that I used it long before 2017
during the days when I did quite regularly C++ programming. - Could it
be that some GNU compiler (C++ or "C") supported that before it became
C++ standard?
Janis
On Thu, 03 Apr 2025 14:14:20 GMT
scott@slp53.sl.home (Scott Lurndal) wibbled:
Muttley@DastardlyHQ.org writes:
On Wed, 02 Apr 2025 16:20:05 GMT
scott@slp53.sl.home (Scott Lurndal) wibbled:
Muttley@dastardlyhq.com writes:
On Wed, 2 Apr 2025 16:33:46 +0100
bart <bc@freeuk.com> gabbled:
On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
Meh.
What's the problem with it? Here, tell me at a glance the magnitude of
this number:
10000000000
And how often do you hard code values that large into a program? Almost
never I imagine unless it's some hex value to set flags in a word.
Every day, several times a day. 16 hex digit constants are very
common in my work. The digit separator really helps with readability,
Oh really? What are you doing, hardcoding password hashes?
Modeling a very complicated 64-bit system-on-chip.
If you're hardcoding all that you're doing it wrong. Should be in some kind of loaded config file.
On 03.04.2025 13:07, Muttley@DastardlyHQ.org wrote:
On Thu, 3 Apr 2025 11:41:31 +0200
David Brown <david.brown@hesbynett.no> wibbled:
[ "unreachable()" is now standard. ]
I can't tell you what Scott uses it for, but I have used gcc's
__builtin_unreachable() a fair number of times in my coding. I use it
to inform both the compiler and human readers that a path is unreachable:
What for? The compiler doesn't care and a human reader would probably
prefer a meaningful comment if it's not obvious. If you're worried about
the code accidentally going there use an assert.
switch (x) {
case 1 : ...
case 2 : ...
case 3 : ...
default : __builtin_unreachable();
}
I can also use it to inform the compiler about data :
if ((x < 0) || (x > 10)) __builtin_unreachable();
// x must be 1 .. 10
And that'll do what? You want the compiler to compile in a hidden value check?
I also don't see a point here; myself I'd write some sort of assertion
in such cases, depending on the application case either just temporary
for tests or a static one with sensible handling of the case.
Good use of __builtin_unreachable() can result in smaller and faster
code, and possibly improved static error checking. It is related to the
Sorry, don't see how. If you think a piece of code is unreachable then don't put it in in the first place!
Let me give that another spin...
In cases like the above 'switch' code I have the habit to (often) provide
a default branch that contains a fprintf(stderr, "Internal error: ..."
or a similar logging command and some form of exit or trap/catch code.
I want some safety for the cases where in the _evolving_ program bugs
sneak in by an oversight.[*]
Personally I don't care about a compiler that is clever enough to warn
me, say, about a lacking default branch but not clever enough to notice
that it's intentional and cannot be reached (say, in context of enums).
I can understand that it might be of use for others, though. (There's certainly some demand if it's now standard.)
I'm uninformed about __builtin_unreachable(), I don't know whether it
can be overloaded, user-defined, or anything.
If that's not the case
I'd anyway write my own "Internal error: unexpected ..." function to
use that in all such cases for error detection and tracking of bugs.
Janis
[*] This habit is actually a very old one and most probably resulting
from an early observation with one of my first Simula programs coded
on a mainframe that told me: "Internal error! Please contact the NCC
in Oslo." - BTW; a nice suggestion, but useless since back these days
there was no Email available to me and the NCC was in another country.
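In C, that habit might be captured in a small helper along these lines
(the macro name and message wording are mine, not from any library):

#include <stdio.h>
#include <stdlib.h>
#define INTERNAL_ERROR(what) \
    (fprintf(stderr, "Internal error: unexpected %s at %s:%d\n", \
             (what), __FILE__, __LINE__), \
     abort())
/* in a switch over an enum:
       default: INTERNAL_ERROR("colour value");
*/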
On Fri, 4 Apr 2025 03:25:23 -0700
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
On 4/4/2025 2:43 AM, Muttley@DastardlyHQ.org wrote:
On Thu, 3 Apr 2025 16:01:18 -0700
"Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
On 4/2/2025 1:09 PM, Chris M. Thomasson wrote:
On 4/2/2025 8:16 AM, Muttley@DastardlyHQ.org wrote:
On Wed, 2 Apr 2025 14:12:18 -0000 (UTC)
antispam@fricas.org (Waldek Hebisch) wibbled:
Muttley@dastardlyhq.org wrote:
On Wed, 2 Apr 2025 10:57:29 +0100
bart <bc@freeuk.com> wibbled:
On 02/04/2025 06:59, Alexis wrote:
Thought people here might be interested in this image on Jens Gustedt's
blog, which translates section 6.2.5, "Types", of the C23 standard
into a graph of inclusions:
https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/
So much for C being a 'simple' language.
C should be left alone. It does what it needs to do for a systems language.
Almost no one uses it for applications any more and sophisticated
processing using complex types for example is far better done in C++.
C99 has VMT (variably modified types). Thanks to VMT and complex types
C99 can naturally do numeric computing that previously was done using
Fortran 77. Official C++ has no VMT. C++ mechanisms look nicer,
Officially no, but I've never come across a C++ compiler that didn't
support them given they're all C compilers too.
All C++ compilers are also C compilers?
To answer my own sarcastic question: No way. :^)
So name one that isn't. Fairly simple way to prove your point.
Try to compile this in a C++ compiler:
_____________
#include <stdlib.h>
#include <stdio.h>
int main() {
void *p = malloc(sizeof(int));
int *ip = p;
free(p);
printf("done\n");
return 0;
}
_____________
$ cc -v
Apple clang version 16.0.0 (clang-1600.0.26.6)
Target: arm64-apple-darwin24.3.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
$ cc t.c
$ a.out
done
What am I missing?
You tell me mate.
On 04/04/2025 12:28, Muttley@DastardlyHQ.org wrote:
What am I missing?
You tell me mate.
You are using a combined C and C++ compiler in C mode, and it compiles
the C program as C. In that sense, most C++ compilers are also C compilers.
On 04/04/2025 11:40, Muttley@DastardlyHQ.org wrote:
On Thu, 3 Apr 2025 15:58:05 +0200
David Brown <david.brown@hesbynett.no> wibbled:
Human readers prefer clear code to comments. Comments get out of sync -
code does not.
That's not a reason for not using comments.
It is a reason for never using a comment when you can express the same
thing in code.
If that's your problem, write better code - not more comments.
Comments should say /why/ you are doing something, not /what/ you are doing.
Except it's not unreachable, is it?
It /is/ unreachable. That's why I wrote it.
There's nothing in C to prevent you
calling that function with a value other than those defined in the enum,
so what happens if there's a bug and it hits unreachable?
There's nothing in the English language preventing me from calling you a
"very stable genius" - but I can assure you that it is not going to happen.
Oh that's right, it's
"undefined", i.e. a crash or hidden bug with bugger all info.
Welcome to the world of software development. If I specify a function
as working for input values "red", "green", and "blue", and you choose
to misuse it, that is /your/ fault, not mine. I write the code to work
with valid inputs and give no promises about what will happen with any
other input.
Also FWIW, putting separators in the hex values makes it less readable to me, not more.
Again, that's /your/ problem.
On 04/04/2025 04:50, Janis Papanagnou wrote:
Really, that recent!? - I was positive that I used it long before 2017
during the days when I did quite regularly C++ programming. - Could it
be that some GNU compiler (C++ or "C") supported that before it became
C++ standard?
Janis
To be clear, we are talking about :
if (int x = get_next_value(); x > 10) {
// We got a big value!
}
It was added in C++17. <https://en.cppreference.com/w/cpp/language/if>
gcc did not have it as an extension, but they might have had it in the pre-standardised support for C++17 (before C++17 was published, gcc had "-std=c++1z" to get as many proposed C++17 features as possible before
they were standardised. gcc has similar "pre-standard" support for all
C and C++ versions).
On Fri, 4 Apr 2025 13:39:06 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 04/04/2025 11:40, Muttley@DastardlyHQ.org wrote:
On Thu, 3 Apr 2025 15:58:05 +0200
David Brown <david.brown@hesbynett.no> wibbled:
Human readers prefer clear code to comments. Comments get out of sync - >>>> code does not.
That's not a reason for not using comments.
It is a reason for never using a comment when you can express the same
thing in code.
If that's your problem, write better code - not more comments.
Ah, the typical arrogant programmer who thinks their code is so well
written that anyone can understand it and comments aren't required. Glad
I don't have to work on anything you've written.
Comments should say /why/ you are doing something, not /what/ you are doing.
Rubbish. A lot of the time what is being done is just as obtuse as why.
Except its not unreachable is it?
It /is/ unreachable. That's why I wrote it.
Really?
int main()
{
colour_to_hex(10);
return 0;
}
You have no idea how someone might try and use that function in the future.
Just assuming they'll always pass parameters within limits is not just cretinous, it's dangerous.
There's nothing in C to prevent you
calling that function with a value other than those defined in the enum,
so what happens if there's a bug and it hits unreachable?
There's nothing in the English language preventing me from calling you a
"very stable genius" - but I can assure you that it is not going to happen.
Poor analogy.
Oh that's right, it's
"undefined", i.e. a crash or hidden bug with bugger all info.
Welcome to the world of software development. If I specify a function
as working for input values "red", "green", and "blue", and you choose
to misuse it, that is /your/ fault, not mine. I write the code to work
with valid inputs and give no promises about what will happen with any
other input.
It's your fault if it dies in a heap with no info or, worse, returns but does some random shit.
Any well written API function should do at least basic
sanity checking on its inputs and return a fail or assert unless it's very low level and speed is the priority, e.g. strlen().
But then you're arrogant, so no surprise really.
Also FWIW, putting seperators in the hex values makes it less readable to me
not more.
Again, that's /your/ problem.
See above.
On Fri, 04 Apr 2025 13:42:04 GMT
scott@slp53.sl.home (Scott Lurndal) wibbled:
Muttley@DastardlyHQ.org writes:
On Thu, 03 Apr 2025 14:14:20 GMT
scott@slp53.sl.home (Scott Lurndal) wibbled:
Muttley@DastardlyHQ.org writes:
On Wed, 02 Apr 2025 16:20:05 GMT
scott@slp53.sl.home (Scott Lurndal) wibbled:
Muttley@dastardlyhq.com writes:
On Wed, 2 Apr 2025 16:33:46 +0100
bart <bc@freeuk.com> gabbled:
On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
Meh.
What's the problem with it? Here, tell me at a glance the magnitude of
this number:
10000000000
And how often do you hard code values that large into a program? Almost
never I imagine unless it's some hex value to set flags in a word.
Every day, several times a day. 16 hex digit constants are very
common in my work. The digit separator really helps with readability,
Oh really? What are you doing, hardcoding password hashes?
Modeling a very complicated 64-bit system-on-chip.
If you're hardcoding all that you're doing it wrong. Should be in some kind of loaded config file.
You're flailing around in the dark. Again.
It's good practice. Feel free to not follow it.
On 04/04/2025 16:02, Muttley@DastardlyHQ.org wrote:
You are using a combined C and C++ compiler in C mode, and it compiles
the C program as C. In that sense, most C++ compilers are also C compilers.
Err yes! That's the whole point!!
Then if we back up the thread to where you said C programmers could just
use a C++ compiler to get new features, you were clearly wrong. Of
course, we all knew you were wrong already, the only question was in
what way you were wrong.
On 04/04/2025 16:10, Muttley@DastardlyHQ.org wrote:
On Fri, 4 Apr 2025 13:39:06 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 04/04/2025 11:40, Muttley@DastardlyHQ.org wrote:
On Thu, 3 Apr 2025 15:58:05 +0200
David Brown <david.brown@hesbynett.no> wibbled:
Human readers prefer clear code to comments. Comments get out of sync -
code does not.
That's not a reason for not using comments.
It is a reason for never using a comment when you can express the same
thing in code.
If that's your problem, write better code - not more comments.
Ah, the typical arrogant programmer who thinks their code is so well
written that anyone can understand it and comments aren't required. Glad
I don't have to work on anything you've written.
Arrogance would be judging my code without having seen it. Writing code
that is clear and does not require comments to say what it does is not
arrogance - it is good coding.
Rubbish. A lot of the time what is being done is just as obtuse as why.
That can /occasionally/ be the case. But if it happens a lot of the
time, you are writing poor code. It's time to refactor or rename.
int main()
{
colour_to_hex(10);
return 0;
}
UB. It's /your/ fault.
Just assuming they'll always pass parameters within limits is not just
cretinous, it's dangerous.
Nope. It is how software development works. If you don't understand
On 04/04/2025 14:38, David Brown wrote:
On 04/04/2025 04:50, Janis Papanagnou wrote:
Really, that recent!? - I was positive that I used it long before 2017
during the days when I did quite regularly C++ programming. - Could it
be that some GNU compiler (C++ or "C") supported that before it became
C++ standard?
Janis
To be clear, we are talking about :
if (int x = get_next_value(); x > 10) {
// We got a big value!
}
It was added in C++17. <https://en.cppreference.com/w/cpp/language/if>
gcc did not have it as an extension, but they might have had it in the
pre-standardised support for C++17 (before C++17 was published, gcc
had "-std=c++1z" to get as many proposed C++17 features as possible
before they were standardised. gcc has similar "pre-standard" support
for all C and C++ versions).
So, this is still a proposal for C, as it doesn't work for any current version of C (I should have read the above more carefully first!).
There appear to be two new features:
* Allowing a declaration where a conditional expresson normally goes
* Having several things there separated with ";" (yes, here ";" is a separator, not a terminator).
Someone said they weren't excited by my proposal of being able to leave
out '#include <stdio.h>'. Well I'm not that excited by this.
In fact I would actively avoid such a feature, as it adds clutter to
code. It might look at first as though it saves you having to add a
separate declaration, until you're writing the pattern for the fourth
time in your function and realised you now have 4 declarations for 'x'!
And also the type of 'x' is hardcoded in four places instead of one (so
if 'get_next_value' changes its return type, you now have more
maintenance and a real risk of missing out one).
(If you say that those 4 instances could call different functions so
each 'x' is a different type, then it would be a different kind of anti-pattern.)
Currently it would need this (it is assumed that 'x' is needed in the
body):
int x;
if ((x = get_next_value()) > 10) {
// We got a big value!
}
It's a little cleaner. (Getting rid of those outer parameter parentheses would be far more useful IMO.)
(My language can already do this stuff:
if int x := get_next_value(); x > 10 then
println "big"
fi
But it is uncommon, and it would be over-cluttery even here. However I
don't have the local scope of 'x' that presumably is the big deal in the
C++ feature.)
On Fri, 4 Apr 2025 17:28:42 +0200
David Brown <david.brown@hesbynett.no> gabbled:
On 04/04/2025 16:02, Muttley@DastardlyHQ.org wrote:
You are using a combined C and C++ compiler in C mode, and it compiles >>>> the C program as C. In that sense, most C++ compilers are also C
Err yes! Thats the whole point!!
Then if we back up the thread to where you said C programmers could
just use a C++ compiler to get new features, you were clearly wrong.
Of course, we all knew you were wrong already, the only question was
in what way you were wrong.
You think having to add an extra cast is so onerous that it doesn't count
as C any more? Any decent C dev would add it by default. Obviously I don't include you in that grouping.
On Wed, 2 Apr 2025 14:12:18 -0000 (UTC)
antispam@fricas.org (Waldek Hebisch) wibbled:
Muttley@dastardlyhq.org wrote:
On Wed, 2 Apr 2025 10:57:29 +0100
bart <bc@freeuk.com> wibbled:
On 02/04/2025 06:59, Alexis wrote:
Thought people here might be interested in this image on Jens Gustedt's
blog, which translates section 6.2.5, "Types", of the C23 standard
into a graph of inclusions:
https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/
So much for C being a 'simple' language.
C should be left alone. It does what it needs to do for a systems language.
Almost no one uses it for applications any more and sophisticated processing
using complex types for example is far better done in C++.
C99 has VMT (variably modified types). Thanks to VMT and complex types
C99 can naturally do numeric computing that previously was done using
Fortran 77. Official C++ has no VMT. C++ mechanisms look nicer,
Officially no, but I've never come across a C++ compiler that didn't support
them given they're all C compilers too.
James Kuyper <jameskuyper@alumni.caltech.edu> writes:
Muttley@dastardlyhq.org wrote:
On Wed, 2 Apr 2025 14:12:18 -0000 (UTC)
...
antispam@fricas.org (Waldek Hebisch) wibbled:
C99 has VMT (variably modified types). Thanks to VMT and complex types
C99 can naturally do numeric computing that previously was done using
Fortran 77. Official C++ has no VMT. C++ mechanisms look nicer,
Officially no, but I've never come across a C++ compiler that didn't support
them given they're all C compilers too.
There exist many programs that can compile either C code or C++ code,
depending either upon the extension of the file name or explicit command
line options to determine which language's rules to apply. That doesn't
qualify. Do you know of any compiler that accepts VMTs when compiling
according to C++ rules? If so, please provide an example. It will help
if the code has some features that are well-formed code in C++, but
syntax errors in C, to make it clear that C++'s rules are being implemented.
g++ and clang++ both do so:
int main() {
class foo { };
int len = 42;
int vla[len];
}
Both warn about the variable length array when invoked with "-pedantic"
and reject it with "-pedantic-errors".
Microsoft's C and C++ compilers do not support VLAs. (Their C compiler
never supported C99, and VLAs were made optional in C11, so that's not a conformance issue.)
David Brown <david.brown@hesbynett.no> writes:
[...]
It is easy to write code that is valid C23, using a new feature copied
from C++, but which is not valid C++ :
constexpr size_t N = sizeof(int);   /* C23 constexpr, borrowed from C++ */
int * p = malloc(N);                /* implicit void* conversion: valid C,
                                       ill-formed C++ */
It's much easier than that.
int class;
Every C compiler will accept that. Every C++ compiler will reject
it. (I think the standard only requires a diagnostic, which can
be non-fatal, but I'd be surprised to see a C or C++ compiler that
generates an object file after encountering a syntax error).
Muttley seems to think that because, for example, "gcc -c foo.c"
will compile C code and "gcc -c foo.cpp" will compile C++ code,
the C and C++ compilers are the same compiler. In fact they're
distinct frontends with shared backend code, invoked differently
based on the source file suffix. (And "g++" is recommended for C++
code, but let's not get into that.)
For the same compiler to compile both C and C++, assuming you don't unreasonably stretch the meaning of "same compiler", you'd have to
have a parser that conditionally recognizes "class" as a keyword or
as an identifier, among a huge number of other differences between
the two grammars. As far as I know, nobody does that.
You and I know he's wrong. Arguing with him is a waste of everyone's
time.
On Thu, 3 Apr 2025 15:05:59 +0200
Opus <ifonly@youknew.org> wrote:
For instance, if I'm not mistaken,
designated initializers, which are very handy and have been available
in C since C99 (25 years ago), appeared in C++ only with C++20, about
20 years later.
AFAIK, even C++23 provides only a subset of C99 designated initializers.
The biggest difference is that in C++ initializers have to be
specified in the same order as declarations for respective fields.
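A minimal illustration of that difference (a hypothetical struct, assuming
a C99-or-later C compiler and a C++20 compiler):

struct point { int x, y, z; };

/* Valid C99: members may be designated in any order */
struct point p = { .z = 3, .x = 1 };

/* C++20 requires designators in declaration order, so { .z = 3, .x = 1 }
   is ill-formed there; this form is accepted by both languages: */
struct point q = { .x = 1, .z = 3 };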
antispam@fricas.org (Waldek Hebisch) writes:
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:[...]
Not always practical. A good example is the type size_t. If a
function takes an argument of type size_t, then the symbol size_t
should be defined, no matter which header the function is being
declared in.
Why? One can use a type without a name for such type.
Convenience and existing practice. Sure, an implementation of
<string.h> could provide a declaration of memcpy() without making
size_t visible, but what would be the point?
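(A sketch of how such a declaration could be written without the name,
using the C23 typeof operator; no real implementation is claimed to do
this. Since sizeof yields a value of type size_t, typeof(sizeof 0)
denotes that type anonymously:)

void *memcpy(void * restrict dest, const void * restrict src,
             typeof(sizeof 0) n);   /* typeof(sizeof 0) is size_t */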
On 03.04.2025 16:58, David Brown wrote:
[...]
I know people can use pre-processor conditional compilation based on
__STDC_VERSION__ to complain if code is compiled with an unexpected or
unsupported standard, but few people outside of library header authors
actually do that. I'd really like :
#pragma STDC VERSION C17
to force the compiler to use the equivalent of "-std=c17
-pedantic-errors" in gcc.
(I understand the wish to have that #pragma supported.)
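(The conditional-compilation check being described would look something
like this; 201710L is the __STDC_VERSION__ value for C17:)

#if !defined(__STDC_VERSION__) || __STDC_VERSION__ < 201710L
#error "this code requires C17 or later"
#endif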
On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:
Here, tell me at a glance the magnitude of
this number:
10000000000
#define THOUSAND 1000
#define MILLION (THOUSAND * THOUSAND)
#define BILLION (THOUSAND * MILLION)
uint64 num = 10 * BILLION;
Much easier to figure out, don’t you think?
On 04/04/2025 04:01, Lawrence D'Oliveiro wrote:
On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:
Here, tell me at a glance the magnitude of
this number:
10000000000
#define THOUSAND 1000
#define MILLION (THOUSAND * THOUSAND)
#define BILLION (THOUSAND * MILLION)
uint64 num = 10 * BILLION;
Much easier to figure out, don’t you think?
Try 20 * BILLION; it will overflow if not careful.
I'd normally write '20 billion' outside of C, since I use such
numbers, with lots of zeros, constantly when writing test code.
But when it isn't all zeros, or the base isn't 10, then numeric
separators are better.
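(For what it's worth, C23 did adopt a digit separator, the single quote,
borrowed from C++14; a small sketch:)

unsigned long long num = 10'000'000'000ULL;  /* ten billion, at a glance */
unsigned mask = 0b0101'0101;                 /* works in any base; the 0b
                                                prefix is also C23 */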
On Mon, 7 Apr 2025 19:02:34 +0100
bart <bc@freeuk.com> wrote:
On 04/04/2025 04:01, Lawrence D'Oliveiro wrote:
On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:
Here, tell me at a glance the magnitude of
this number:
10000000000
#define THOUSAND 1000
#define MILLION (THOUSAND * THOUSAND)
#define BILLION (THOUSAND * MILLION)
uint64 num = 10 * BILLION;
Much easier to figure out, don’t you think?
Try 20 * BILLION; it will overflow if not careful.
I'd normally write '20 billion' outside of C, since I use such
numbers, with lots of zeros, constantly when writing test code.
But when it isn't all zeros, or the base isn't 10, then numeric
separators are better.
Is not it "20 milliards" in British English?
On 07/04/2025 19:12, Michael S wrote:
On Mon, 7 Apr 2025 19:02:34 +0100
bart <bc@freeuk.com> wrote:
On 04/04/2025 04:01, Lawrence D'Oliveiro wrote:
On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:
Here, tell me at a glance the magnitude of
this number:
10000000000
#define THOUSAND 1000
#define MILLION (THOUSAND * THOUSAND)
#define BILLION (THOUSAND * MILLION)
uint64 num = 10 * BILLION;
Much easier to figure out, don’t you think?
Try 20 * BILLION; it will overflow if not careful.
(Actually both 10/20 billion will overflow u32; I was thinking of 20
billion billion overflowing u64.)
I'd normally write '20 billion' outside of C, since I use such
numbers, with lots of zeros, constantly when writing test code.
But when it isn't all zeros, or the base isn't 10, then numeric
separators are better.
Is not it "20 milliards" in British English?
We (UK) now use 'billion' for 1E9; in the past it meant 1E12.
'Milliardo' is Italian for 'billion'; perhaps in a few other languages too.
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
antispam@fricas.org (Waldek Hebisch) writes:
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:[...]
Not always practical. A good example is the type size_t. If a
function takes an argument of type size_t, then the symbol size_t
should be defined, no matter which header the function is being
declared in.
Why? One can use a type without a name for such type.
Convenience and existing practice. Sure, an implementation of
<string.h> could provide a declaration of memcpy() without making
size_t visible, but what would be the point?
Cleanliness of definitions? Consistency? The fragment that you
replaced by [...] contained a proposal:
Every identifier should be declared in exactly one home header,
and no other header should provide that definition.
That would be a pretty clean and consistent rule: if you need some
standard symbol, then you should include the corresponding header.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 03:01 this Friday (GMT):
On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:
Here, tell me at a glance the magnitude of
this number:
10000000000
#define THOUSAND 1000
#define MILLION (THOUSAND * THOUSAND)
#define BILLION (THOUSAND * MILLION)
uint64 num = 10 * BILLION;
Much easier to figure out, don’t you think?
I used to do a bit of code for a codebase that did that with SECONDS and MINUTES since (almost) every "time" variable was in milliseconds, and it
was very nice. That is just my subjective opinion, though. :P
it was more like
#define SECONDS *10
#define MINUTES SECONDS*60
#define HOURS MINUTES*60
, though. Probably would be more notably annoying to debug in weird
cases if the whole language/codebase wasn't borked spaghetti :D
On 07.04.2025 20:18, bart wrote:
On 07/04/2025 19:12, Michael S wrote:
On Mon, 7 Apr 2025 19:02:34 +0100
bart <bc@freeuk.com> wrote:
On 04/04/2025 04:01, Lawrence D'Oliveiro wrote:
On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:
Here, tell me at a glance the magnitude of
this number:
10000000000
#define THOUSAND 1000
#define MILLION (THOUSAND * THOUSAND)
#define BILLION (THOUSAND * MILLION)
uint64 num = 10 * BILLION;
Much easier to figure out, don’t you think?
Try 20 * BILLION; it will overflow if not careful.
(Actually both 10/20 billion will overflow u32; I was thinking of 20 billion billion overflowing u64.)
I'd normally write '20 billion' outside of C, since I use such
numbers, with lots of zeros, constantly when writing test code.
But when it isn't all zeros, or the base isn't 10, then numeric
separators are better.
Is not it "20 milliards" in British English?
We (UK) now use 'billion' for 1E9; in the past it meant 1E12.
'Milliardo' is Italian for 'billion'; perhaps in a few other
languages too.
"In a few other languages"? - That was not my impression;
and a quick look into Wikipedia seems to support that.
The global map[*] is interesting!
(Read the articles for the details, the historic base, and
especially what's standard in countries, and why the common
standard is in some cases like GB not used primarily today.)
Janis
https://upload.wikimedia.org/wikipedia/commons/7/7c/World_map_of_long_and_short_scales.svg
Green - long scale
Blue - short scale
Turquoise - both, long and short
Yellow - other scales
On 07.04.2025 21:29, Richard Heathfield wrote:
ObC: I am currently roughing out a proposal for the ISO folks to
introduce the 288-bit long long long long long long long long long int,
or universe_t for short, so that programs will be able to keep track of
those 100 tredecimillion atoms. Each universe_t will be able to count
atoms in almost five million observable universes, which should be
enough to be going on with.
Thus artificially restricting the foundational research not only of theoretical physics but also of pure mathematics and philosophy? ;-)
Mind that "640kB is enough" experience! :-)
More seriously; there's already tools and libraries that support
"arbitrary" precision were necessary. Not in an 'int' type, though.
On Mon, 7 Apr 2025 22:14:20 +0200
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
On 07.04.2025 20:18, bart wrote:
On 07/04/2025 19:12, Michael S wrote:
On Mon, 7 Apr 2025 19:02:34 +0100
bart <bc@freeuk.com> wrote:
On 04/04/2025 04:01, Lawrence D'Oliveiro wrote:
On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:
Here, tell me at a glance the magnitude of
this number:
10000000000
#define THOUSAND 1000
#define MILLION (THOUSAND * THOUSAND)
#define BILLION (THOUSAND * MILLION)
uint64 num = 10 * BILLION;
Much easier to figure out, don’t you think?
Try 20 * BILLION; it will overflow if not careful.
(Actually both 10/20 billion will overflow u32; I was thinking of 20
billion billion overflowing u64.)
I'd normally write '20 billion' outside of C, since I use such
numbers, with lots of zeros, constantly when writing test code.
But when it isn't all zeros, or the base isn't 10, then numeric
separators are better.
Is not it "20 milliards" in British English?
We (UK) now use 'billion' for 1E9; in the past it meant 1E12.
'Milliardo' is Italian for 'billion'; perhaps in a few other
languages too.
"In a few other languages"? - That was not my impression;
and a quick look into Wikipedia seems to support that.
The global map[*] is interesting!
(Read the articles for the details, the historic base, and
especially what's standard in countries, and why the common
standard is in some cases like GB not used primarily today.)
Janis
https://upload.wikimedia.org/wikipedia/commons/7/7c/World_map_of_long_and_short_scales.svg
Green - long scale
Blue - short scale
Turquoise - both, long and short
Yellow - other scales
I think that this map misses one important detail - the majority of "blue"
non-English-speaking countries spell 1e9 as milliard/miliard.
I.e. for that specific scale they are aligned with the "green" countries.
If you don't believe me, try Google Translate.
There are quite a lot of programming languages that have
whitespace-separated lists. Most of them have "Algol like" syntax.
On 07.04.2025 22:49, Michael S wrote:...
On Mon, 7 Apr 2025 22:14:20 +0200
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
"In a few other languages"? - That was not my impression;
and a quick look into Wikipedia seems to support that.
The global map[*] is interesting!
(Read the articles for the details, the historic base, and
especially what's standard in countries, and why the common
standard is in some cases like GB not used primarily today.)
Janis
https://upload.wikimedia.org/wikipedia/commons/7/7c/World_map_of_long_and_short_scales.svg
Green - long scale
Blue - short scale
Turquoise - both, long and short
Yellow - other scales
I think that this map misses one important detail - majority of "blue"
non-English-speaking countries spell 1e9 as milliard/miliard.
I.e. for that specific scale they are aligned with "green" countries.
If you don't believe me, try google translate.
I cannot tell whether Google Translate is sufficiently authoritative.
On Fri, 4 Apr 2025 21:08:36 -0400, James Kuyper wrote:
There exist many programs that can compile either C code or C++ code,
depending either upon the extension of the file name or explicit command
line options to determine which language's rules to apply.
But note that the *nix tradition is for the “cc” command to invoke nothing
more than a “driver” program, which processes each input file according to
its extension by spawning additional processes running the actual
file-specific processors. And these processors include the linker, for
combining object files created by the various compilers into an actual
executable (or perhaps a shared library).
On 07/04/2025 21:14, Janis Papanagnou wrote:
On 07.04.2025 20:18, bart wrote:
On 07/04/2025 19:12, Michael S wrote:
On Mon, 7 Apr 2025 19:02:34 +0100
bart <bc@freeuk.com> wrote:
On 04/04/2025 04:01, Lawrence D'Oliveiro wrote:
On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:
Here, tell me at a glance the magnitude of
this number:
10000000000
#define THOUSAND 1000
#define MILLION (THOUSAND * THOUSAND)
#define BILLION (THOUSAND * MILLION)
uint64 num = 10 * BILLION;
Much easier to figure out, don’t you think?
Try 20 * BILLION; it will overflow if not careful.
(Actually both 10/20 billion will overflow u32; I was thinking of 20
billion billion overflowing u64.)
I'd normally write '20 billion' outside of C, since I use such
numbers, with lots of zeros, constantly when writing test code.
But when it isn't all zeros, or the base isn't 10, then numeric
separators are better.
Is not it "20 milliards" in British English?
We (UK) now use 'billion' for 1E9; in the past it meant 1E12.
'Milliardo' is Italian for 'billion'; perhaps in a few other
languages too.
"In a few other languages"? - That was not my impression;
and a quick look into Wikipedia seems to support that.
The global map[*] is interesting!
(Read the articles for the details, the historic base, and
especially what's standard in countries, and why the common
standard is in some cases like GB not used primarily today.)
https://upload.wikimedia.org/wikipedia/commons/7/7c/World_map_of_long_and_short_scales.svg
I'd never heard of short and long scales. The full article is here:
https://en.wikipedia.org/wiki/Long_and_short_scales
I only knew about the old and new meanings of 'billion' in the UK, its
US meaning, and the use of 'milliard' (however it is spelt, since I'm
only familiar with it in speech), in Italian.
(In source code, it would also be useful to use 1e9 or 1e12,
unfortunately those normally yield floating point values. I can't do
much about that in C, but I will see what can be done with my own stuff.)
bart <bc@freeuk.com> writes:
[...]
Since numbers using exponents without also using decimal points are
rare in my code base, I've decided to experiment with numbers like
1e6 being integer constants rather than floats. (This is in my
language.)
language.)
You might want to look at Ada for existing practice.
In C, a constant with either a decimal point or an exponent is floating-point. In Ada, 1.0e6 is floating-point and 1e6 is an
integer. Of course this isn't very helpful if you want to represent
numbers with a lot of non-zero digits; for that, you need digit
separators.
[...]
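(A hedged C sketch of the distinction Keith describes; in C any constant
with an exponent or a decimal point is floating, so exact integer values
must be spelled out or converted:)

double    d = 1e9;              /* exponent => floating constant, type double */
long long a = 1000000000;       /* integer constant */
long long b = (long long)1e9;   /* exact here, since 1e9 is exactly
                                   representable in a double */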
On 4/3/25 18:00, Waldek Hebisch wrote:
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:...
Not always practical. A good example is the type size_t. If a
function takes an argument of type size_t, then the symbol size_t
should be defined, no matter which header the function is being
declared in.
Why? One can use a type without a name for such type.
How would you declare a pointer to a function type such that it is
compatible with such a function's type?
When a variable is needed to store a value that would be passed as the
size_t argument to such a function, I would (in the absence of any
specific reason to do otherwise) want to declare that object to have the
type size_t.
Why should I have to #include a different header just because I want to
do these things?
On 07/04/2025 19:12, Michael S wrote:
On Mon, 7 Apr 2025 19:02:34 +0100
bart <bc@freeuk.com> wrote:
On 04/04/2025 04:01, Lawrence D'Oliveiro wrote:
On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:
Here, tell me at a glance the magnitude of
this number:
10000000000
#define THOUSAND 1000
#define MILLION (THOUSAND * THOUSAND)
#define BILLION (THOUSAND * MILLION)
uint64 num = 10 * BILLION;
Much easier to figure out, don’t you think?
Try 20 * BILLION; it will overflow if not careful.
I'd normally write '20 billion' outside of C, since I use such
numbers, with lots of zeros, constantly when writing test code.
But when it isn't all zeros, or the base isn't 10, then numeric
separators are better.
Is not it "20 milliards" in British English?
Yes. The British use
1 - one
10 - ten
100 - hundred
1 000 - thousand
10 000 - myriad
100 000 - pool
1 000 000 - million
1 000 000 000 - milliard
1 000 000 000 000 - billion
1 000 000 000 000 000 - billiard
1 000 000 000 000 000 000 - trillion
1 000 000 000 000 000 000 000 - trilliard
1 000 000 000 000 000 000 000 000 - snooker
except for journalists, politicians, stockbrokers, and anyone else who
spends far too much time talking to Americans.
The biggest number you're likely to need in the real world is 100 tredecimillion, which is approximately the number of atoms in the known universe.
ObC: I am currently roughing out a proposal for the ISO folks to
introduce the 288-bit long long long long long long long long long int,
or universe_t for short, so that programs will be able to keep track of
those 100 tredecimillion atoms. Each universe_t will be able to count
atoms in almost five million observable universes, which should be
enough to be going on with.
On 07/04/2025 21:29, Richard Heathfield wrote:
Is not it "20 milliards" in British English?
Yes. The British use
1 - one
10 - ten
100 - hundred
1 000 - thousand
10 000 - myriad
100 000 - pool
1 000 000 - million
1 000 000 000 - milliard
except for journalists, politicians, stockbrokers, and anyone else who
spends far too much time talking to Americans.
"myriad" means 10,000, coming directly from the Greek. But the
word is usually used to mean "a great many" or "more than you can
count". (It's like the use of "40" in the Bible - I guess the
ancient Greeks were better at counting than the ancient Canaanites.)
You are unlikely to find the word "myriad" meaning specifically
10,000 outside of translated Classical Greek or Latin literature,
or in old military history contexts.
I have not heard of the word "pool" meaning 100,000. But then, I
am not as old as Richard :-)
In India and other parts of Asia, 100,000 has a specific name
such as "lakh" - written as 1,00,000 (it's not just the digit
separator that varies between countries, but also where the
separators are placed).
The UK officially (as a government standard) used the "long
scale" (billion = 10 ^ 12) until 1974. Unofficially, it was
still sometimes used long afterwards - equally, the "short scale"
(billion = 10 ^ 9) was often used long before that. So the short
scale is the norm in the UK now (except for politicians talking
about national debt - "billions" doesn't sound as bad as
"trillions"), but Richard may have learned the long scale at school.
On Tue, 8 Apr 2025 10:29:13 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 07/04/2025 21:29, Richard Heathfield wrote:
Is not it "20 milliards" in British English?
Yes. The British use
No we don't.
1 - one
10 - ten
100 - hundred
1 000 - thousand
10 000 - myriad
100 000 - pool
1 000 000 - million
1 000 000 000 - milliard
Is this a late April fool?
Absolutely no one in Britain says myriad for 10K, pool (wtf?) for 100K,
or milliard, apart from maybe a history of science professor, and you'd
probably be hard pressed to find many people who'd even heard of them in
that context. The only reason I knew milliard is because I can speak
(sort of) French and that's the French billion.
except for journalists, politicians, stockbrokers, and anyone else who
spends far too much time talking to Americans.
Pfft. The standard mathematical million-billion-trillion sequence has been
used in the UK since at least when I was at school almost 40 years ago.
Where do you get your information from, The Disney Guide to the UK?
Am 02.04.25 um 11:57 schrieb bart:
* Where are the fixed-width types from stdint.h?
Same as for size_t, etc: They don't exist. Those are not separate types,
just typedefs to some other types. E.g. uint16_t could be typedef'ed to unsigned int.
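(A sketch of what an implementation's <stdint.h> might contain; the
underlying types are the implementation's choice, here assuming a target
with 16-bit short, 32-bit int and 64-bit long long:)

typedef unsigned short     uint16_t;
typedef unsigned int       uint32_t;
typedef unsigned long long uint64_t;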
My first draft did indeed give 'lakh', but in the light of 'billiard'
the totally fabricated 'pool' and 'snooker' had a (very light) touch of potential for humour. For anyone who hasn't heard of humour, it was very
big in the Sixties and to this day still makes occasional appearances
for old times' sake.
On 05/04/2025 18:56, Philipp Klaus Krause wrote:
Am 02.04.25 um 11:57 schrieb bart:
* Where are the fixed-width types from stdint.h?
Same as for size_t, etc: They don't exist. Those are not separate
types, just typedefs to some other types. E.g. uint16_t could be
typedef'ed to unsigned int.
This is the point I made a few weeks back, but others insisted they were
part of C:
Me:
stdint.h et al are just ungainly bolt-ons, not fully supported by the
language.
Keith Thompson:
No, they're fully supported by the language. They've been in the ISO standard since 1999.
This is an exchange posted on 20-Mar-2025 at 19:10 GMT (header shows
'Thu, 20 Mar 2025 12:10:22 -0700')
Clearly, they're not quite as fully supported as short, int etc; they
are usually just aliases. But that needn't stop them being shown on such
a chart.
On 08/04/2025 15:32, bart wrote:
On 05/04/2025 18:56, Philipp Klaus Krause wrote:
Am 02.04.25 um 11:57 schrieb bart:
* Where are the fixed-width types from stdint.h?
Same as for size_t, etc: They don't exist. Those are not separate
types, just typedefs to some other types. E.g. uint16_t could be
typedef'ed to unsigned int.
This is the point I made a few weeks back, but others insisted they
were part of C:
Me:
stdint.h et al are just ungainly bolt-ons, not fully supported by the
language.
Keith Thompson:
No, they're fully supported by the language. They've been in the ISO
standard since 1999.
This is an exchange posted on 20-Mar-2025 at 19:10 GMT (header shows
'Thu, 20 Mar 2025 12:10:22 -0700')
Clearly, they're not quite as fully supported as short, int etc; they
are usually just aliases. But that needn't stop them being shown on
such a chart.
Standard aliases are part of the language standard, and therefore
standard and fully supported parts of the language.
"myriad" means 10,000, coming directly from the Greek. But the word is usually used to mean "a great many" or "more than you can count". [...]
You are unlikely to find the word "myriad" meaning specifically 10,000 outside of translated Classical Greek or Latin literature, or in old
military history contexts.
On 08/04/2025 12:54, Muttley@DastardlyHQ.org wrote:
On Tue, 8 Apr 2025 10:29:13 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 07/04/2025 21:29, Richard Heathfield wrote:
Is not it "20 milliards" in British English?
Yes. The British use
No we don't.
1 - one
10 - ten
100 - hundred
1 000 - thousand
10 000 - myriad
100 000 - pool
1 000 000 - million
1 000 000 000 - milliard
Is this a late april fool?
Absolutely no one in britain says myriad for 10K , pool (wtf?) for
100K or milliard apart from maybe history of science professor and
you'd probably be hard pressed to find many people who'd even heard
of them in that context. The only reason I knew milliard is because
I can speak (sort of) french and thats the french billion.
"myriad" means 10,000, coming directly from the Greek. But the word
is usually used to mean "a great many" or "more than you can count".
(It's like the use of "40" in the Bible - I guess the ancient Greeks
were better at counting than the ancient Canaanites.)
bart <bc@freeuk.com> writes:
On 08/04/2025 15:57, David Brown wrote:
On 08/04/2025 15:32, bart wrote:
[1] Maybe _t names are reserved, but this:
typedef struct {int x,y;} uint64_t;
7.34.15 Integer types <stdint.h>
Typedef names beginning with int or uint and ending with _t are
potentially reserved identifiers and may be added to the types defined
in the <stdint.h> header. Macro names beginning with INT or UINT and
ending with _MAX, _MIN, _WIDTH, or _C are potentially reserved
identifiers and may be added to the macros defined in the <stdint.h>
header.
On 07/04/2025 20:35, James Kuyper wrote:
On 4/3/25 18:00, Waldek Hebisch wrote:
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:...
Not always practical. A good example is the type size_t. If a
function takes an argument of type size_t, then the symbol size_t
should be defined, no matter which header the function is being
declared in.
Why? One can use a type without a name for such type.
How would you declare a pointer to a function type such that it is
compatible with such a function's type?
The C23 "typeof" operator lets you work with the type of a value or expression. So you first have an object or value of type "size_t",
that's all you need. Unfortunately, there are no convenient literal
suffixes that could be used here.
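(A small sketch of the idea, assuming a C23 compiler: sizeof yields a
value of type size_t, so typeof(sizeof 0) denotes that type without
spelling its name, even in a function pointer declaration:)

#include <string.h>

void *(*copy)(void *, const void *, typeof(sizeof 0)) = memcpy;

(Top-level qualifiers such as restrict on parameters are ignored for
function type compatibility, so this pointer is compatible with memcpy.)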
On 4/8/25 07:20, Michael S wrote:
On Tue, 8 Apr 2025 10:54:12 -0000 (UTC)...
The question to which I found no answer by googling is when
Americans themselves decided that billion means 1e9.
I generally find Wikipedia a more useful source than Google for this
kind of information.
The previously referenced Wikipedia article on long scales versus
short scales asserts that "The short scale was never widespread
before its general adoption in the United States. It has been taught
in American schools since the early 1800s"
It cites " Smith, David Eugene (1953) [first published 1925]. History
of Mathematics. Vol. II. Courier Dover Publications. p. 81. ISBN 978-0-486-20430-7." as the source for that claim.
It also says "The first American appearance of the short scale value
of billion as 109 was published in the Greenwood Book of 1729, written anonymously by Prof. Isaac Greenwood of Harvard College.", citing the
same reference. This does not contradict the first statement - it
might have taken 70 years to become widespread from the first time it appeared.
And finally, it says "In the United States, the short scale has been
taught in school since the early 19th century. It is therefore used exclusively"
Citing "Cambridge Dictionaries Online. Cambridge University Press.
Retrieved 21 August 2011." as it's source for that statement.
I think the entity '10000' is still used in some countries in Asia,
of course with their own (non-western) typography.
On 07.04.2025 19:30, candycanearter07 wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 03:01 this Friday (GMT):
On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:
Here, tell me at a glance the magnitude of
this number:
10000000000
#define THOUSAND 1000
#define MILLION (THOUSAND * THOUSAND)
#define BILLION (THOUSAND * MILLION)
uint64 num = 10 * BILLION;
Much easier to figure out, don’t you think?
Yes, where appropriate that's fine.
But that pattern doesn't work for numbers like 299792458 [m/s]
(i.e. in the general case, as opposed to simple scale factors).
And it's also not good for international languages (different
to US American and the like), where "billion" means something
else (namely 10^12, and not 10^9), so that its semantics isn't
unambiguously clear in the first place.
And sometimes you have large numeric literals and don't want
to add such CPP ballast just for readability; especially if
there is (or would be) a standard number grouping for literals
available.
So it's generally a gain to have a grouping syntax available.
I used to do a bit of code for a codebase that did that with SECONDS and
MINUTES since (almost) every "time" variable was in milliseconds, and it
was very nice. That is just my subjective opinion, though. :P
That actually depends on what you do. Milliseconds were (for our
applications) often either not a good enough resolution, or, on
a larger scale, unnecessary and reducing the available range.
Quasi "norming" an integral value to represent a milliseconds unit
I consider especially bad, although not as bad as units of 0.01s
(which I think I have met in JavaScript). I also seem to recall that
MS-DOS had such arbitrary sub-second units, but I'm not quite sure
about that any more.
A better unit is, IMO, a second resolution (which at least is a
basic physical unit) and a separate integer for sub-seconds. (An
older Unix I used supported the not uncommon nanoseconds attribute,
but only milli- and micro-seconds were used; the rest was 0.)
Or have an abstraction layer that hides all implementation details,
so you don't have to care any more about the implementation details
of such "time types".
it was more like
#define SECONDS *10
#define MINUTES SECONDS*60
#define HOURS MINUTES*60
, though. Probably would be more notably annoying to debug in weird
cases if the whole language/codebase wasnt borked spagetti :D
Janis
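(The seconds-plus-subseconds split described above is exactly what ISO
C11's struct timespec provides; a minimal sketch using the standard
timespec_get function:)

#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec ts;
    /* timespec_get returns its base argument (here TIME_UTC) on success */
    if (timespec_get(&ts, TIME_UTC) == TIME_UTC)
        printf("%lld s + %09ld ns\n", (long long)ts.tv_sec, ts.tv_nsec);
    return 0;
}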
On Tue, 8 Apr 2025 13:39:14 +0200
David Brown <david.brown@hesbynett.no> wrote:
On 08/04/2025 12:54, Muttley@DastardlyHQ.org wrote:
On Tue, 8 Apr 2025 10:29:13 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 07/04/2025 21:29, Richard Heathfield wrote:
Is not it "20 milliards" in British English?
Yes. The British use
No we don't.
1 - one
10 - ten
100 - hundred
1 000 - thousand
10 000 - myriad
100 000 - pool
1 000 000 - million
1 000 000 000 - milliard
Is this a late april fool?
Absolutely no one in britain says myriad for 10K , pool (wtf?) for
100K or milliard apart from maybe history of science professor and
you'd probably be hard pressed to find many people who'd even
heard of them in that context. The only reason I knew milliard is
because I can speak (sort of) french and thats the french billion.
"myriad" means 10,000, coming directly from the Greek. But the word
is usually used to mean "a great many" or "more than you can count".
(It's like the use of "40" in the Bible - I guess the ancient Greeks
were better at counting than the ancient Canaanites.)
In the Bible?
Or, maybe, in imprecise translations of the Bible that confuse the
word רבבה, which means 10000, with the remotely similar word ארבעים,
which means 40?
bart <bc@freeuk.com> writes:
Clearly, they're not quite as fully supported as short, int etc; they
are usually just aliases. But that needn't stop them being shown on
such a chart.
Apparently the author of the chart chose to include types that are
defined by the core language, not by the library.
I think that was a
perfectly valid choice. Adding all the types specified in the library
would make the chart far too big and not much more informative.
If you don't like it, make your own chart.
On 08/04/2025 22:46, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
Apparently the author of the chart chose to include types that are
defined by the core language, not by the library.
So here you're finally admitting they are of a different rank.
Actually I can't quite see the purpose of this chart, why it has to be
so complicated (even with bits missing) or who it is for.
bart <bc@freeuk.com> writes:
On 08/04/2025 22:46, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
Apparently the author of the chart chose to include types that are
defined by the core language, not by the library.
So here you're finally admitting they are of a different rank.
The core language and the library are equal in rank, both being
different parts of any implementation of C.
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
James Kuyper <jameskuyper@alumni.caltech.edu> writes:
bart <bc@freeuk.com> writes:
On 08/04/2025 22:46, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
Apparently the author of the chart chose to include types that are
defined by the core language, not by the library.
So here you're finally admitting they are of a different rank.
The core language and the library are equal in rank, both being
different parts of any implementation of C.
This statement isn't exactly right. Some parts of the standard
library are available only in hosted implementations, and not in freestanding implementations.
True. Also, freestanding implementations must support <stddef.h>
and <stdint.h>, among several other headers.
On 08/04/2025 15:57, David Brown wrote:
On 08/04/2025 15:32, bart wrote:
On 05/04/2025 18:56, Philipp Klaus Krause wrote:
Am 02.04.25 um 11:57 schrieb bart:
* Where are the fixed-width types from stdint.h?
Same as for size_t, etc: They don't exist. Those are not separate
types, just typedefs to some other types. E.g. uint16_t could be
typedef'ed to unsigned int.
This is the point I made a few weeks back, but others insisted
they were part of C:
Me:
stdint.h et al are just ungainly bolt-ons, not fully supported by the
language.
Keith Thompson:
No, they're fully supported by the language. They've been in the ISO
standard since 1999.
This is an exchange posted on 20-Mar-2025 at 19:10 GMT (header
shows 'Thu, 20 Mar 2025 12:10:22 -0700')
Clearly, they're not quite as fully supported as short, int etc;
they are usually just aliases. But that needn't stop them being
shown on such a chart.
Standard aliases are part of the language standard, and therefore
standard and fully supported parts of the language.
So, should they have been on that chart?
Differences between 'unsigned long long int' and 'uint64_t' up to C23:

                             uint64_t   unsigned long long int
  Works without header       No         Yes
  Literal suffix             No         Yes (ULL etc)
  Dedicated printf format    No         Yes (%llu)
  Dedicated scanf format     No         Yes (whatever that might be)
  sizeof() might not be 8    No         Maybe
  Reserved word[1]           No         Yes
  Outside lexical scope[2]   No         Yes
  Incompatible with
    unsigned long int        No         Yes
[1] Maybe _t names are reserved, but this:
typedef struct {int x,y;} uint64_t;
compiles cleanly with:
gcc -std=c23 -Wall -Wextra -pedantic
This means that they could legally be used for any user-defined types.
[2] This is possible with uint64_t:
#include <stdint.h>
int main() {
    typedef struct {int x,y;} uint64_t;  /* shadows the <stdint.h> name
                                            in block scope */
}
You can shadow the names from stdint.h.
So I'd dispute they are as fully supported and 'special' as built-in
types.
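(For the literal-suffix and printf-format rows, the library does provide
macro workarounds in <stdint.h> and <inttypes.h>, though not suffixes or
conversion specifiers proper; a small sketch:)

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint64_t n = UINT64_C(10000000000);  /* expands to a suitably suffixed
                                            integer constant */
    printf("%" PRIu64 "\n", n);          /* expands to the matching
                                            conversion specifier */
    return 0;
}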
On 4/8/25 07:39, David Brown wrote:
On 08/04/2025 12:54, Muttley@DastardlyHQ.org wrote:....
Pfft. The standard mathematical million-billion-trillion sequence has
been used in the UK since at least when I was at school almost 40 years
ago.
The UK officially (as a government standard) used the "long scale"
(billion = 10 ^ 12) until 1974.
That was before he started school, so as far as he's concerned, it
doesn't count.
"myriad" means 10,000, coming directly from the Greek. But the word
is usually used to mean "a great many" or "more than you can count".
(It's like the use of "40" in the Bible - I guess the ancient Greeks
were better at counting than the ancient Canaanites.)
=20
In the Bible?
Or, may be, in imprecise translations of the Bible that confuse the=20
word =D7=A8=D7=91=D7=91=D7=94 that means 10000 with remotely similar word = >=D7=90=D7=A8=D7=91=D7=A2=D7=99=D7=9D that means
40 ?
=D7=A8=D7=91=D7=91=D7=94=20
Differences between 'unsigned long long int' and 'uint64_t' up to C23:
uint64_t unsigned long long int
[...]
Literal suffix No Yes (ULL etc)
Dedicated printf format No Yes (%llu)
On Tue, 8 Apr 2025 20:53:45 +0300
Michael S <already5chosen@yahoo.com> wibbled:
"myriad" means 10,000, coming directly from the Greek. But the
word is usually used to mean "a great many" or "more than you can
count". (It's like the use of "40" in the Bible - I guess the
ancient Greeks were better at counting than the ancient
Canaanites.)
=20
In the Bible?
Or, may be, in imprecise translations of the Bible that confuse
the=20 word =D7=A8=D7=91=D7=91=D7=94 that means 10000 with remotely
similar word = =D7=90=D7=A8=D7=91=D7=A2=D7=99=D7=9D that means
40 ?
=D7=A8=D7=91=D7=91=D7=94=20
Any chance of using utf8 rather than whatever the hell encoding this
is.
On Thu, 3 Apr 2025 21:48:40 +0100, bart wrote:
Commas are overwhelmingly used to separate list elements in
programming languages.
Not just separate, but terminate.
All the reasonable languages allow
trailing commas.
On Wed, 9 Apr 2025 09:01:34 -0000 (UTC)
Muttley@DastardlyHQ.org wrote:
On Tue, 8 Apr 2025 20:53:45 +0300
Michael S <already5chosen@yahoo.com> wibbled:
"myriad" means 10,000, coming directly from the Greek. But the
word is usually used to mean "a great many" or "more than you can
count". (It's like the use of "40" in the Bible - I guess the
ancient Greeks were better at counting than the ancient
Canaanites.)
=20
In the Bible?
Or, may be, in imprecise translations of the Bible that confuse
the=20 word =D7=A8=D7=91=D7=91=D7=94 that means 10000 with remotely
similar word = =D7=90=D7=A8=D7=91=D7=A2=D7=99=D7=9D that means
40 ?
=D7=A8=D7=91=D7=91=D7=94=20
Any chance of using utf8 rather than whatever the hell encoding this
is.
It seems UTF-8 is the only option available in the editor of my
newsreader agent. Could it be that your user agent or your usenet
provider is at fault?
On Wed, 9 Apr 2025 12:23:40 +0300
Michael S <already5chosen@yahoo.com> wibbled:
On Wed, 9 Apr 2025 09:01:34 -0000 (UTC)
Muttley@DastardlyHQ.org wrote:
On Tue, 8 Apr 2025 20:53:45 +0300
Michael S <already5chosen@yahoo.com> wibbled:
"myriad" means 10,000, coming directly from the Greek. But the
word is usually used to mean "a great many" or "more than you
can count". (It's like the use of "40" in the Bible - I guess
the ancient Greeks were better at counting than the ancient
Canaanites.)
=20
In the Bible?
Or, may be, in imprecise translations of the Bible that confuse
the=20 word =D7=A8=D7=91=D7=91=D7=94 that means 10000 with
remotely similar word = =D7=90=D7=A8=D7=91=D7=A2=D7=99=D7=9D that
means 40 ?
=D7=A8=D7=91=D7=91=D7=94=20
Any chance of using utf8 rather than whatever the hell encoding
this is.
It seems, UTF8 is the only option available in the editor of my
newsreader agent. Could it be that your user agent or your usenet
provider is at fault?
That's definitely not UTF-8. Seems something is converting it to
quoted-printable encoding.
bart <bc@freeuk.com> writes:
On 08/04/2025 22:46, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
Apparently the author of the chart chose to include types that are
defined by the core language, not by the library.
So here you're finally admitteding they are a different rank.
The core language and the library are equal in rank, both being
different parts of any implementation of C.
Actually I can't quite see the purpose of this chart, why it has to be
so complicated (even with bits missing) or who it is for.
Every category shown on that chart has rules that apply only to
types in that category. The chart is for people who have not yet
memorized the relationships shown, and who need to understand the rules
that apply to each category. That clearly doesn't apply to you, since
understanding the rules would make it more difficult for you to complain
about them.
On Tue, 8 Apr 2025 16:47:07 +0100
bart <bc@freeuk.com> wrote:
On 08/04/2025 15:57, David Brown wrote:
On 08/04/2025 15:32, bart wrote:
On 05/04/2025 18:56, Philipp Klaus Krause wrote:
Am 02.04.25 um 11:57 schrieb bart:
* Where are the fixed-width types from stdint.h?
Same as for size_t, etc: They don't exist. Those are not separate
types, just typedefs to some other types. E.g. uint16_t could be
typedef'ed to unsigned int.
This is the point I made a few weeks back, but others insisted
they were part of C:
Me:
stdint.h et al are just ungainly bolt-ons, not fully supported by the
language.
Keith Thompson:
No, they're fully supported by the language. They've been in the ISO
standard since 1999.
This is an exchange posted on 20-Mar-2025 at 19:10 GMT (header
shows 'Thu, 20 Mar 2025 12:10:22 -0700')
Clearly, they're not quite as fully supported as short, int etc;
they are usually just aliases. But that needn't stop them being
shown on such a chart.
Standard aliases are part of the language standard, and therefore
standard and fully supported parts of the language.
So, should they have been on that chart?
Differences between 'unsigned long long int' and 'uint64_t' up to C23:

                             uint64_t   unsigned long long int
  Works without header       No         Yes
  Literal suffix             No         Yes (ULL etc)
  Dedicated printf format    No         Yes (%llu)
  Dedicated scanf format     No         Yes (whatever that might be)
  sizeof() might not be 8    No         Maybe
I don't think that 'No' above is correct.
Take, for example, ADI SHARC DSPs. Traditionally, sizeof(uint64_t) was
2.
I looked into the latest manual and see that now their compiler has
an option -char-size-8, and with this option sizeof(uint64_t)=8. But
this option is available only for those members of the SHARC family
that have HW support for byte addressing. Even for those, -char-size-8
is not the default.
  Reserved word[1]           No         Yes
  Outside lexical scope[2]   No         Yes
  Incompatible with
    unsigned long int        No         Yes
I don't understand why you say 'No'. AFAIK, on all existing systems
except 64-bit Unixen the answer is 'Yes'.
On 09.04.2025 11:01, Muttley@DastardlyHQ.org wrote:
On Tue, 8 Apr 2025 20:53:45 +0300
Michael S <already5chosen@yahoo.com> wibbled:
=D7=A8=D7=91=D7=91=D7=94=20
Any chance of using utf8 rather than whatever the hell encoding this is.
Could it be that your newsreader garbled that? Probably because
it doesn't expect or knows how to decode UTF-8 encoded Hebrew?
My newsreader can display it and text that contains this line
"רבבה that means 10000 with remotely similar word ארבעים"
is also identified as UTF-8.
On Wed, 9 Apr 2025 13:32:15 +0300
Michael S <already5chosen@yahoo.com> wibbled:
Message headers indicated that it is UTF-8 encoded as quoted printable.
I don't know (and don't want to know) too much about usenet mechanics,
but it seems to me that decent newsreader should decode it back into
UTF-8.
There's nothing in your header that says UTF-8:
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
On Wed, 9 Apr 2025 12:35:00 +0200
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wibbled:
On 09.04.2025 11:01, Muttley@DastardlyHQ.org wrote:
On Tue, 8 Apr 2025 20:53:45 +0300
Michael S <already5chosen@yahoo.com> wibbled:
=D7=A8=D7=91=D7=91=D7=94=20
Any chance of using utf8 rather than whatever the hell encoding
this is.
Could it be that your newsreader garbled that? Probably because
it doesn't expect or knows how to decode UTF-8 encoded Hebrew?
It wasn't UTF-8, it was QP-encoded UTF-8.
My newsreader can display it and text that contains this line
"רבבה that means 10000 with remotely similar word ארבעים"
is also identified as UTF-8.
That displays fine.
On 09.04.2025 13:00, Muttley@DastardlyHQ.org wrote:
On Wed, 9 Apr 2025 13:32:15 +0300
Michael S <already5chosen@yahoo.com> wibbled:
Message headers indicated that it is UTF-8 encoded as quoted printable.
I don't know (and don't want to know) too much about usenet mechanics,
but it seems to me that decent newsreader should decode it back into
UTF-8.
There's nothing in your header that says uft8:
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Just for comparison, here's what I see in that header...
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
On 4/8/25 04:39, David Brown wrote:
On 07/04/2025 20:35, James Kuyper wrote:
On 4/3/25 18:00, Waldek Hebisch wrote:
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:...
Not always practical. A good example is the type size_t. If a
function takes an argument of type size_t, then the symbol size_t
should be defined, no matter which header the function is being
declared in.
Why? One can use a type without a name for such type.
How would you declare a pointer to a function type such that it is
compatible with such a function's type?
The C23 "typeof" operator lets you work with the type of a value or
expression. So you first have an object or value of type "size_t",
that's all you need. Unfortunately, there are no convenient literal
suffixes that could be used here.
I can see how that would work with the return type of a function, but
how would it apply to an argument of a function?
On 08/04/2025 19:53, Michael S wrote:
On Tue, 8 Apr 2025 13:39:14 +0200
David Brown <david.brown@hesbynett.no> wrote:
On 08/04/2025 12:54, Muttley@DastardlyHQ.org wrote:
On Tue, 8 Apr 2025 10:29:13 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 07/04/2025 21:29, Richard Heathfield wrote:
Is not it "20 milliards" in British English?
Yes. The British use
No we don't.
1 - one
10 - ten
100 - hundred
1 000 - thousand
10 000 - myriad
100 000 - pool
1 000 000 - million
1 000 000 000 - milliard
Is this a late april fool?
Absolutely no one in britain says myriad for 10K , pool (wtf?) for
100K or milliard apart from maybe history of science professor and
you'd probably be hard pressed to find many people who'd even
heard of them in that context. The only reason I knew milliard is
because I can speak (sort of) french and thats the french billion.
"myriad" means 10,000, coming directly from the Greek. But the
word is usually used to mean "a great many" or "more than you can
count". (It's like the use of "40" in the Bible - I guess the
ancient Greeks were better at counting than the ancient
Canaanites.)
In the Bible?
Or, may be, in imprecise translations of the Bible that confuse the
word רבבה that means 10000 with remotely similar word ארבעים that means 40 ?
No, I simply mean that the number 40 is used many times in the Bible
to mean "a large number", rather than for a specific number.
Can you give me a few examples of the use of the number 40 in the
meaning "a large number"?
On 09/04/2025 15:56, Michael S wrote:
Can you give me a few examples of the use of the number 40 in the
meaning "a large number"?
I really don't want to go into a religious discussion here. It is the general opinion of academic Biblical scholars that the use of "40" in
the Bible is not trying to give an exact value - merely meaning "lots".
It presumably does not mean "vast numbers", nor "a few" - it is more
akin to "dozens" in colloquial English. I did not intentionally imply
it is used to mean "tens of thousands", if that is what you thought I
was saying.
If you want to discuss it more, feel free to email me - I am interested
in religious history, but it would be even less suitable for comp.lang.c
than etymology!
On 4/9/2025 8:01 AM, David Brown wrote:
On 09/04/2025 11:49, Michael S wrote:
On Fri, 4 Apr 2025 02:57:10 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Thu, 3 Apr 2025 21:48:40 +0100, bart wrote:
Commas are overwhelmingly used to separate list elements in
programming languages.
Not just separate, but terminate.
I disagree. I am in favor of optional trailing commas rather than
mandatory ones.
I am certainly in favour of them for things like initialiser lists and
enum declarations.
...
All the reasonable languages allow
trailing commas.
Are your sure that C Standard does not allow trailing commas?
That is, they are obviously legal in initializer lists.
All compilers that I tried reject trailing comma in function calls.
But is it (rejection) really required by the Standard? I don't know.
Yes. The syntax (in 6.5.2p1) is:
postfix-expression:
    ...
    postfix-expression ( argument-expression-list opt )
    ...
argument-expression-list :
    assignment-expression
    argument-expression-list , assignment-expression
I don't think it is unreasonable to suggest that it might be nice to
allow a trailing comma, at least in variadic function calls, but the
syntax of C does not allow it.
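(For contrast, a minimal sketch of the two places where the C grammar
does accept a trailing comma - initializer lists since C89, and
enumerator lists since C99:)
int a[] = { 1, 2, 3, };                /* initializer list - OK in C89 */
struct pt { int x, y; } p = { 1, 2, }; /* also an initializer list     */
enum color { RED, GREEN, BLUE, };      /* enumerator list - OK in C99  */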
Yeah, pretty much.
It might have also been interesting if C allowed optional named arguments:
int foo(int x=3, int y=4)
{
return x+y;
}
foo() => 7
foo(.y=2) => 5
These would likely follow any fixed arguments (if present), and likely
(for the sake of implementation sanity) named arguments and varargs would be
mutually exclusive (the alternative being that named arguments precede
varargs if both are used).
Well, at least ".y=val" as "y: val" likely wouldn't go over well even if
it is what several other languages with this feature used (well or,
"y=val", which is used in some others).
In the most likely case, the named argument form would be transformed
into the equivalent fixed argument form at compile time.
So: "foo(.y=2)" would be functionally equivalent to "foo(3,2)".
On Fri, 4 Apr 2025 02:57:10 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Thu, 3 Apr 2025 21:48:40 +0100, bart wrote:
Commas are overwhelmingly used to separate list elements in
programming languages.
Not just separate, but terminate.
I disagree. I am in favor of optional trailing commas rather than
mandatory ones.
All the reasonable languages allow
trailing commas.
Are you sure that the C Standard does not allow trailing commas?
That is, they are obviously legal in initializer lists.
All compilers that I tried reject a trailing comma in function calls.
For example
void bar(int);
void foo(void) {
bar(1,);
}
MSVC:
comma.c(3): error C2059: syntax error: ')'
clang:
comma.c:3:9: error: expected expression
3 | bar(1,);
| ^
gcc:
comma.c: In function 'foo':
comma.c:3:9: error: expected expression before ')' token
3 | bar(1,);
| ^
comma.c:3:3: error: too many arguments to function 'bar'
3 | bar(1,);
| ^~~
comma.c:1:6: note: declared here
1 | void bar(int);
| ^~~
But is it (rejection) really required by the Standard? I don't know.
On Tue, 08 Apr 2025 23:12:13 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
James Kuyper <jameskuyper@alumni.caltech.edu> writes:
bart <bc@freeuk.com> writes:
On 08/04/2025 22:46, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
Apparently the author of the chart chose to include types
that are
defined by the core language, not by the library.
So here you're finally admitting they are of a different rank.
The core language and the library are equal in rank, both being
different parts of any implementation of C.
This statement isn't exactly right. Some parts of the standard
library are available only in hosted implementations, and not in
freestanding implementations.
True. Also, freestanding implementations must support <stddef.h>
and <stdint.h>, among several other headers.
Maybe in some formal sense the headers and library routines that are
mandatory for freestanding implementations belong to the same rank as the
core language. But in practice there is an obvious difference: in
the first case, name clashes are avoidable (sometimes with the toothless
threat that they may happen in the future), and in the second case they
are unavoidable.
On Wed, 9 Apr 2025 15:16:41 +0200
David Brown <david.brown@hesbynett.no> wrote:
On 08/04/2025 19:53, Michael S wrote:
On Tue, 8 Apr 2025 13:39:14 +0200
David Brown <david.brown@hesbynett.no> wrote:
On 08/04/2025 12:54, Muttley@DastardlyHQ.org wrote:
On Tue, 8 Apr 2025 10:29:13 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 07/04/2025 21:29, Richard Heathfield wrote:
Isn't it "20 milliards" in British English?
Yes. The British use
No we don't.
1 - one
10 - ten
100 - hundred
1 000 - thousand
10 000 - myriad
100 000 - pool
1 000 000 - million
1 000 000 000 - milliard
Is this a late April fool?
Absolutely no one in Britain says myriad for 10K, pool (wtf?) for
100K or milliard, apart from maybe a history of science professor, and
you'd probably be hard pressed to find many people who'd even
heard of them in that context. The only reason I knew milliard is
because I can speak (sort of) French and that's the French billion.
"myriad" means 10,000, coming directly from the Greek. But the
word is usually used to mean "a great many" or "more than you can
count". (It's like the use of "40" in the Bible - I guess the
ancient Greeks were better at counting than the ancient
Canaanites.)
In the Bible?
Or, maybe, in imprecise translations of the Bible that confuse the
word רבבה, which means 10,000, with the remotely similar word ארבעים,
which means 40?
No, I simply mean that the number 40 is used many times in the Bible
to mean "a large number", rather than for a specific number.
Can you give me a few examples of the use of the number 40 to mean
"a large number"?
The very first appearance of 40 as an individual number (rather than
as part of 840) is in the duration of the rain that caused the Flood (40 days
and 40 nights). I think in this case it was meant literally.
In the drier parts of Mesopotamia even 40 minutes of intense rain can cause a dangerous flood. The same in the Negev desert. After 40 hours of intense continuous rain a very serious flood in low-lying places is pretty much
guaranteed. So, in the opinion of people who live in such areas, 40 days
would be more than sufficient for the Flood of Noah. The author of the
text probably thought that he was overestimating the duration of the rain.
bart <bc@freeuk.com> writes:
On 08/04/2025 22:46, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
Apparently the author of the chart chose to include types that are
defined by the core language, not by the library.
So here you're finally admitting they are of a different rank.
The core language and the library are equal in rank, both being
different parts of any implementation of C.
On 09/04/2025 18:26, BGB wrote:
On 4/9/2025 8:01 AM, David Brown wrote:
On 09/04/2025 11:49, Michael S wrote:
On Fri, 4 Apr 2025 02:57:10 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Thu, 3 Apr 2025 21:48:40 +0100, bart wrote:
Commas are overwhelmingly used to separate list elements in
programming languages.
Not just separate, but terminate.
I disagree. I am in favor of optional trailing commas rather than
mandatory ones.
I am certainly in favour of them for things like initialiser lists
and enum declarations.
...
All the reasonable languages allow
trailing commas.
Are you sure that the C Standard does not allow trailing commas?
That is, they are obviously legal in initializer lists.
All compilers that I tried reject a trailing comma in function calls.
But is it (rejection) really required by the Standard? I don't know.
Yes. The syntax (in 6.5.2p1) is:
postfix-expression:
...
postfix-expression ( argument-expression-list opt )
...
argument-expression-list :
argument-expression
argument-expression-list , argument-expression
I don't think it is unreasonable to suggest that it might be nice to
allow a trailing comma, at least in variadic function calls, but the
syntax of C does not allow it.
Yeah, pretty much.
It might have also been interesting if C allowed optional named
arguments:
int foo(int x=3, int y=4)
{
return x+y;
}
foo() => 7
foo(.y=2) => 5
Likely would be following any fixed arguments (if present), and likely
(for sake of implementation sanity) named arguments and varargs being
mutually exclusive (alternative being that named arguments precede
varargs if both are used).
Well, at least ".y=val" as "y: val" likely wouldn't go over well even
if it is what several other languages with this feature used (well or,
"y=val", which is used in some others).
In the most likely case, the named argument form would be transformed
into the equivalent fixed argument form at compile time.
So: "foo(.y=2)" would be functionally equivalent to "foo(3,2)".
There are all sorts of problems in adding this to C. For example, this
is legal:
void F(int a, float b, char* c);
void F(int c, float a, char* b);
void F(int b, float c, char* a) {}
The sets of parameter names are all different (and that's in the same
file!); which is the official set?
Another is to do with defining default values (essential if named
arguments are to be fully used). First, similar thing to the above:
void F(int a = x + y);
void F(int a = DEFAULT);
Still, the C++ crowd regularly try to figure out how named parameters
could be added to C++. I think they will figure it out eventually.
C++ adds a number of extra complications here that C does not have,
but once they have a decent solution, C could probably adopt it. Let
C++ pave the way on new concepts, and C can copy the bits that suit
once C++ has done the field testing - that's part of the C standard
committee philosophy, and a good way to handle these things.
Michael S <already5chosen@yahoo.com> writes:
On Fri, 4 Apr 2025 02:57:10 -0000 (UTC)
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Thu, 3 Apr 2025 21:48:40 +0100, bart wrote:
Commas are overwhelmingly used to separate list elements in
programming languages.
Not just separate, but terminate.
I disagree. I am in favor of optional trailing commas rather than mandatory ones.
All the reasonable languages allow
trailing commas.
Are you sure that the C Standard does not allow trailing commas?
That is, they are obviously legal in initializer lists.
All compilers that I tried reject a trailing comma in function calls.
For example
void bar(int);
void foo(void) {
bar(1,);
}
MSVC:
comma.c(3): error C2059: syntax error: ')'
clang:
comma.c:3:9: error: expected expression
3 | bar(1,);
| ^
gcc:
comma.c: In function 'foo':
comma.c:3:9: error: expected expression before ')' token
3 | bar(1,);
| ^
comma.c:3:3: error: too many arguments to function 'bar'
3 | bar(1,);
| ^~~
comma.c:1:6: note: declared here
1 | void bar(int);
| ^~~
But is it (rejection) really required by the Standard? I don't
know.
It is required in the sense that it is a syntax error,
and syntax errors require a diagnostic.
Trailing commas in argument lists and/or parameter lists
could be accepted as an extension, even without giving a
diagnostic as I read the C standard, but implementations
are certainly within their rights to reject them.
Michael S <already5chosen@yahoo.com> writes:
On Tue, 08 Apr 2025 23:12:13 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
James Kuyper <jameskuyper@alumni.caltech.edu> writes:
bart <bc@freeuk.com> writes:
On 08/04/2025 22:46, Keith Thompson wrote:
bart <bc@freeuk.com> writes:
Apparently the author of the chart chose to include types
that are
defined by the core language, not by the library.
So here you're finally admitting they are of a different rank.
The core language and the library are equal in rank, both being
different parts of any implementation of C.
This statement isn't exactly right. Some parts of the standard
library are available only in hosted implementations, and not in
freestanding implementations.
True. Also, freestanding implementations must support <stddef.h>
and <stdint.h>, among several other headers.
Maybe in some formal sense the headers and library routines that are
mandatory for freestanding implementations belong to the same rank
as the core language. But in practice there is an obvious
difference: in the first case, name clashes are avoidable
(sometimes with the toothless threat that they may happen in the
future), and in the second case they are unavoidable.
It's hard for me to make sense of this comment. The only
library routines that are required in standard C are those
documented as part of a section for one of the standard headers.
For freestanding implementations in particular, there are only
two names (va_copy and va_end) that might correspond to library
functions, and if they do then the names are reserved for that
purpose. Do you mean to suggest that user code defining either
va_copy or va_end as a symbol with external linkage is
unavoidable? Any user code that does so could be summarily
rejected by the implementation. It's hard to imagine anyone
writing user code wanting to define either of those names as a
symbol with external linkage.
On 09/04/2025 21:11, bart wrote:
The sets of parameter names are all different (and that's in the same
file!); which is the official set?
C has had flexibility here for all sorts of reasons. But if named
parameters were to be added to the language without significant extra
syntax, then this particular issue could be solved in at least two very simple ways. Either say that named parameter syntax can only be used if
On Thu, 10 Apr 2025 09:53:40 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 09/04/2025 21:11, bart wrote:
The sets of parameter names are all different (and that's in the same
file!); which is the official set?
C has had flexibility here for all sorts of reasons. But if named
parameters were to be added to the language without significant extra
syntax, then this particular issue could be solved in at least two very
simple ways. Either say that named parameter syntax can only be used if
Anyone who really wants named parameters at function calls can already do this in C99:
struct st
{
int a;
int b;
int c;
};
void func(struct st s)
{
}
int main()
{
func((struct st){ .a = 1, .b = 2, .c = 3 });
return 0;
}
On 09/04/2025 21:11, bart wrote:
On 09/04/2025 18:26, BGB wrote:
It might have also been interesting if C allowed optional named
arguments:
int foo(int x=3, int y=4)
{
return x+y;
}
foo() => 7
foo(.y=2) => 5
Likely would be following any fixed arguments (if present), and
likely (for sake of implementation sanity) named arguments and
varargs being mutually exclusive (alternative being that named
arguments precede varargs if both are used).
Well, at least ".y=val" as "y: val" likely wouldn't go over well even
if it is what several other languages with this feature used (well
or, "y=val", which is used in some others).
In the most likely case, the named argument form would be transformed
into the equivalent fixed argument form at compile time.
So: "foo(.y=2)" would be functionally equivalent to "foo(3,2)".
There are all sorts of problems in adding this to C. For example, this
is legal:
void F(int a, float b, char* c);
void F(int c, float a, char* b);
void F(int b, float c, char* a) {}
The sets of parameter names are all different (and that's in the same
file!); which is the official set?
C has had flexibility here for all sorts of reasons. But if named parameters were to be added to the language without significant extra
syntax, then this particular issue could be solved in at least two very simple ways. Either say that named parameter syntax can only be used if
all of the function's declarations in the translation unit have
consistent naming, or say that the last declaration in scope is the one used. (My guess would be the latter, with compilers offering
warnings for the former.)
Of course that lets someone declare "void f(int a, int b);" in one file
and "void f(int b, int a);" in a different one - but that does not
noticeably change the kind of mixups already available to the
undisciplined programmer, and it is completely eliminated by the
standard practice of using shared headers for declarations.
Another is to do with defining default values (essential if named
arguments are to be fully used). First, similar thing to the above:
void F(int a = x + y);
void F(int a = DEFAULT);
Default arguments are most certainly not essential to make named
parameters useful.
They /can/ be a nice thing to have, but they are
merely icing on the cake. Still, there is an obvious and C-friendly way
to handle this too - the default values must be constant expressions.
A much clearer issue with a named parameter syntax like this is that something like "foo(b = 1, a = 2);" is already valid in C and means
something significantly different. You'd need a different syntax.
Fundamental matters such as this are best decided early in the design of
a language, rather than bolted on afterwards.
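(For what it's worth, a rough sketch of how far you can get today:
C99 variadic macros can dispatch on argument count and fill in
defaults. The names window_new* and the WN* helpers are invented for
the illustration - this is a workaround, not a language feature:)
#include <stdio.h>

static void window_new3(int w, int h, int flags)
{
    printf("%dx%d flags=%d\n", w, h, flags);
}

/* Pick a helper macro by the number of arguments supplied. */
#define WN_PICK(_1, _2, _3, NAME, ...) NAME
#define window_new(...) WN_PICK(__VA_ARGS__, WN3, WN2, WN1, 0)(__VA_ARGS__)
#define WN1(w)       window_new3((w), 480, 0)  /* default h and flags */
#define WN2(w, h)    window_new3((w), (h), 0)  /* default flags       */
#define WN3(w, h, f) window_new3((w), (h), (f))

int main(void)
{
    window_new(640);         /* prints 640x480 flags=0 */
    window_new(640, 360);    /* prints 640x360 flags=0 */
    window_new(640, 360, 1); /* prints 640x360 flags=1 */
    return 0;
}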
On 10/04/2025 11:07, Muttley@DastardlyHQ.org wrote:
On Thu, 10 Apr 2025 09:53:40 +0200
David Brown <david.brown@hesbynett.no> wibbled:
On 09/04/2025 21:11, bart wrote:
The sets of parameter names are all different (and that's in the same
file!); which is the official set?
C has had flexibility here for all sorts of reasons. But if named
parameters were to be added to the language without significant extra
syntax, then this particular issue could be solved in at least two very
simple ways. Either say that named parameter syntax can only be used if
Anyone who really wants named parameters at function calls can already do this in C99:
struct st
{
int a;
int b;
int c;
};
void func(struct st s)
{
}
int main()
{
func((struct st){ .a = 1, .b = 2, .c = 3 });
return 0;
}
Ha, ha, ha!
Those aren't named parameters. It would be a dreadful solution anyway:
* Each function now needs an accompanying struct
On 10/04/2025 08:53, David Brown wrote:
On 09/04/2025 21:11, bart wrote:
On 09/04/2025 18:26, BGB wrote:
It might have also been interesting if C allowed optional named
arguments:
int foo(int x=3, int y=4)
{
return x+y;
}
foo() => 7
foo(.y=2) => 5
Likely would be following any fixed arguments (if present), and
likely (for sake of implementation sanity) named arguments and
varargs being mutually exclusive (alternative being that named
arguments precede varargs if both are used).
Well, at least ".y=val" as "y: val" likely wouldn't go over well
even if it is what several other languages with this feature used
(well or, "y=val", which is used in some others).
In the most likely case, the named argument form would be
transformed into the equivalent fixed argument form at compile time.
So: "foo(.y=2)" would be functionally equivalent to "foo(3,2)".
There are all sorts of problems in adding this to C. For example,
this is legal:
void F(int a, float b, char* c);
void F(int c, float a, char* b);
void F(int b, float c, char* a) {}
The sets of parameter names are all different (and that's in the same
file!); which is the official set?
C has had flexibility here for all sorts of reasons. But if named
parameters were to be added to the language without significant extra
syntax, then this particular issue could be solved in at least two
very simple ways. Either say that named parameter syntax can only be
used if all of the function's declarations in the translation unit
have consistent naming, or say that the last declaration in scope is
the one used. (My guess would be the latter, with compilers
offering warnings for the former.)
Of course that lets someone declare "void f(int a, int b);" in one
file and "void f(int b, int a);" in a different one - but that does
not noticeably change the kind of mixups already available to the
undisciplined programmer, and it is completely eliminated by the
standard practice of using shared headers for declarations.
Another is to do with defining default values (essential if named
arguments are to be fully used). First, similar thing to the above:
void F(int a = x + y);
void F(int a = DEFAULT);
Default arguments are most certainly not essential to make named
parameters useful.
Then the advantage is minimal. They are useful when there are lots of parameters, where only a few are essential, and the rest are various
options.
They /can/ be a nice thing to have, but they are merely icing on the
cake. Still, there is an obvious and C-friendly way to handle this
too - the default values must be constant expressions.
Well, the most common default value is 0. But do you mean actual
literals, or can you use macro or enum names?
Because it is those name resolutions that are the problem, not whether
the result is a compile-time constant expression.
A much clearer issue with a named parameter syntax like this is that
something like "foo(b = 1, a = 2);" is already valid in C and means
something significantly different. You'd need a different syntax.
Not really; the above is inside a formal parameter list, where '=' has
no special meaning.
It is in an actual function call where using '=' is troublesome.
Anyway, in C you'd probably use '.a = 10' to align it with struct initialisers, though that's a bit cluttered.
Fundamental matters such as this are best decided early in the design
of a language, rather than bolted on afterwards.
The funny thing is that my MessageBox example is a C function exported
by WinAPI, and I was able to superimpose keyword arguments on top. Since
I have to write my own bindings to such functions anyway.
The MS docs for WinAPI do tend to show function declarations with fully
named parameters, which also seem to be retained in gcc's windows.h (but
not in my cut-down one).
But it would need defaults added to make it
useful:
HWND CreateWindowExA(
[in] DWORD dwExStyle,
[in, optional] LPCSTR lpClassName,
[in, optional] LPCSTR lpWindowName,
[in] DWORD dwStyle,
[in] int X,
[in] int Y,
[in] int nWidth,
[in] int nHeight,
[in, optional] HWND hWndParent,
[in, optional] HMENU hMenu,
[in, optional] HINSTANCE hInstance,
[in, optional] LPVOID lpParam
);
On 10/04/2025 13:42, bart wrote:
On 10/04/2025 08:53, David Brown wrote:
On 09/04/2025 21:11, bart wrote:
On 09/04/2025 18:26, BGB wrote:
It might have also been interesting if C allowed optional named
arguments:
int foo(int x=3, int y=4)
{
return x+y;
}
foo() => 7
foo(.y=2) => 5
Likely would be following any fixed arguments (if present), and
likely (for sake of implementation sanity) named arguments and
varargs being mutually exclusive (alternative being that named
arguments precede varargs if both are used).
Well, at least ".y=val" as "y: val" likely wouldn't go over well
even if it is what several other languages with this feature used
(well or, "y=val", which is used in some others).
In the most likely case, the named argument form would be
transformed into the equivalent fixed argument form at compile time.
So: "foo(.y=2)" would be functionally equivalent to "foo(3,2)".
There are all sorts of problems in adding this to C. For example,
this is legal:
void F(int a, float b, char* c);
void F(int c, float a, char* b);
void F(int b, float c, char* a) {}
The sets of parameter names are all different (and that's in the
same file!); which is the official set?
C has had flexibility here for all sorts of reasons. But if named
parameters were to be added to the language without significant extra
syntax, then this particular issue could be solved in at least two
very simple ways. Either say that named parameter syntax can only be
used if all of the function's declarations in the translation unit
have consistent naming, or say that the last declaration in scope is
the one used. (My guess would be that the later, with compilers
offering warnings about the former.)
Of course that lets someone declare "void f(int a, int b);" in one
file and "void f(int b, int a);" in a different one - but that does
not noticeably change the kind of mixups already available to the
undisciplined programmer, and it is completely eliminated by the
standard practice of using shared headers for declarations.
Another is to do with defining default values (essential if named
arguments are to be fully used). First, similar thing to the above:
void F(int a = x + y);
void F(int a = DEFAULT);
Default arguments are most certainly not essential to make named
parameters useful.
Then the advantage is minimal. They are useful when there are lots of
parameters, where only a few are essential, and the rest are various
options.
That is one use-case, yes. Generally, functions with large numbers of parameters are frowned upon anyway - there are typically better ways to handle such things.
Where named parameters shine is when you have a few parameters that have
the same type. "void control_leds(bool red, bool green, bool blue);".
There are a variety of ways you can make a function like this in a
clearer or safer way in C, but it requires a fair amount of extra
boilerplate code (to define enum types for simple clarity, or struct
types for greater safety). Named parameters would make such functions
safer and clearer in a simple way.
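(As a concrete sketch of the struct-based workaround mentioned above -
call-site names today, at the price of boilerplate; led_state is
invented for the example:)
#include <stdbool.h>
#include <stdio.h>

struct led_state { bool red, green, blue; };

static void control_leds(struct led_state s)
{
    printf("r=%d g=%d b=%d\n", s.red, s.green, s.blue);
}

int main(void)
{
    /* Named fields at the call site; omitted fields default to false. */
    control_leds((struct led_state){ .red = true, .blue = true });
    return 0;
}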
They /can/ be a nice thing to have, but they are merely icing on the
cake. Still, there is an obvious and C-friendly way to handle this
too - the default values must be constant expressions.
Well, the most common default value is 0. But do you mean actual
literals, or can you use macro or enum names?
I mean actual constant expressions, as C defines them. That includes constants (now called "literals" in C23), constant expressions (such as
"2 * 10"), enumeration constants, and constexpr constants (in C23). Basically, things that you could use for initialisation of a variable at
file scope.
Because it is those name resolutions that are the problem, not whether
the result is a compile-time constant expression.
I don't see that at all.
Not really; the above is inside a formal parameter list, where '=' has
no special meaning.
That is exactly the point - "=" has no special meaning inside a function call.
Fundamental matters such as this are best decided early in the design
of a language, rather than bolted on afterwards.
The funny thing is that my MessageBox example is a C function exported
by WinAPI, and I was able to superimpose keyword arguments on top.
Since I have to write my own bindings to such functions anyway.
The MS docs for WinAPI do tend to show function declarations with
fully named parameters, which also seem to be retained in gcc's
windows.h (but not in my cut-down one).
gcc does not have a "windows.h". You are conflating gcc with some
windows packaging of gcc with additional tools, libraries and headers.
But it would need defaults added to make it useful:
Strangely, many people have been able to write code using the MS API
without named parameters or defaults.
Let's not pretend that MS's API's are good examples of clear design!
(And please don't bother picking other non-MS examples that are the same
or worse.)
On 10/04/2025 14:06, David Brown wrote:
On 10/04/2025 13:42, bart wrote:
On 10/04/2025 08:53, David Brown wrote:
On 09/04/2025 21:11, bart wrote:
On 09/04/2025 18:26, BGB wrote:
It might have also been interesting if C allowed optional named
arguments:
int foo(int x=3, int y=4)
{
return x+y;
}
foo() => 7
foo(.y=2) => 5
Likely would be following any fixed arguments (if present), and
likely (for sake of implementation sanity) named arguments and
varargs being mutually exclusive (alternative being that named
arguments precede varargs if both are used).
Well, at least ".y=val" as "y: val" likely wouldn't go over well
even if it is what several other languages with this feature used
(well or, "y=val", which is used in some others).
In the most likely case, the named argument form would be
transformed into the equivalent fixed argument form at compile time.
So: "foo(.y=2)" would be functionally equivalent to "foo(3,2)".
There are all sorts of problems in adding this to C. For example,
this is legal:
void F(int a, float b, char* c);
void F(int c, float a, char* b);
void F(int b, float c, char* a) {}
The sets of parameter names are all different (and that's in the
same file!); which is the official set?
C has had flexibility here for all sorts of reasons. But if named
parameters were to be added to the language without significant
extra syntax, then this particular issue could be solved in at least
two very simple ways. Either say that named parameter syntax can
only be used if all of the function's declarations in the
translation unit have consistent naming, or say that the last
declaration in scope is the one used. (My guess would be the
latter, with compilers offering warnings for the former.)
Of course that lets someone declare "void f(int a, int b);" in one
file and "void f(int b, int a);" in a different one - but that does
not noticeably change the kind of mixups already available to the
undisciplined programmer, and it is completely eliminated by the
standard practice of using shared headers for declarations.
Another is to do with defining default values (essential if named
arguments are to be fully used). First, similar thing to the above:
void F(int a = x + y);
void F(int a = DEFAULT);
Default arguments are most certainly not essential to make named
parameters useful.
Then the advantage is minimal. They are useful when there are lots of
parameters, where only a few are essential, and the rest are various
options.
That is one use-case, yes. Generally, functions with large numbers of
parameters are frowned upon anyway - there are typically better ways
to handle such things.
Where named parameters shine is when you have a few parameters that
have the same type. "void control_leds(bool red, bool green, bool
blue);". There are a variety of ways you can make a function like this
in a clearer or safer way in C, but it requires a fair amount of extra
boilerplate code (to define enum types for simple clarity, or struct
types for greater safety). Named parameters would make such functions
safer and clearer in a simple way.
They /can/ be a nice thing to have, but they are merely icing on the
cake. Still, there is an obvious and C-friendly way to handle this
too - the default values must be constant expressions.
Well, the most common default value is 0. But do you mean actual
literals, or can you use macro or enum names?
I mean actual constant expressions, as C defines them. That includes
constants (now called "literals" in C23), constant expressions (such
as "2 * 10"), enumeration constants, and constexpr constants (in C23).
Basically, things that you could use for initialisation of a variable
at file scope.
Because it is those name resolutions that are the problem, not
whether the result is a compile-time constant expression.
I don't see that at all.
It probably wouldn't be too much of a problem in C, since outside of a function, there is only one scope anyway. But it can be illustrated like this:
enum {x=100};
void F(int a = x);
int main(void) {
enum {x=200};
void F(int a = x);
F();
}
What default value would be used for this call, 100 or 200? Or could
there actually be two possible defaults for the same function?
Declaring functions inside another is uncommon.
But you can do similar
things at file scope with #define and #undef.
Or maybe the default value uses names defined in a header, but a
different translation unit could use a different header, or it might
just have a different expression anyway.
(I would disallow this:
void F(int a, int b = a)
where the default value for 'b' is the parameter 'a'. That would be ill-defined and awkward to implement, plus you could have parameters defaulting to each other.)
Not really; the above is inside a formal parameter list, where '='
has no special meaning.
That is exactly the point - "=" has no special meaning inside a
function call.
But that wasn't a function call! So you can use '=' in a declaration, and perhaps '.' and '=' in a call:
void F(a = 0);
F(.a = 77);
Fundamental matters such as this are best decided early in the
design of a language, rather than bolted on afterwards.
The funny thing is that my MessageBox example is a C function
exported by WinAPI, and I was able to superimpose keyword arguments
on top. Since I have to write my own bindings to such functions anyway.
The MS docs for WinAPI do tend to show function declarations with
fully named parameters, which also seem to be retained in gcc's
windows.h (but not in my cut-down one).
gcc does not have a "windows.h". You are conflating gcc with some
windows packaging of gcc with additional tools, libraries and headers.
Huh? Do you really want to go down that path of analysing exactly what
gcc is and isn't? 'gcc' must be the most famous C compiler on the planet!
Yes, we all know that 'gcc' /now/ stands for 'GNU Compiler Collection' or something, and that it is a driver program for a number of utilities.
But this is a C group which has informally mentioned 'gcc' for decades
across tens of thousands of posts - but you had to bring it up now?
Any viable C compiler that targets Windows, gcc included, needs to
provide windows.h.
But it would need defaults added to make it useful:
Strangely, many people have been able to write code using the MS API
without named parameters or defaults.
Yes, and we know what such code looks like, with long chains of
mysterious arguments, many of which are zeros or NULLs:
hwnd = CreateWindowEx(
0,
szAppName,
"Hello, world!",
WS_OVERLAPPEDWINDOW|WS_VISIBLE,
300,
100,
400,
400,
NULL,
NULL,
0,
NULL);
Even without named arguments - with just default values, allowing only trailing arguments to be omitted - those last 4 arguments could be dropped.
(BTW I swapped those first two NULLs around; I guess you didn't notice!)
Let's not pretend that MS's API's are good examples of clear design!
(And please don't bother picking other non-MS examples that are the
same or worse.)
We all have to use libraries that other people have designed.
I merely wanted to say that it is pretty easy to write legal, if not necessarily sensible, code that uses a variable named 'memcpy' and
a function named 'size_t'. OTOH, you can't name your variable 'break' or 'continue'. Or even 'bool', if you happen to use a C23 compiler.
A trailing comma in an argument list in a function call ...
So, IMHO, if C waits for C++ then it will wait forever.
* Each function now needs an accompanying struct
* The function header does not list the parameter names or types
* When structs are passed by-value as is the case here, it can mean
copying the struct, an extra overhead
* It can also mean constructing an argument list in memory, rather than passing arguments efficiently in registers
On Thu, 10 Apr 2025 11:37:30 +0300, Michael S wrote:
So, IMHO, if C waits for C++ then it will wait forever.
Seems like C is already committed to avoiding incompatibilities with C++,
if the decision on thousands separators in numbers is anything to go by.
A better unit is, IMO, a second resolution (which at least is a basic physical unit) and a separate integer for sub-seconds.
On Wed, 09 Apr 2025 13:52:15 -0700
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
Maybe in some formal sense the headers and library routines that are
mandatory for freestanding implementations belong to the same rank
as the core language. But in practice there is an obvious
difference: in the first case, name clashes are avoidable
(sometimes with the toothless threat that they may happen in the
future), and in the second case they are unavoidable.
It's hard for me to make sense of this comment. The only
library routines that are required in standard C are those
documented as part of a section for one of the standard headers.
For freestanding implementations in particular, there are only
two names (va_copy and va_end) that might correspond to library
functions, and if they do then the names are reserved for that
purpose. Do you mean to suggest that user code defining either
va_copy or va_end as a symbol with external linkage is
unavoidable? Any user code that does so could be summarily
rejected by the implementation. It's hard to imagine anyone
writing user code wanting to define either of those names as a
symbol with external linkage.
I merely wanted to say that it is pretty easy to write legal, if
not necessarily sensible, code that uses a variable named 'memcpy'
and a function named 'size_t'. OTOH, you can't name your variable
'break' or 'continue'. Or even 'bool', if you happen to use a C23
compiler.
My point is that, as far as I'm aware, nobody has implemented
"implicitly include all the standard headers", either as a compiler
option or as a wrapper script. I'm sure somebody has (I could do
it in a few minutes), but it's just not something that programmers
appear to want.
Of course part of the motivation for *not* wanting this is that
it results in non-portable code, and if it were standardized that
wouldn't be an issue.
And if it were standardized, <assert.h> would raise some issues,
since NDEBUG needs to be defined or not defined before including it.
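(A small illustration of why: <assert.h> is deliberately re-includable,
and the meaning of assert() depends on whether NDEBUG is defined at
each point of inclusion:)
#include <assert.h>
void checked(int x)   { assert(x > 0); } /* assertion is active here */

#define NDEBUG
#include <assert.h>   /* legal: <assert.h> may be included repeatedly */
void unchecked(int x) { assert(x > 0); } /* now expands to ((void)0) */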
On Wed, 09 Apr 2025 13:14:55 -0700
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
It is required in the sense that it is a syntax error,
and syntax errors require a diagnostic.
Trailing commas in argument lists and/or parameter lists
could be accepted as an extension, even without giving a
diagnostic as I read the C standard, but implementations
are certainly within their rights to reject them.
I have no doubt that implementations have every right to reject
them. The question was about the possibility of accepting them, and
especially about the possibility of accepting them without diagnostics.
So, it seems, there is no consensus about it among the few posters
who have read the relevant part of the standard.
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
[...]
Trailing commas in argument lists and/or parameter lists
could be accepted as an extension, even without giving a
diagnostic as I read the C standard, but implementations
are certainly within their rights to reject them.
I believe a diagnostic is required.
C17 5.1.1.3:
A conforming implementation shall produce at least one
diagnostic message (identified in an implementation-defined
manner) if a preprocessing translation unit or translation
unit contains a violation of any syntax rule or constraint,
even if the behavior is also explicitly specified as undefined
or implementation-defined.
A trailing comma on an argument or parameter list is a violation
of a syntax rule.
about where they may or may not be used. Do you really have a
problem avoiding identifiers defined in this or that library
header, either for all headers or just those headers required for freestanding implementations?
Michael S <already5chosen@yahoo.com> writes:
On Wed, 09 Apr 2025 13:14:55 -0700
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
[may trailing commas in argument lists be accepted, or
must they be rejected?]
It is required in the sense that it is a syntax error,
and syntax errors require a diagnostic.
Trailing commas in argument lists and/or parameter lists
could be accepted as an extension, even without giving a
diagnostic as I read the C standard, but implementations
are certainly within their rights to reject them.
I have no doubt that implementations have every right to reject
them. The question was about the possibility of accepting them, and
especially about the possibility of accepting them without diagnostics.
So, it seems, there is no consensus about it among the few posters
who have read the relevant part of the standard.
I don't think anyone should care about that. If there were any
significant demand for allowing such trailing commas then someone
would implement it, and people would use it even if in some
technical sense it meant that an implementation supporting it
would be nonconforming.
Besides, the opinions of people posting
in comp.lang.c carry zero weight; the only opinions that matter
are those of people on the ISO C committee, and the major compiler
writers, and none of those people bother posting here.
On 03.04.2025 06:06, Tim Rentsch wrote:
Kaz Kylheku <643-408-1753@kylheku.com> writes:
[some symbols are defined in more than one header]
(In my opinion, things would be better if headers were not allowed
to behave as if they include other headers, or provide identifiers
also given in other headers. Not in ISO C, and not in POSIX.
Every identifier should be declared in exactly one home header,
and no other header should provide that definition. [...])
Not always practical. A good example is the type size_t. If a
function takes an argument of type size_t, then the symbol size_t
should be defined, no matter which header the function is being
declared in. Similarly for NULL for any function that has defined
behavior on some cases of arguments that include NULL. No doubt
there are other compelling examples.
I think that all that's said above (by Kaz and you) is basically
correct.
Obviously [to me], 'size_t' and 'NULL' are such fundamental
entities (a standard type and a standard pointer constant)
that they should have been an inherent part of the "C" language,
and not #include'd.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 04:33 this Monday (GMT):
I worked out that an integer of a little over 200 bits is sufficient to
represent the age of the known Universe in units of the Planck interval
(5.39e-44 seconds). Therefore, rounding to something more even, 256 bits
should be more than enough to measure any physically conceivable time down to that resolution.
The problem then becomes storing that size.
On Mon, 7 Apr 2025 21:49:02 +0200, Janis Papanagnou wrote:
A better unit is, IMO, a second resolution (which at least is a basic
physical unit) and a separate integer for sub-seconds.
I worked out that an integer of a little over 200 bits is sufficient to represent the age of the known Universe in units of the Planck interval (5.39e-44 seconds). Therefore, rounding to something more even, 256 bits should be more than enough to measure any physically conceivable time down
to that resolution.
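A quick sanity check of that bit count, taking the age of the universe
as about 13.8 billion years (~3.156e7 seconds per year):

$$\frac{13.8\times10^{9} \times 3.156\times10^{7}\,\mathrm{s}}{5.39\times10^{-44}\,\mathrm{s}} \approx 8.1\times10^{60} \approx 2^{202.3}$$

so 203 bits suffice today, and 256 bits leave generous headroom.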
On 4/14/2025 12:40 PM, candycanearter07 wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 04:33 this Monday (GMT):
I worked out that an integer of a little over 200 bits is sufficient
to represent the age of the known Universe in units of the Planck
interval (5.39e-44 seconds). Therefore, rounding to something more
even, 256 bits should be more than enough to measure any physically
conceivable time down to that resolution.
The problem then becomes storing that size.
More practical is storing the time in microseconds.
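For scale (a back-of-the-envelope check, not from the original post):
a signed 64-bit count of microseconds spans

$$2^{63}\,\mu\mathrm{s} \approx 9.2\times10^{18}\,\mu\mathrm{s} \approx 9.2\times10^{12}\,\mathrm{s} \approx 2.9\times10^{5}\ \text{years}$$

either side of the epoch, which comfortably covers recorded history.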
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Mon, 14 Apr 2025 13:36:07 -0500, BGB wrote:
On 4/14/2025 12:40 PM, candycanearter07 wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 04:33 this Monday
(GMT):
I worked out that an integer of a little over 200 bits is sufficient to represent the age of the known Universe in units of the Planck
interval (5.39e-44 seconds). Therefore, rounding to something more
even, 256 bits should be more than enough to measure any physically
conceivable time down to that resolution.
The problem then becomes storing that size.
More practical is storing the time in microseconds.
Relative to what epoch?
I figured that it would be hard to find an epoch less arbitrary than
the Big Bang ...
Why??
That would not be practical or useful. The timing of the Big Bang is
not known with great precision ...
On 4/14/25 19:41, Lawrence D'Oliveiro wrote:
...
On Mon, 14 Apr 2025 15:56:56 -0700, Keith Thompson wrote:
That would not be practical or useful. The timing of the Big Bang is
not known with great precision ...
Neither is that of some fictional religious entity.
Not true. While his divinity is fictional, there might have been a
person who was the inspiration for those stories. Whether or not he was
real, the stories of his life are only consistent with a very specific
time period ...
Humm... Is the "Big Bang" nothing more than a hyper large and rather
local explosion?
On Mon, 14 Apr 2025 15:56:56 -0700, Keith Thompson wrote:...
That would not be practical or useful. The timing of the Big Bang is
not known with great precision ...
Neither is that of some fictional religious entity.
On 4/14/2025 5:33 PM, Lawrence D'Oliveiro wrote:
I figured that it would be hard to find an epoch less arbitrary than
the Big Bang ...
But, we don't really need it.
If so, could probably extend to 128 bits, maybe go to nanoseconds or picoseconds.
On 2025-04-14, candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 04:33 this Monday (GMT):
I worked out that an integer of a little over 200 bits is sufficient to
represent the age of the known Universe in units of the Planck interval
(5.39e-44 seconds). Therefore, rounding to something more even, 256 bits should be more than enough to measure any physically conceivable time down to that resolution.
The problem then becomes storing that size.
In a twist of verbal irony, his time here is measured by *Plonck* Intervals.
On Mon, 14 Apr 2025 23:25:26 -0400, James Kuyper wrote:
On 4/14/25 19:41, Lawrence D'Oliveiro wrote:
...
On Mon, 14 Apr 2025 15:56:56 -0700, Keith Thompson wrote:
That would not be practical or useful. The timing of the Big Bang is
not known with great precision ...
Neither is that of some fictional religious entity.
Not true. While his divinity is fictional, there might have been a
person who was the inspiration for those stories. Whether or not he was
real, the stories of his life are only consistent with a very specific
time period ...
Unfortunately, whoever threw in references to historical details to try to make the stories seem more plausible didn’t try very hard to keep them consistent.
Remember that there was no “Year 1”. It was a few centuries before somebody decided something like “let’s call this year 615 A.D., and number
backwards and forwards from there”.
On 4/14/2025 10:25 PM, James Kuyper wrote:
On 4/14/25 19:41, Lawrence D'Oliveiro wrote:
On Mon, 14 Apr 2025 15:56:56 -0700, Keith Thompson wrote:...
That would not be practical or useful. The timing of the Big Bang is
not known with great precision ...
Neither is that of some fictional religious entity.
Not true. While his divinity is fictional, there might have been a
person who was the inspiration for those stories. Whether or not he was
real, the stories of his life are only consistent with a very specific
time period, which narrows the time period of his (possibly fictional)
birth to within just a few years. The uncertainty in the timing of the
Big Bang is currently about 59 million years.
He was a real person,
On 4/14/2025 5:33 PM, Lawrence D'Oliveiro wrote:
On Mon, 14 Apr 2025 13:36:07 -0500, BGB wrote:
On 4/14/2025 12:40 PM, candycanearter07 wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 04:33 this Monday (GMT):
I worked out that an integer of a little over 200 bits is sufficient to represent the age of the known Universe in units of the Planck
interval (5.39e-44 seconds). Therefore, rounding to something more
even, 256 bits should be more than enough to measure any physically
conceivable time down to that resolution.
The problem then becomes storing that size.
More practical is storing the time in microseconds.
Relative to what epoch?
Probably still Jan 1 1970...
Humm... Is the "Big Bang" nothing more than a hyper large and rather
local explosion?
On Mon, 14 Apr 2025 18:46:22 -0700, Chris M. Thomasson wrote:
Humm... Is the "Big Bang" nothing more than a hyper large and rather
local explosion?
No, as cosmology is currently understood, it is meaningless to talk
about space or time before the Big Bang.
The Big Bang is the event that
starts both time and space. That makes it very different from any normal explosion. At the moment of the Big Bang, the entire universe was
infinitely small, so literally everything was "local".
On 4/14/2025 11:15 PM, Lawrence D'Oliveiro wrote:
On Mon, 14 Apr 2025 19:43:04 -0500, BGB wrote:
On 4/14/2025 5:33 PM, Lawrence D'Oliveiro wrote:
I figured that it would be hard to find an epoch less arbitrary than
the Big Bang ...
But, we don't really need it.
If so, could probably extend to 128 bits, maybe go to nanoseconds or
picoseconds.
The reason why I chose the Planck interval as the time unit is that
quantum physics says that’s the smallest possible time interval that
makes
any physical sense. So there shouldn’t be any need to measure time more
accurately than that.
Practically, picoseconds are likely the smallest unit of time that
people could practically measure or hope to make much use of.
While femtoseconds exist, given that in that unit of time light can only
travel a very short distance, and likely no practical clock could be
built (for similar reasons), they are not worth bothering with (*).
On 4/15/2025 12:22 PM, David Brown wrote:
I am not saying that the smaller times don't exist, but that there is no point in wasting bits encoding times more accurate than can be used by a computer running at a few GHz, with clock speeds that will likely never exceed a few GHz.
This sets the practical limit mostly in nanosecond territory.
Practically, picoseconds are likely the smallest unit of time that
people could practically measure or hope to make much use of.
Planck units are so small as to be essentially useless for any
practical measurement.
On 4/15/2025 9:08 AM, Scott Lurndal wrote:...
BGB <cr88192@gmail.com> writes:
He was a real person,
On 4/15/25 00:11, Lawrence D'Oliveiro wrote:
Unfortunately, whoever threw in references to historical details to try
to make the stories seem more plausible didn’t try very hard to keep
them consistent.
That's why there's a range of possible dates ...
Remember that there was no “Year 1”. It was a few centuries before
somebody decided something like “let’s call this year 615 A.D., and
number backwards and forwards from there”.
No, Dionysius Exiguus didn't just randomly decide which year it was, he
did his best to determine how many years it had been since the birth of Christ. The method he used to reach that conclusion is unknown ...
He was a real person ...
Then again, it is very well possible that he could reappear again in the not
too distant future, and if so, better not to be on his bad side.
No, as cosmology is currently understood, it is meaningless to talk
about space or time before the Big Bang.
From what I had read, both the Romans and Jewish rabbis had secondary written accounts of him, although in a less positive light, and
lacking the more supernatural elements (and from different vantage
points).
The uncertainty in the timing of January 1, 1970, where 1970 is a
year number in the current almost universally accepted Gregorian
calendar, is essentially zero.
... Same for any other less commonly
used chosen epoch. The fact that the number 1970 is arbitrary
is not a problem for software. In fact it's an advantage, since
there's no uncertainty in the presence of any new information.
On Tue, 15 Apr 2025 10:06:55 -0400, James Kuyper wrote:
On 4/15/25 00:11, Lawrence D'Oliveiro wrote:
Unfortunately, whoever threw in references to historical details to try
to make the stories seem more plausible didn’t try very hard to keep
them consistent.
That's why there's a range of possible dates ...
No, there is no date that fits the claimed historical references.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Tue, 15 Apr 2025 12:29:04 -0500, BGB wrote:
From what I had read, both the Romans and Jewish Rabbi's had secondary
written accounts about him, although in a less positive light, and
lacking in terms of the more supernatural elements (and from different
vantage points).
No point in them writing about someone that didn't exist.
Lots of people existed at that time and place. Doesn’t mean they were
talking about the same person.
THIS IS NOT THE PLACE FOR A RELIGIOUS DEBATE.
Please stop.
On 4/15/2025 5:56 PM, Lawrence D'Oliveiro wrote:
In all likelihood, computers will not get much faster (in terms of clock speeds) than they are already.
On Tue, 15 Apr 2025 00:40:48 -0500, BGB wrote:
Practically, picoseconds are likely the smallest unit of time that
people could practically measure or hope to make much use of.
“10⁻¹² seconds ought to be enough for anybody.”
The lessons of software backward-compatibility baggage teach us that we
need to think a bit beyond present-day technological limitations.
On 4/15/25 18:58, Lawrence D'Oliveiro wrote:
No, there is no date that fits the claimed historical references.
That's the norm, not the exception, for obscure events that
happened that long ago.
[snip]
BGB <cr88192@gmail.com> writes:
On 4/15/2025 9:10 AM, Scott Lurndal wrote:
BGB <cr88192@gmail.com> writes:
On 4/14/2025 5:33 PM, Lawrence D'Oliveiro wrote:
On Mon, 14 Apr 2025 13:36:07 -0500, BGB wrote:
On 4/14/2025 12:40 PM, candycanearter07 wrote:
Relative to what epoch?
Probably still Jan 1 1970...
Technically, it depends on the timezone:
Technically, it does not.
POSIX defines the epoch as follows:
Historically, the origin of UNIX system time was referred to as
"00:00:00 GMT, January 1, 1970". Greenwich Mean Time is actually not
a term acknowledged by the international standards community;
therefore, this term, "Epoch", is used to abbreviate the reference
to the actual standard, Coordinated Universal Time.
The epoch is a specified moment in time. That moment can be
expressed as midnight UTC Jan 1 1970, as 4PM PST Dec 31 1969,
or (time_t)0. GMT/UTC is just a convenient way to specify it.
$ date --date="@0"
Wed Dec 31 16:00:00 PST 1969
Yes, the date command uses the local time zone by default.
Well, and however much error there is from decades' worth of leap
seconds, etc...
Yes, leap seconds are an issue (and would be for any of the proposed alternatives).
But, yeah, better if one had a notion of time that merely measured
absolute seconds since the epoch without any particular ties to the
Earth's rotation or orbit around the sun. Whether or not its "date"
matches exactly with the official Calendar date being secondary.
That's called TAI; it ignores leap seconds. See clock_gettime()
(defined by POSIX, not by ISO C). (Not all systems accurately record
the number of leap seconds, currently 37.)
Most systems don't use TAI for the system clock, because matching civil
time is generally considered more important than counting leap seconds.
[...]
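(A minimal sketch of reading both clocks, assuming a Linux system:
CLOCK_TAI is Linux-specific rather than POSIX, and it only differs from
CLOCK_REALTIME if the kernel's TAI offset has been set, e.g. by an NTP
daemon:)
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec rt;
    clock_gettime(CLOCK_REALTIME, &rt);
#ifdef CLOCK_TAI
    struct timespec tai;
    clock_gettime(CLOCK_TAI, &tai);
    /* With leap seconds configured, this prints 37 (as of 2025). */
    printf("TAI - UTC = %lld s\n", (long long)(tai.tv_sec - rt.tv_sec));
#else
    puts("CLOCK_TAI not available here");
#endif
    return 0;
}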
candycanearter07 <candycanearter07@candycanearter07.nomail.afraid>
writes:
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote at 23:42 this Tuesday (GMT):
[...]
The epoch is a specified moment in time. That moment can be
expressed as midnight UTC Jan 1 1970, as 4PM PST Dec 31 1969,
or (time_t)0. GMT/UTC is just a convenient way to specify it.
You could also be GoLang and use MST January 2 2006 at 3:04:05 PM.
(1/2 03:04:05 PM 2006 GMT-7)
That's not an epoch. It's a reference time used in documentation,
chosen because all the fields have unique values. It means that results
of converting a time to the dozen or so supported layouts can be easily
read.
[...]
Datetime is a nightmare, this is why we use a simple seconds-since-X
system.
Indeed. That makes it a slightly less unpleasant nightmare.
One shorthand is to assume a year is 365.25 days (31557600
seconds), and then base everything else off this (initially
ignoring things like leap-years, etc, just assume that the number
of days per year is fractional).
Then, say, 2629800 seconds per month, ...
For some other calculations, one can assume an integer number of
days (365), just that each day is 0.07% longer.
For date/time calculations, one could then "guess" the date, and
jitter it back/forth as needed until it was consistent with the
calendar math.
Estimate and subtract the year, estimate and subtract the month,
then the day. Then if we have landed on the wrong day, adjust
until it fits.
Not really sure if there was a more standard way to do this.
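(A sketch of that estimate-and-adjust idea, for just the year: guess
with the 365.25-day average, then nudge until the day number fits.
days_to_year is a naive helper invented here, and day counts before
1970 are not handled:)
#include <stdint.h>
#include <stdio.h>

static int is_leap(int64_t y)
{
    return (y % 4 == 0 && y % 100 != 0) || (y % 400 == 0);
}

/* Days from 1970-01-01 to January 1 of year y (y >= 1970 only). */
static int64_t days_to_year(int64_t y)
{
    int64_t d = 0;
    for (int64_t i = 1970; i < y; i++)
        d += is_leap(i) ? 366 : 365;
    return d;
}

static int64_t year_of_day(int64_t days_since_epoch)
{
    /* Initial estimate from the 365.25-day average... */
    int64_t y = 1970 + (int64_t)(days_since_epoch / 365.25);
    /* ...then jitter it until the day lands inside the year. */
    while (days_to_year(y) > days_since_epoch) y--;
    while (days_to_year(y + 1) <= days_since_epoch) y++;
    return y;
}

int main(void)
{
    printf("%lld\n", (long long)year_of_day(20000)); /* day 20000 is in 2024 */
    return 0;
}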
Like the Julian day number, it's
useful for computing the number of days between dates.
On 17/04/2025 00:31, Keith Thompson wrote:
<lots of good stuff snipped>
Like the Julian day number, it's
useful for computing the number of days between dates.
Indeed. <time.h> can do it, of course, but I find it a trifle
clumsy for the purpose.
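(The clumsy <time.h> route, for reference: normalise both broken-down
dates with mktime() and divide the difftime() by 86400, rounding to
absorb any DST offset. Assumes both dates fit in the local time_t:)
#include <stdio.h>
#include <time.h>

static long days_between(struct tm a, struct tm b)
{
    a.tm_isdst = -1; /* let mktime() work out DST */
    b.tm_isdst = -1;
    return (long)(difftime(mktime(&b), mktime(&a)) / 86400.0 + 0.5);
}

int main(void)
{
    struct tm a = { .tm_year = 1970 - 1900, .tm_mon = 0, .tm_mday = 1 };
    struct tm b = { .tm_year = 2025 - 1900, .tm_mon = 0, .tm_mday = 1 };
    printf("%ld days\n", days_between(a, b)); /* prints 20089 days */
    return 0;
}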
Well. Humm... Actually, sometimes I ponder on _if_ the big bang was the result of a star going hyper-nova in our "parent" universe.
If the idea is true then our universe has children of its own? It
creates a sort of infinite fractal cosmic tree in a sense. I don't know
if it's true, but fun to think about... Fair enough?
On 4/15/25 18:56, Keith Thompson wrote:
...
The uncertainty in the timing of January 1, 1970, where 1970 is a
year number in the current almost universally accepted Gregorian
calendar, is essentially zero.
Modern cesium clocks are accurate to about 1 ns/day. That's an effect
large enough that we can measure it, but cannot correct for it. We know
that the clocks disagree with each other, but the closest we can come to correcting for that instability is to average over 450 different clocks;
the average is 10 times more stable than the individual clocks.
Note: the precision of cesium clocks has improved log-linearly since the 1950s. They're 6 orders of magnitude better in 2008 than they were in
1950. Who knows how much longer that will continue to be true?
... Same for any other less commonly
used chosen epoch. The fact that the number 1970 is arbitrary
is not a problem for software. In fact it's an advantage, since
there's no uncertainty in the presence of any new information.
I agree, which is why I identified that epoch as the one I preferred
over both of those.
1) No account is taken of the 11-day shift in September 1752.
One shorthand is to assume a year is 365.25 days (31557600 seconds), and
then base everything else off this ...
(In source code, it would also be useful to use 1e9 or 1e12;
unfortunately those normally yield floating-point values.)
On Mon, 7 Apr 2025 22:46:49 +0100, bart wrote:
(In source code, it would also be useful to use 1e9 or 1e12;
unfortunately those normally yield floating-point values.)
Tried Python:
>>> type(1e9)
<class 'float'>
>>> round(1e9)
1000000000
>>> round(1e12)
1000000000000
However:
>>> round(1e24)
999999999999999983222784
So I tried:
>>> import decimal
>>> decimal.Decimal("1e24")
Decimal('1E+24')
>>> int(decimal.Decimal("1e24"))
1000000000000000000000000
which is more like it.
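(For the C side of this: 1e9 and 1e12 are exactly representable in a
double, so a cast recovers the exact integer, and C23 borrowed digit
separators from C++14. A small sketch:)
#include <stdio.h>

int main(void) {
    long long a = (long long)1e9;    /* exact: 1e9 fits in a double's 53-bit mantissa */
    long long b = 1'000'000'000;     /* C23 digit separator; needs a C23 compiler */
    printf("%lld %lld\n", a, b);     /* 1e24 would NOT work: inexact, and > LLONG_MAX */
    return 0;
}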
On 4/15/2025 12:22 PM, David Brown wrote:
On 15/04/2025 07:40, BGB wrote:
On 4/14/2025 11:15 PM, Lawrence D'Oliveiro wrote:
On Mon, 14 Apr 2025 19:43:04 -0500, BGB wrote:
On 4/14/2025 5:33 PM, Lawrence D'Oliveiro wrote:
I figured that it would be hard to find an epoch less arbitrary than
the Big Bang ...
But, we don't really need it.
If so, could probably extend to 128 bits, maybe go to nanoseconds or
picoseconds.
The reason why I chose the Planck interval as the time unit is that
quantum physics says that’s the smallest possible time interval that
makes any physical sense. So there shouldn’t be any need to measure
time more accurately than that.
Quantum mechanics, the current theory, is not complete. Physicists
are aware of many limitations. So while Planck time is the smallest
meaningful time interval as far as we currently know, and we know of
no reason to suspect that smaller times would be meaningful, it would
be presumptuous to assume that we will never know of smaller time
intervals.
Practically, picoseconds are likely the smallest unit of time that
people could practically measure or hope to make much use of.
The fastest laser pulses so far are timed to 12-attosecond accuracy
- about 100,000 times finer than a picosecond. Some subatomic particle
lifetimes are measured in rontoseconds - 10^-27 seconds.
Picoseconds are certainly fast enough for most people, but certainly
not remotely fast enough for high-speed or high-energy physics.
Physicists have measured times down to a thousand-millionth of a
femtosecond.
While femtoseconds exist, light can only travel a very short distance
in that unit of time, and likely no practical clock could be built
(for similar reasons), so they are not worth bothering with (*).
It is not easy, of course, but not impossible.
I am not saying that the smaller times don't exist, but that there is
no point in wasting bits encoding times more accurate than can be used
by a computer running at a few GHz, with clock speeds that will likely
never exceed a few GHz.
This sets the practical limit mostly in nanosecond territory.
But, for many uses, even nanosecond is overkill. Like, even if a clock
cycle is less than 1 ns, random things like L1 cache misses, etc., will
throw in enough noise to make the lower end of the nanosecond range
effectively unusable.
And, things like context switches are more in the area of around a
microsecond or so. So, the only way one is going to have controlled
delays smaller than this is using delay loops or NOP slides.
But, also not much point in having clock times much smaller than what
the CPU could effectively act on. And, program logic decisions are
unlikely to be able to be much more accurate than around 100ns or so
(say, several hundred clock cycles).
...
You could express time as a 64-bit value in nanoseconds, and it would
roll over in a few centuries (2^63 ns is roughly 292 years).
Meanwhile, a microsecond is big enough for computers to effectively
operate based on them, small enough to be accurate for most real-world
tasks.
On 4/15/2025 5:56 PM, Lawrence D'Oliveiro wrote:
On Tue, 15 Apr 2025 00:40:48 -0500, BGB wrote:
Practically, picoseconds are likely the smallest unit of time that
people could practically measure or hope to make much use of.
“10⁻¹² seconds ought to be enough for anybody.”
The lessons of software backward-compatibility baggage teach us that we
need to think a bit beyond present-day technological limitations.
In all likelihood, computers will not get much faster (in terms of
clock speeds) than they are already.
If things were able to get much faster (without melting), then more
fundamental rethinking would be needed about how things work, as clock
pulses could no longer be used for global synchronization, and (going
further) signals could not be passed through metal wires.
The idea behind writing 1e12 for example was for something that was
compact, quick to type, and easy to grasp. This:
int(decimal.Decimal("1e24"))
seems to lack all of those.
On 16/04/2025 02:53, James Kuyper wrote:
On 4/15/25 18:56, Keith Thompson wrote:
...
The uncertainty in the timing of January 1, 1970, where 1970 is a
year number in the current almost universally accepted Gregorian
calendar, is essentially zero.
Modern cesium clocks are accurate to about 1 ns/day. That's an effect
large enough that we can measure it, but cannot correct for. We know
that the clocks disagree with each other, but the closest we can come to
correcting for that instability is to average over 450 different clocks;
the average is 10 times more stable than the individual clocks.
Note: the precision of cesium clocks has improved log-linearly since the
1950s. They were 6 orders of magnitude better in 2008 than they were in
1950. Who knows how much longer that will continue to be true?
I don't think cesium is still the current standard for the highest
precision atomic clocks.
But anyway, the newest breakthrough is thorium
nuclear clocks, which IIRC are 5 orders of magnitude more stable than
cesium clocks. (And probably 5 orders of magnitude more expensive...)
[...]
On 17.04.2025 17:56, David Brown wrote:
On 16/04/2025 02:53, James Kuyper wrote:
On 4/15/25 18:56, Keith Thompson wrote:
...
The uncertainty in the timing of January 1, 1970, where 1970 is a
year number in the current almost universally accepted Gregorian
calendar, is essentially zero.
Modern cesium clocks are accurate to about 1 ns/day. That's an effect
large enough that we can measure it, but cannot correct for. We know
that the clocks disagree with each other, but the closest we can come to
correcting for that instability is to average over 450 different clocks;
the average is 10 times more stable than the individual clocks.
Note: the precision of cesium clocks has improved log-linearly since the
1950s. They were 6 orders of magnitude better in 2008 than they were in
1950. Who knows how much longer that will continue to be true?
I don't think cesium is still the current standard for the highest
precision atomic clocks.
Well, the "Cesium _fountain_" atomic clocks are still amongst
the most precise and they are in use in the world wide net of
atomic clocks that are interconnected to measure TAI.[*] And
the standard second is _defined_ on Caesium based transitions.
But anyway, the newest breakthrough is thorium
nuclear clocks, which IIRC are 5 orders of magnitude more stable than
cesium clocks. (And probably 5 orders of magnitude more expensive...)
I've not heard of Thorium based clocks. But I've heard of
"optical clocks" that are developed to get more precise and
more stable versions of atomic clock times.
On 19/04/2025 09:46, Janis Papanagnou wrote:
On 17.04.2025 17:56, David Brown wrote:
On 16/04/2025 02:53, James Kuyper wrote:
On 4/15/25 18:56, Keith Thompson wrote:
...
The uncertainty in the timing of January 1, 1970, where 1970 is a
year number in the current almost universally accepted Gregorian
calendar, is essentially zero.
Modern cesium clocks are accurate to about 1 ns/day. That's an
effect large enough that we can measure it, but cannot correct
for. We know that the clocks disagree with each other, but the
closest we can come to correcting for that instability is to
average over 450 different clocks; the average is 10 times more
stable than the individual clocks.
Note: the precision of cesium clocks has improved log-linearly
since the 1950s. They were 6 orders of magnitude better in 2008
than they were in 1950. Who knows how much longer that will
continue to be true?
I don't think cesium is still the current standard for the highest
precision atomic clocks.
Well, the "Cesium _fountain_" atomic clocks are still amongst
the most precise and they are in use in the world wide net of
atomic clocks that are interconnected to measure TAI.[*] And
the standard second is _defined_ on Caesium based transitions.
Caesium fountain clocks are old school, but still used. Rubidium is
popular because it is cheaper, and very high stability atomic clocks
use aluminium or strontium. Caesium is still the basis for the
current definition of the second, but that will change in the next
decade or so as accuracy of timekeeping has moved well beyond the
original caesium standard.
But anyway, the newest breakthrough is thorium
nuclear clocks, which IIRC are 5 orders of magnitude more stable
than cesium clocks. (And probably 5 orders of magnitude more
expensive...)
I've not heard of Thorium based clocks. But I've heard of
"optical clocks" that are developed to get more precise and
more stable versions of atomic clock times.
It was only last year that a good measurement of the resonant
frequencies of the Thorium 229 nucleus was achieved - the science bit
is done, now the engineering bit needs to be finished to get a
practical nuclear clock.
On Sat, 19 Apr 2025 17:15:42 +0200
David Brown <david.brown@hesbynett.no> wrote:
On 19/04/2025 09:46, Janis Papanagnou wrote:
On 17.04.2025 17:56, David Brown wrote:
But anyway, the newest breakthrough is thorium
nuclear clocks, which IIRC are 5 orders of magnitude more stable
than cesium clocks. (And probably 5 orders of magnitude more
expensive...)
I've not heard of Thorium based clocks. But I've heard of
"optical clocks" that are developed to get more precise and
more stable versions of atomic clock times.
It was only last year that a good measurement of the resonant
frequencies of the Thorium 229 nucleus was achieved - the science bit
is done, now the engineering bit needs to be finished to get a
practical nuclear clock.
Record my prediction: it's not going to happen.
David Brown <david.brown@hesbynett.no> writes:
[...]
I don't know enough about Thorium 229 nuclear resonances to be able
to predict one way or the other. Do you have a good reason or
reference for your thoughts here?
Can you PLEASE take this somewhere else? (Or drop it, I don't care.)
Don't read anything into the fact that I replied to one particular participant in the thread.
On 19/04/2025 22:15, Michael S wrote:
On Sat, 19 Apr 2025 17:15:42 +0200
David Brown <david.brown@hesbynett.no> wrote:
On 19/04/2025 09:46, Janis Papanagnou wrote:
On 17.04.2025 17:56, David Brown wrote:
But anyway, the newest breakthrough is thorium
nuclear clocks, which IIRC are 5 orders of magnitude more stable
than cesium clocks. (And probably 5 orders of magnitude more
expensive...)
I've not heard of Thorium based clocks. But I've heard of
"optical clocks" that are developed to get more precise and
more stable versions of atomic clock times.
It was only last year that a good measurement of the resonant
frequencies of the Thorium 229 nucleus was achieved - the science
bit is done, now the engineering bit needs to be finished to get a
practical nuclear clock.
Record my prediction: it's not going to happen.
I don't know enough about Thorium 229 nuclear resonances to be able
to predict one way or the other. Do you have a good reason or
reference for your thoughts here?
On Mon, 21 Apr 2025 14:28:30 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
David Brown <david.brown@hesbynett.no> writes:
[...]
I don't know enough about Thorium 229 nuclear resonances to be able
to predict one way or the other. Do you have a good reason or
reference for your thoughts here?
Can you PLEASE take this somewhere else? (Or drop it, I don't care.)
Don't read anything into the fact that I replied to one particular
participant in the thread.
There are two types of usenet groups:
- groups that suffer from a significant amount of OT discussions
- dead
On Wed, 02 Apr 2025 16:59:59 +1100
Alexis <flexibeast@gmail.com> wrote:
Thought people here might be interested in this image on Jens
Gustedt's blog, which translates section 6.2.5, "Types", of the C23
standard into a graph of inclusions:
https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/
That's a little disappointing.
IMHO, C23 should have added optional types _Binary32, _Binary64,
_Binary128 and _Binary256 that designate their IEEE-754 namesakes.
Plus, a mandatory requirement that if a compiler supports any of the
IEEE-754 binary types then they have to be accessible by the
above-mentioned names.
On Wed, 16 Apr 2025 23:13:58 +0100, Richard Heathfield wrote:
1) No account is taken of the 11-day shift in September 1752.
root@debian10:~ # ncal -s IT 10 1582
October 1582
Su 17 24 31
Mo 1 18 25
Tu 2 19 26
We 3 20 27
Th 4 21 28
Fr 15 22 29
Sa 16 23 30
Michael S <already5chosen@yahoo.com> writes:
On Wed, 02 Apr 2025 16:59:59 +1100
Alexis <flexibeast@gmail.com> wrote:
Thought people here might be interested in this image on Jens
Gustedt's blog, which translates section 6.2.5, "Types", of the C23
standard into a graph of inclusions:
https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/ >>
That's a little disappointing.
IMHO, C23 should have added optional types _Binary32, _Binary64,
_Binary128 and _Binary256 that designate their IEEE-754 namesakes.
Plus, a mandatory requirement that if compiler supports any of
IEEE-754 binary types then they have to be accessible by
above-mentioned names.
I see where you're coming from,
but I disagree with the suggested
addition; it simultaneously does too much and not enough. If
someone wants some capability along these lines, the first step
should be to understand what the underlying need is, and then to
figure out how to accommodate that need. The addition described
above creates more problems than it solves.
[...]
Back in the mainframe days, it was common to use julian dates
as they were both concise (5 BCD digits/20 bits) and sortable.
YYDDD
If time was needed, it was seconds since midnight in a reference
timezone.
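For the curious, the YYDDD form maps directly onto struct tm (a sketch;
tm_yday is 0-based, tm_year counts years since 1900):
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);
    struct tm *t = localtime(&now);
    int yyddd = (t->tm_year % 100) * 1000 + (t->tm_yday + 1);  /* DDD is 1-based */
    printf("%05d\n", yyddd);   /* e.g. 25119 for the 119th day of 2025 */
    return 0;
}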
[ Just noticed this post while catching up in my backlog, so I'm not
sure my questions/comments have already been addressed elsewhere. ]
On 16.04.2025 22:04, Scott Lurndal wrote:
[...]
Back in the mainframe days, it was common to use julian dates
as they were both concise (5 BCD digits/20 bits) and sortable.
YYDDD
If time was needed, it was seconds since midnight in a reference
timezone.
I don't quite understand the rationale behind all that was said above.
"YYDDD" was used without century information? How is that useful?
(I assume it's just the popular laziness that later led to all the
Y2k chaos activities.)
And "seconds since midnight" were used even though Julian Dates
have the day start at high noon (12:00)? [*]
[ Just noticed this post while catching up in my backlog, so I'm not
sure my questions/comments have already been addressed elsewhere. ]
[snip]
On 16.04.2025 22:04, Scott Lurndal wrote:
[...]
Back in the mainframe days, it was common to use julian dates
as they were both concise (5 BCD digits/20 bits) and sortable.
YYDDD
If time was needed, it was seconds since midnight in a reference
timezone.
I don't quite understand the rationale behind all that was said above.
"YYDDD" was used without century information? How is that useful?
(I assume it's just the popular laziness that later led to all the
Y2k chaos activities.)
I believe the current rule for software is to consider "39" the cutoff,
i.e. 39 is considered 2039, and 40 is considered 1940. I agree though:
removing the century is a bad idea for anything that is supposed to be
kept for a length of time.
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
[...]
[*] I recall that e.g. SunOS also had that wrong and assumed start at
midnight. Folks generally don't seem to be aware of that difference.
I don't recall SunOS using any kind of Julian days/dates for anything
at the system level, though some programs might. [...]
On 4/28/25 20:10, Janis Papanagnou wrote:
[...]
And "seconds since midnight" where taken despite the Julian Dates
have a day start at high noon (12:00)? [*]
Strictly speaking, "Julian Day" is the number of days since Jan 01 4713
BCE at Noon (a date that was chosen because it simplifies conversion
between several different ancient calendar systems). It starts at Noon
because it was devised for use by astronomers, who are generally awake
at midnight and asleep at Noon (especially in ancient times).
Informally speaking, "Julian Day" is commonly used to refer to any
system for designating dates that include a "day of year" component, as
does the above example. Most of these start at midnight, not Noon.
There's no use being a purist about this (that would be my preference
too) - the informal meaning is quite common, probably more common than
the "correct" one.
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
[ Just noticed this post while catching up in my backlog, so I'm not
sure my questions/comments have already been addressed elsewhere. ]
On 16.04.2025 22:04, Scott Lurndal wrote:
[...]
Back in the mainframe days, it was common to use julian dates
as they were both concise (5 BCD digits/20 bits) and sortable.
YYDDD
If time was needed, it was seconds since midnight in a reference
timezone.
I don't quite understand the rationale behind all that was said above.
"YYDDD" was used without century information? How is that useful?
(I assume it's just the popular laziness that later led to all the
Y2k chaos activities.)
Yes, it was felt that saving storage (perhaps in the form of columns
on punch cards) was more important than supporting dates after 1999.
One relic of this is the tm_year member of struct tm in <time.h>,
which holds the number of years since 1900. It was (I'm fairly sure)
originally just a 2-digit year number.
And "seconds since midnight" where taken despite the Julian Dates
have a day start at high noon (12:00)? [*]
The Julian day number used by astronomers does start at noon,
specifically at noon, Universal Time, Monday, January 1, 4713 BC
in the proleptic Julian calendar. As I write this, the current
Julian date is 2460794.591939746.
Outside of astronomy, the word Julian is (mis)used for just about
anything that counts days rather than months and days. A date
expressed in the form YYDDD (or YYYYDDD) almost certainly
refers to a calendar day, starting and ending at midnight in some
time zone. See also the tm_yday member of struct tm, which counts
days since January 1 of the specified year.
On 4/29/2025 12:24 AM, James Kuyper wrote:
On 4/29/25 01:10, candycanearter07 wrote:
...
I believe the current rule for software is to consider "39" the cutoff,
i.e. 39 is considered 2039, and 40 is considered 1940. I agree though:
removing the century is a bad idea for anything that is supposed to be
kept for a length of time.
I sincerely doubt that there is any unique current rule for interpreting
two-digit year numbers - just a wide variety of different rules used by
different people for different purposes. That's part of the reason why
it's a bad idea to rely upon such rules.
Could always argue for a compromise, say, 1 signed byte year.
Say: 1872 to 2127, if origin is 2000.
Could also be 2 digits if expressed in hexadecimal.
Or, maybe 1612 BC to 5612 AD if the year were 2 digits in Base 85.
Or, 48 BC to 4048 AD with Base 64.
On 2025-04-07, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
antispam@fricas.org (Waldek Hebisch) writes:
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
[...]
Not always practical. A good example is the type size_t. If a
function takes an argument of type size_t, then the symbol size_t
should be defined, no matter which header the function is being
declared in.
Why? One can use a type without a name for such type.
Convenience and existing practice. Sure, an implementation of
<string.h> could provide a declaration of memcpy() without making
size_t visible, but what would be the point?
There is a point to such a discipline; you get ultra squeaky clean
modules whose header files define only their contribution to
the program, and do not transitively reveal any of the identifiers
from their dependencies.
In large programs, this clean practice can help prevent
clashes.
[...]
Using memcpy as an example, it could be declared as
void *memcpy(void * restrict d, const void * restrict s,
__size_t size);
size_t is not revealed; instead a private type __size_t is used.
To get __size_t, some private header is included, <sys/priv_types.h>
or whatever.
The <stddef.h> header just includes that one and typedefs __size_t to
size_t (if it were to work that way).
A system vendor which provides many API's and has the privilege of
being able to use the __* space could do things like this.
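A sketch of that layering, condensed into one listing (the header names
and the typedef choice are illustrative, not any particular vendor's):
/* <sys/priv_types.h> -- hypothetical private header, reserved __* space */
#ifndef _SYS_PRIV_TYPES_H
#define _SYS_PRIV_TYPES_H
typedef unsigned long __size_t;          /* assumption: matches the ABI */
#endif

/* <string.h> -- declares memcpy without exposing the name size_t */
#include <sys/priv_types.h>
void *memcpy(void *restrict d, const void *restrict s, __size_t n);

/* <stddef.h> -- the one "home header" that publishes the public name */
#include <sys/priv_types.h>
typedef __size_t size_t;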
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
Kaz Kylheku <643-408-1753@kylheku.com> writes:
[some symbols are defined in more than one header]
(In my opinion, things would be better if headers were not allowed
to behave as if they include other headers, or provide identifiers
also given in other headers. Not in ISO C, and not in POSIX.
Every identifier should be declared in exactly one home header,
and no other header should provide that definition. [...])
Not always practical. A good example is the type size_t. If a
function takes an argument of type size_t, then the symbol size_t
should be defined, no matter which header the function is being
declared in.
Why?
One can use a type without a name for such type.
Similarly for NULL for any function that has defined
behavior on some cases of arguments that include NULL.
Why? There are many ways to produce null pointers.
And the fact that
a function has defined behavior for null pointers does not mean
that users will need null pointers.
No doubt
there are other compelling examples.
They do not look compelling at all.
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
antispam@fricas.org (Waldek Hebisch) writes:
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
[...]
Not always practical. A good example is the type size_t. If a
function takes an argument of type size_t, then the symbol size_t
should be defined, no matter which header the function is being
declared in.
Why? One can use a type without a name for such type.
Convenience and existing practice. Sure, an implementation of
<string.h> could provide a declaration of memcpy() without making
size_t visible, but what would be the point?
Cleanliness of definitions? Consistency?
The fragment that you
replaced by [...] contained a proposal:
Every identifier should be declared in exactly one home header,
and no other header should provide that definition.
That would be a pretty clean and consistent rule: if you need some
standard symbol, then you should include the corresponding header.
Tim claimed that this is not practical. Clearly the C standard
changed previous practice about headers,
so existing practice is _not_ a practical problem with adopting
such a proposal.
With the current standard and practice one frequently needs symbols
from several headers,
so "convenience" is also not a practical problem with such a
proposal.
People not interested in a clean name space can
define a private "all.h" which includes all standard C headers
and possibly other things that they need, so for them the overhead
is close to zero.
antispam@fricas.org (Waldek Hebisch) writes:
I believe that's not true, but certainly it is not /clearly/ true.
A lot of time passed between 1978, when K&R was published, and 1989,
when the first C standard was ratified. No doubt the C standard
unified different practices among many existing C implementations,
but it is highly likely that some of them anticipated the rules that
would be ratified in the C standard.
The fact that ncal knows about the 11-day shift ...
Strictly speaking, "Julian Day" is the number of days since Jan 01 4713
BCE at Noon ...
On Tue, 8 Apr 2025 20:53:45 +0300 Michael S <already5chosen@yahoo.com> wibbled:
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
Any chance of using utf8 rather than whatever the hell encoding this is.
On Mon, 28 Apr 2025 07:52:10 +0100, Richard Heathfield wrote:
The fact that ncal knows about the 11-day shift ...
Note the calendar listing I posted had a 10-day shift.
On Mon, 14 Apr 2025 01:59:24 -0700
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Wed, 09 Apr 2025 13:14:55 -0700
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
[may trailing commas in argument lists be accepted, or
must they be rejected?]
It is required in the sense that it is a syntax error,
and syntax errors require a diagnostic.
Trailing commas in argument lists and/or parameter lists
could be accepted as an extension, even without giving a
diagnostic as I read the C standard, but implementations
are certainly within their rights to reject them.
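For concreteness, a sketch of the cases under discussion (f is a
hypothetical function; the commented-out call is the one the grammar
forbids, while the other two trailing commas are explicitly allowed):
int f(int a, int b);

/* int x = f(1, 2,); */     /* syntax error if uncommented: argument lists
                               may not end with a comma */
enum e { A, B, };           /* OK: enumerator lists may end with a comma */
int arr[] = { 1, 2, };      /* OK: initializer lists may end with a comma */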
I have no doubts that implementations have full rights to reject
them. The question was about the possibility of accepting them, and
especially of accepting them without diagnostics.
So, it seems, there is no consensus about it among the few posters
that have read the relevant part of the standard.
I don't think anyone should care about that. If there were any
significant demand for allowing such trailing commas then someone
would implement it, and people would use it even if in some
technical sense it meant that an implementation supporting it
would be nonconforming.
Personally, I'd use this feature if it were standard.
But if it were a non-standard feature supported by both gcc and
clang, I would hesitate.
Besides, the opinions of people posting
in comp.lang.c carry zero weight; the only opinions that matter
are those of people on the ISO C committee, and the major compiler
writers, and none of those people bother posting here.
My impression was that Philipp Klaus Krause, who posts here, if
infrequently, is a member of WG14.
Michael S <already5chosen@yahoo.com> writes:
My impression was that Philipp Klaus Krause, who posts here, if
infrequently, is a member of WG14.
Do you know if he is a member, or is he just an interested
participant?
On Mon, 14 Apr 2025 01:24:49 -0700
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
about where they may or may not be used. Do you really have a
problem avoiding identifiers defined in this or that library
header, either for all headers or just those headers required for
freestanding implementations?
I don't know. In order to know I'd have to include all
standard headers into all of my C files.
But I would guess that for headers required for freestanding
implementations I would have no problems.
On Sun, 27 Apr 2025 12:05:16 -0700
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Wed, 02 Apr 2025 16:59:59 +1100
Alexis <flexibeast@gmail.com> wrote:
Thought people here might be interested in this image on Jens
Gustedt's blog, which translates section 6.2.5, "Types", of the C23
standard into a graph of inclusions:
https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/
That's a little disappointing.
IMHO, C23 should have added optional types _Binary32, _Binary64,
_Binary128 and _Binary256 that designate their IEEE-754 namesakes.
Plus, a mandatory requirement that if compiler supports any of
IEEE-754 binary types then they have to be accessible by
above-mentioned names.
I see where you're coming from,
I suppose you know it because you followed my failed attempt to improve
the speed and cross-platform consistency of gcc's IEEE binary128
arithmetic.
Granted, in this case the absence of a common name for the type was a
much smaller obstacle than the general indifference of the gcc
maintainers.
So, yes, on the "producer" side the problem of the absence of a common
name was annoying but could be regarded as minor.
Apart from being a "producer", quite often I am on the other side,
wearing the hat of a consumer of extended precision types. When in this
role, I feel that the relative weight of inconsistent type names is
rather significant. I'd guess that it is even more significant for
people whose work, unlike mine, is routinely multi-platform. I would
not be surprised if for many of them it ends up as the main reason to
refrain completely from using IEEE binary128 in their software, even
when that causes complications to their work and when the type is
readily available, under different names, on all platforms they care
about.
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
antispam@fricas.org (Waldek Hebisch) writes:
I believe that's not true, but certainly it is not /clearly/ true.
A lot of time passed between 1978, when K&R was published, and
1989, when the first C standard was ratified. No doubt the C
standard unified different practices among many existing C
implementations, but it is highly likely that some of them
anticipated the rules that would be ratified in the C standard.
Looking at SVID 3rd edition (1989), size_t did not yet exist, so in
that particular case, there was no need to implicitly define it in
any header file.
For interfaces that require custom typedefs (for example, stat(2)),
the SVID requires the programmer include <sys/types.h> before
including <sys/stat.h>.
When size_t was added there were existing interfaces where the
argument was changed to require size_t/ssize_t. These interfaces
did not, at the time, require the programmer to include
<sys/types.h> or <stddef.h> in order to use the interface, for
example in the SVID memory(BA_LIB) interface description, the
programmer had been instructed that only <string.h> was required
for the str* functions, and <memory.h> was required for the mem*
functions - but the SVID noted at that time that the latter was
deprecated - the pending ANSI C standard was to require only
<string.h>.
So, when the arguments of memcpy/strcpy were changed from int to
size_t, they couldn't go back and require existing code to include
e.g. <stddef.h> to get size_t; POSIX chose to note in the
interface description that additional typedefs may be visible
when <string.h> is included.
"The <string.h> header shall define NULL and size_t as described
in <stddef.h>."
On Mon, 05 May 2025 16:25:57 -0700
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
My impression was that Philipp Klaus Krause that posts here, if
infrequently, is a member of WG14.
Do you know if he is a member, or is he just an interested
participant?
He appears in the picture named "WG14 members attending the Strasbourg
meeting in-person for the finalization of C23". To me that sounds like
sufficient proof.
scott@slp53.sl.home (Scott Lurndal) writes:
After digging into the history, I have the impression that SVID
was hoping to be a leader in defining standard interfaces (which
Michael S <already5chosen@yahoo.com> writes:
On Mon, 14 Apr 2025 01:24:49 -0700
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
about where they may or may not be used. Do you really have a
problem avoiding identifiers defined in this or that library
header, either for all headers or just those headers required for
freestanding implementations?
I don't know. In order to know I'd have to include all
standard headers into all of my C files
Let me ask the question differently. Have you ever run into an
actual problem due to inadvertent collision with a reserved
identifier?
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Mon, 14 Apr 2025 01:24:49 -0700
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
about where they may or may not be used. Do you really have a
problem avoiding identifiers defined in this or that library
header, either for all headers or just those headers required for
freestanding implementations?
I don't know. In order to know I'd have to include all
standard headers into all of my C files
Let me ask the question differently. Have you ever run into an
actual problem due to inadvertent collision with a reserved
identifier?
Not in my own code. But I remember an old piece of code whose
author apparently thought that 'inline' was a perfect name for an
input line. A few days ago I had trouble compiling with gcc-15
some code which declares its own 'bool' type. The code is supposed
to compile using a wide range of compilers, so I am still looking
for the "best" solution.
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
Michael S <already5chosen@yahoo.com> writes:
On Mon, 14 Apr 2025 01:24:49 -0700
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
about where they may or may not be used. Do you really have a
problem avoiding identifiers defined in this or that library
header, either for all headers or just those headers required for
freestanding implementations?
I don't know. In order to know I'd have to include all
standard headers into all of my C files
Let me ask the question differently. Have you ever run into an
actual problem due to inadvertent collision with a reserved
identifier?
I'm not Michael, but I was once mildly inconvenienced because I
defined a logging function called log(). The solution was trivial:
I changed the name.
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
scott@slp53.sl.home (Scott Lurndal) writes:
[...]
After digging into the history, I have the impression that SVID
was hoping to be a leader in defining standard interfaces (which
SVID was AT&T's attempt to standardize unix interfaces. The last
edition (third) was released in 1989, but earlier versions date
to 1983.
As to who, exactly, first proposed size_t, that I don't recall.
Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
Michael S <already5chosen@yahoo.com> writes:
On Mon, 14 Apr 2025 01:24:49 -0700
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
about where they may or may not be used. Do you really have a
problem avoiding identifiers defined in this or that library
header, either for all headers or just those headers required for
freestanding implementations?
I don't know. In order to know I'd have to include all
standard headers into all of my C files
Let me ask the question differently. Have you ever run into an
actual problem due to inadvertent collision with a reserved
identifier?
I'm not Michael, but I was once mildly inconvenienced because I
defined a logging function called log(). The solution was trivial:
I changed the name.
Yes, I expect I have run into similar situations. What I was
wondering about were problems where either the existence of
the problem or what to do to fix it needed more than a minimal
effort.
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
Michael S <already5chosen@yahoo.com> writes:
On Mon, 14 Apr 2025 01:24:49 -0700
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
about where they may or may not be used. Do you really have a
problem avoiding identifiers defined in this or that library
header, either for all headers or just those headers required for
freestanding implementations?
I don't know. In order to know I'd have to include all
standard headers into all of my C files
Let me ask the question differently. Have you ever run into an
actual problem due to inadvertent collision with a reserved
identifier?
Not in my own code. But I remember an old piece of code whose
author apparently thought that 'inline' is a perfect name for
input line.
A few days ago I had trouble compiling with gcc-15
some code which declares its own 'bool' type. The code is supposed
to compile using a wide range of compilers, so I am still looking
for the "best" solution.
I recall running into issues using variables named 'index'
when porting code to SVR4 when the BSD compatibility layer
was present.
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
Michael S <already5chosen@yahoo.com> writes:
On Mon, 14 Apr 2025 01:24:49 -0700
Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
about where they may or may not be used. Do you really have a
problem avoiding identifiers defined in this or that library
header, either for all headers or just those headers required
for freestanding implementations?
I don't know. In order to know I'd have to include all
standard headers into all of my C files
Let me ask the question differently. Have you ever run into an
actual problem due to inadvertent collision with a reserved
identifier?
I'm not Michael, but I was once mildly inconvenienced because I
defined a logging function called log(). The solution was
trivial: I changed the name.
Yes, I expect I have run into similar situations. What I was
wondering about were problems where either the existence of the
problem or what to do to fix it needed more than a minimal
effort.
I recall running into issues using variables named 'index'
when porting code to SVR4 when the BSD compatibility layer
was present.
https://man.freebsd.org/cgi/man.cgi?query=index&sektion=3
IMHO, a need for a common name for IEEE binary128 has existed for quite
some time. For IEEE binary256 the real need hasn't emerged yet. But it
will emerge in the hopefully near future.
On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:
IMHO, a need for a common name for IEEE binary128 exists for quite some
time. For IEEE binary256 the real need didn't emerge yet. But it will
emerge in the hopefully near future.
A thought: the main advantage of binary types over decimal is supposed to
be speed. Once you get up to larger precisions like that, the speed
advantage becomes less clear, particularly since hardware support doesn’t
seem forthcoming any time soon. There are already variable-precision
decimal floating-point libraries available. And with such calculations, C
no longer offers a great performance advantage over a higher-level
language, so you might as well use the higher-level language.
<https://docs.python.org/3/library/decimal.html>
C
no longer offers a great performance advantage over a higher-level
language, so you might as well use the higher-level language.
On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:
IMHO, a need for a common name for IEEE binary128 exists for quite some
time. For IEEE binary256 the real need didn't emerge yet. But it will
emerge in the hopefully near future.
A thought: the main advantage of binary types over decimal is supposed to
be speed. Once you get up to larger precisions like that, the speed
advantage becomes less clear, particularly since hardware support doesn’t
seem forthcoming any time soon. There are already variable-precision
decimal floating-point libraries available. And with such calculations, C
no longer offers a great performance advantage over a higher-level
language, so you might as well use the higher-level language.
<https://docs.python.org/3/library/decimal.html>
(And at the risk of incurring Richard's wrath, I would suggest
C++ is an even better language choice in such cases.)
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:
IMHO, a need for a common name for IEEE binary128 exists for quite
some time. For IEEE binary256 the real need didn't emerge yet. But
it will emerge in the hopefully near future.
A thought: the main advantage of binary types over decimal is
supposed to be speed. Once you get up to larger precisions like
that, the speed advantage becomes less clear, particularly since
hardware support doesn’t seem forthcoming any time soon. There are
already variable-precision decimal floating-point libraries
available. And with such calculations, C no longer offers a great
performance advantage over a higher-level language, so you might as
well use the higher-level language.
<https://docs.python.org/3/library/decimal.html>
I think there's an implicit assumption that, all else being equal,
decimal is better than binary. That's true in some contexts,
but not in all.
If you're performing calculations on physical quantities, decimal
probably has no particular advantages, and binary is likely to be
more efficient in both time and space.
The advantages of decimal show up if you're formatting a *lot*
of numbers in human-readable form (but nobody has time to read a
billion numbers), or if you're working with money. But for financial
calculations, particularly compound interest, there are likely to
be precise regulations about how to round results. A given decimal
floating-point format might or might not satisfy those regulations.
On Thu, 26 Jun 2025 12:31:32 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:
IMHO, a need for a common name for IEEE binary128 exists for quite
some time. For IEEE binary256 the real need didn't emerge yet. But
it will emerge in the hopefully near future.
A thought: the main advantage of binary types over decimal is
supposed to be speed. Once you get up to larger precisions like
that, the speed advantage becomes less clear, particularly since
hardware support doesn’t seem forthcoming any time soon. There are
already variable-precision decimal floating-point libraries
available. And with such calculations, C no longer offers a great
performance advantage over a higher-level language, so you might as
well use the higher-level language.
<https://docs.python.org/3/library/decimal.html>
I think there's an implicit assumption that, all else being equal,
decimal is better than binary. That's true in some contexts,
but not in all.
My implicit assumption is that other things being equal binary is
better than anything else because it has the lowest variation in ULP to
value ratio.
The fact that other things being equal binary fp also tends to be
faster is a nice secondary advantage. For example, it is easy to
imagine hardware that implements S/360 style hex floating point as fast
or a little faster than binary fp, but the numeric properties of it are
much worse than sane implementations of binary fp.
When working with such (low for me) precisions, dynamic allocation of
memory is a major cost item, frequently more important than calculation.
To avoid this cost one needs stack allocation.
Also, when using a binary underlying representation, decimal rounding
is much more expensive than binary rounding, so with such a
representation the cost of decimal computation is significantly higher.
Floating point computations naturally are approximate. In most cases
exact details of rounding do not matter much.
To put it differently, decimal floating point is a marketing stunt by
IBM.
On 26/06/2025 10:01, Lawrence D'Oliveiro wrote:
C no longer offers a great performance advantage over a higher-level
language, so you might as well use the higher-level language.
Nothing is stopping you, but then comp.lang.c no longer offers you the
facility to discuss your chosen language, so you might as well use the
higher-level language's group.
[...] if C is going to become more suitable for such high-precision
calculations, it might need to become more Python-like.
scott@slp53.sl.home (Scott Lurndal) writes:
[...]
But not all decimal floating point implementations used "hex floating point".
Burroughs medium systems had BCD floating point - one of the advantages
was that it could exactly represent any floating point number that
could be specified with a 100 digit mantissa and a 2 digit exponent.
BCD uses 4 bits to represent values from 0 to 9. That's about 83%
efficient relative to pure binary. (And it still can't represent 1/3.)
[...]
On Thu, 26 Jun 2025 12:51:19 -0000 (UTC), Waldek Hebisch wrote:
When working with such (low for me) precisions, dynamic allocation of
memory is a major cost item, frequently more important than calculation.
To avoid this cost one needs stack allocation.
What you may not realize is that, on current machines, there is about a
100:1 speed difference between accessing CPU registers and accessing
main memory.
Whether that main memory access is doing “stack allocation” or “heap
allocation” is going to make very little difference to this.
Also, when using a binary underlying representation, decimal rounding
is much more expensive than binary rounding, so with such a
representation the cost of decimal computation is significantly higher.
This may take more computation, but if the calculation time is dominated
by memory access time to all those digits, how much difference is that
going to make, really?
Floating point computations naturally are approximate. In most cases
exact details of rounding do not matter much.
It often surprises you when they do. That’s why a handy rule of thumb
is to test your calculation with all four IEEE 754 rounding modes, to
ensure that the variation in the result remains minor. If it doesn’t
... then watch out.
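A minimal sketch of that rule of thumb using <fenv.h> (the computation
here is a stand-in; note that compilers may constant-fold floating-point
work unless the FENV_ACCESS pragma is honored, hence the volatile):
#include <fenv.h>
#include <stdio.h>
#pragma STDC FENV_ACCESS ON

int main(void) {
    const int mode[] = { FE_TONEAREST, FE_UPWARD, FE_DOWNWARD, FE_TOWARDZERO };
    const char *name[] = { "to nearest", "upward", "downward", "toward zero" };
    for (int i = 0; i < 4; i++) {
        fesetround(mode[i]);
        volatile double x = 1.0;         /* volatile: discourage constant folding */
        for (int k = 0; k < 20; k++)
            x = x / 3.0 * 3.0;           /* rounding error accumulates differently */
        printf("%-12s %.17g\n", name[i], x);
    }
    return 0;
}
If the four printed values diverge noticeably, the calculation is
rounding-sensitive and deserves a closer look.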
To put it differently, decimal floating point is a marketing stunt by
IBM.
Not sure IBM has any marketing power left to inflict their own ideas on
the computing industry. Decimal calculations just make sense because the results are less surprising to normal people.
Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Thu, 26 Jun 2025 12:51:19 -0000 (UTC), Waldek Hebisch wrote:
When working with such (low for me) precisions, dynamic allocation of
memory is a major cost item, frequently more important than calculation.
To avoid this cost one needs stack allocation.
What you may not realize is that, on current machines, there is about a
100:1 speed difference between accessing CPU registers and accessing
main memory.
Whether that main memory access is doing “stack allocation” or “heap
allocation” is going to make very little difference to this.
Did you measure things? CPUs have caches, and cache-friendly code
makes a difference. Avoiding dynamic allocation helps; that is
measurable. The rational explanation is that stack-allocated things
do not move and cost close to zero to manage. Moving stuff
leads to cache misses.
Michael S <already5chosen@yahoo.com> writes:
On Thu, 26 Jun 2025 12:31:32 -0700
Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:
IMHO, a need for a common name for IEEE binary128 exists for
quite some time. For IEEE binary256 the real need didn't emerge
yet. But it will emerge in the hopefully near future.
A thought: the main advantage of binary types over decimal is
supposed to be speed. Once you get up to larger precisions like
that, the speed advantage becomes less clear, particularly since
hardware support doesn’t seem forthcoming any time soon. There are
already variable-precision decimal floating-point libraries
available. And with such calculations, C no longer offers a great
performance advantage over a higher-level language, so you might
as well use the higher-level language.
<https://docs.python.org/3/library/decimal.html>
I think there's an implicit assumption that, all else being equal,
decimal is better than binary. That's true in some contexts,
but not in all.
My implicit assumption is that other things being equal binary is
better than anything else because it has the lowest variation in ULP
to value ratio.
The fact that other things being equal binary fp also tends to be
faster is a nice secondary advantage. For example, it is easy to
imagine hardware that implements S/360 style hex floating point as
fast or a little faster than binary fp, but the numeric properties of
it are much worse than sane implementations of binary fp.
But not all decimal floating point implementations used "hex floating
point".
Burroughs medium systems had BCD floating point - one of the
advantages was that it could exactly represent any floating point
number that could be specified with a 100 digit mantissa and a 2
digit exponent.
This was a memory-to-memory architecture, so no floating point
registers to worry about.
For financial calculations, a fixed point format (up to 100 digits)
was used. Using an implicit decimal point, rounding was a matter of
where the implicit decimal point was located in the up to 100 digit
field; so do your calculations in mills and truncate the result field
to the desired precision.
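The same scaled-integer trick is easy in C (a sketch; amounts kept in
mills, i.e. thousandths of a dollar, with explicit rounding to cents):
#include <stdio.h>

int main(void) {
    long long price = 19999;                /* $19.999, stored in mills */
    long long total = price * 3;            /* exact integer arithmetic: 59997 */
    long long cents = (total + 5) / 10;     /* round half up to cents: 6000 */
    printf("$%lld.%02lld\n", cents / 100, cents % 100);   /* $60.00 */
    return 0;
}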
scott@slp53.sl.home (Scott Lurndal) writes:
[...]
But not all decimal floating point implementations used "hex floating point".
Burroughs medium systems had BCD floating point - one of the advantages
was that it could exactly represent any floating point number that
could be specified with a 100 digit mantissa and a 2 digit exponent.
BCD uses 4 bits to represent values from 0 to 9. That's about 83%
efficient relative to pure binary. (And it still can't represent 1/3.)
On Thu, 26 Jun 2025 12:51:19 -0000 (UTC), Waldek Hebisch wrote:
When working with such (low for me) precisions dynamic allocation of
memory is major cost item, frequently more important than calculation.
To avoid this cost one needs stack allocatation.
What you may not realize is that, on current machines, there is about a
100:1 speed difference between accessing CPU registers and accessing
main memory.
On Thu, 26 Jun 2025 21:09:37 GMT
scott@slp53.sl.home (Scott Lurndal) wrote:
[..]
For fixed point, anything "decimal" is even less useful than in floating
point. I can't find any good explanation for the use of "decimal" things
in some early computers except that their designers were, maybe, good
engineers, but 2nd-rate thinkers.
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
On 27.06.2025 02:10, Keith Thompson wrote:
scott@slp53.sl.home (Scott Lurndal) writes:
[...]
But not all decimal floating point implementations used "hex floating point".
Burroughs medium systems had BCD floating point - one of the advantages
was that it could exactly represent any floating point number that
could be specified with a 100 digit mantissa and a 2 digit exponent.
BCD uses 4 bits to represent values from 0 to 9. That's about 83%
efficient relative to pure binary. (And it still can't represent 1/3.)
That's a problem of where your numbers stem from. "1/3" is a formula!
1/3 is also a C expression with the value 0. But what I was
referring to was the real number 1/3, the unique real number that
yields one when multiplied by three.
My point is that any choice of radix in a floating-point format
means that there are going to be some useful real numbers you
can't represent.
That's as true of decimal as it is of binary.
(Trinary can represent 1/3, but can't represent 1/2.)
Decimal can represent any number that can be exactly represented in
binary *if* you have enough digits (because 10 is a multiple of 2),
and many numbers like 0.1 that can't be represented exactly in
binary, but at a cost -- one that is worth paying in some contexts.
(Scaled integers might sometimes be a good alternative.)
I doubt that I'm saying anything you don't already know. I just
wanted to clarify what I meant.
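A two-line illustration (printing with 17 significant digits, enough to
round-trip a double):
#include <stdio.h>

int main(void) {
    printf("%.17g\n", 0.1);   /* 0.10000000000000001: no finite binary expansion */
    printf("%.17g\n", 0.5);   /* 0.5 exactly: a negative power of two */
    return 0;
}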
[ Some technical troubles - in case this post appeared already 30
minutes ago (I don't see it), please ignore this re-sent post. ]
On 28.06.2025 02:56, Keith Thompson wrote:
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
On 27.06.2025 02:10, Keith Thompson wrote:
BCD uses 4 bits to represent values from 0 to 9. That's about 83%
efficient relative to pure binary. (And it still can't represent 1/3.)
That's a problem of where your numbers stem from. "1/3" is a formula!
1/3 is also a C expression with the value 0. But what I was
referring to was the real number 1/3, the unique real number that
yields one when multiplied by three.
Yes, sure. That was also how I interpreted it; that you meant (in
"C" parlance) 1.0/3.0.
On 26/06/2025 10:01, Lawrence D'Oliveiro wrote:
C
no longer offers a great performance advantage over a higher-level
language, so you might as well use the higher-level language.
Nothing is stopping you, but then comp.lang.c no longer offers
you the facility to discuss your chosen language, so you might as
well use the higher-level language's group.
On 2025-06-26, Richard Heathfield <rjh@cpax.org.uk> wrote:
On 26/06/2025 10:01, Lawrence D'Oliveiro wrote:
C
no longer offers a great performance advantage over a higher-level
language, so you might as well use the higher-level language.
Nothing is stopping you, but then comp.lang.c no longer offers
you the facility to discuss your chosen language, so you might as
well use the higher-level language's group.
Even a broken clock is right once or twice in a 24h period.
He did say that this advantage was in the manipulation
of multi-precision integers, like big decimals.
Indeed, most of the time is spent in the math routines themselves,
not in what dispatches them. Calculations written in C, using a
certain bignum library, won't be much faster than the same
calculations in a higher level language, using the same bignum
library.
A higher level language may also have a compiler which does
optimizations on the bignum code, such as CSE and constant folding,
basically treating it the same as fixnum integers.
C code consisting of calls into a bignum library will not be
aggressively optimized. If you wastefully perform a calculation
with constants that could be done at compile time, it almost
certainly won't be.
Example:
(compile-toplevel '(expt 2 150))
#<sys:vm-desc: a103620>
(disassemble *1)
data:
0: 1427247692705959881058285969449495136382746624
syms:
code:
0: 10000400 end d0
instruction count:
1
#<sys:vm-desc: a103620>
The compiled code just retrieves the bignum integer result from
static data register d0. This is just from the compiler finding
"expt" to be in a list of functions that are reducible at compile
time over constant inputs; no special reasoning about large integers.
But if you were to write the C code to initialize a bignum from 2,
and one from 150, and then call the bignum exponentiation routine, I
doubt you'd get the compiler to optimize all that away.
Maybe with a sufficiently advanced link-time optimization ...
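A sketch of the C version being described, using GMP (assuming libgmp is
installed; link with -lgmp). A C compiler will not fold this down to the
constant that the Lisp compiler above produced:
#include <gmp.h>
#include <stdio.h>

int main(void) {
    mpz_t r;
    mpz_init(r);
    mpz_ui_pow_ui(r, 2, 150);   /* r = 2^150, computed at run time */
    gmp_printf("%Zd\n", r);     /* 1427247692705959881058285969449495136382746624 */
    mpz_clear(r);
    return 0;
}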
On Fri, 27 Jun 2025 14:52:42 +0300, Michael S wrote:
For fix point, anything "decimal" is even less useful than in floating
point. I can't find any good explanation for use of "decimal" things in
some early computers except that their designers were, may be, good
engineers, but 2nd rate thinkers.
IEEE-754 now includes decimal floating-point formats in addition to the
older binary ones. I think this was originally a separate spec
(IEEE-854), but it got rolled into the 2008 revision of IEEE-754.
On 2025-06-28 23:03, Janis Papanagnou wrote:
[ Some technical troubles - in case this post appeared already 30
minutes ago (I don't see it), please ignore this re-sent post. ]
On 28.06.2025 02:56, Keith Thompson wrote:
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
On 27.06.2025 02:10, Keith Thompson wrote:
BCD uses 4 bits to represent values from 0 to 9. That's about 83%
efficient relative to pure binary. (And it still can't represent 1/3.)
That's a problem of where your numbers stem from. "1/3" is a formula!
1/3 is also a C expression with the value 0. But what I was
referring to was the real number 1/3, the unique real number that
yields one when multiplied by three.
Yes, sure. That was also how I interpreted it; that you meant (in
"C" parlance) 1.0/3.0.
No, it is very much the point that the C expression 1.0/3.0 cannot have
the value he's talking about [...]
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
On 28.06.2025 02:56, Keith Thompson wrote:
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
On 27.06.2025 02:10, Keith Thompson wrote:
scott@slp53.sl.home (Scott Lurndal) writes:
[...]
Burroughs medium systems had BCD floating point - one of the advantages
was that it could exactly represent any floating point number that
could be specified with a 100 digit mantissa and a 2 digit exponent.
But not all decimal floating point implementations used "hex
floating point".
BCD uses 4 bits to represent values from 0 to 9. That's about 83%
efficient relative to pure binary. (And it still can't represent 1/3.)
That's a problem of where your numbers stem from. "1/3" is a formula!
1/3 is also a C expression with the value 0. But what I was
referring to was the real number 1/3, the unique real number that
yields one when multiplied by three.
Yes, sure. That was also how I interpreted it; that you meant (in
"C" parlance) 1.0/3.0.
As mentioned elsethread, I was referring to the real value.
1.0/3.0 as a C expression yields a value of type double, typically
0.333333333333333314829616256247390992939472198486328125 or
[...]
In numerics you have various places where errors appear in principle
and accumulate. One of the errors arises when numbers are transferred
from (and to) an external representation. Another arises when
performing calculations with imprecisely represented internal numbers.
The point of a decimal encoding is that it addresses the lossless (and
fast[*]) input/output of given [finite] numbers - numbers that have
been (and are) used e.g. in financial contexts (billions of euros and
cents). And you can also perform exact arithmetic in the typical
operations (sum, multiply, subtract)[**] without errors.[***]
Which is convenient only because we happen to use decimal notation
when writing numbers.
[...]
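Where decimal hardware or types are unavailable, the usual exact
alternative in the financial contexts Janis mentions is scaled
integers - a minimal sketch, with hypothetical amounts:

    #include <stdio.h>

    int main(void)
    {
        /* Amounts held as integer cents: input, sums and differences
           are exact, and output reproduces the input losslessly. */
        long long a = 123456;    /* 1234.56 EUR */
        long long b =  78944;    /*  789.44 EUR */
        long long sum = a + b;   /* exactly 2024.00 EUR */
        printf("%lld.%02lld EUR\n", sum / 100, sum % 100);
        return 0;
    }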
On 29.06.2025 05:18, James Kuyper wrote:
On 2025-06-28 23:03, Janis Papanagnou wrote:
[ Some technical troubles - in case this post appeared already 30
minutes ago (I don't see it), please ignore this re-sent post. ]
On 28.06.2025 02:56, Keith Thompson wrote:
Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
On 27.06.2025 02:10, Keith Thompson wrote:
BCD uses 4 bits to represent values from 0 to 9. That's about 83%
efficient relative to pure binary. (And it still can't represent 1/3.)
That's a problem of where your numbers stem from. "1/3" is a formula!
1/3 is also a C expression with the value 0. But what I was
referring to was the real number 1/3, the unique real number that
yields one when multiplied by three.
Yes, sure. That was also how I interpreted it; that you meant (in
"C" parlance) 1.0/3.0.
No, it is very much the point that the C expression 1.0/3.0 cannot have
the value he's talking about [...]
I was talking about the real value indicated by the formula '1/3'.
When Keith pointed out that as a C expression it is '0', I refined it
to '1.0/3.0' to address this misunderstanding. (That's all to say
here about that.)
On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:
[...]if C is going to become more suitable for such high-
precision calculations, it might need to become more Python-like.
C is not in search of a reason to exist.
On Fri, 27 Jun 2025 02:40:58 +0100, Richard Heathfield wrote:
On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:
[...]if C is going to become more suitable for such high-
precision calculations, it might need to become more Python-like.
C is not in search of a reason to exist.
Not in traditional fixed-precision arithmetic, anyway -- at least after it fully embraced IEEE 754.
With higher-precision arithmetic, on the other hand, the traditional C paradigms may not be so suitable.
On 15/07/2025 20:41, Lawrence D'Oliveiro wrote:
On Fri, 27 Jun 2025 02:40:58 +0100, Richard Heathfield wrote:
On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:
[...]if C is going to become more suitable for such high-
precision calculations, it might need to become more Python-like.
C is not in search of a reason to exist.
Not in traditional fixed-precision arithmetic, anyway -- at least
after it fully embraced IEEE 754.
With higher-precision arithmetic, on the other hand, the
traditional C paradigms may not be so suitable.
If you want something else, you know where to find it. There is no
value in eroding the differences in all languages until only one
universal language emerges. Vivat differentia.
On Wed, 16 Jul 2025 03:55:14 +0100, Richard Heathfield wrote:
On 15/07/2025 20:41, Lawrence D'Oliveiro wrote:
On Fri, 27 Jun 2025 02:40:58 +0100, Richard Heathfield wrote:
On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:
[...]if C is going to become more suitable for such high-
precision calculations, it might need to become more Python-like.
C is not in search of a reason to exist.
Not in traditional fixed-precision arithmetic, anyway -- at least
after it fully embraced IEEE 754.
With higher-precision arithmetic, on the other hand, the
traditional C paradigms may not be so suitable.
If you want something else, you know where to find it. There is no
value in eroding the differences in all languages until only one
universal language emerges. Vivat differentia.
You sound as though you don’t want languages copying ideas from each
other.
On 20/07/2025 01:16, Lawrence D'Oliveiro wrote:
On Wed, 16 Jul 2025 03:55:14 +0100, Richard Heathfield wrote:
[...]
You sound as though you don’t want languages copying ideas from each
other.
[...]
There's nothing wrong with new languages pinching ideas from old
languages - that's creativity and progress, especially when those ideas
are combined in new and interesting ways, and you can keep on adding
those ideas right up until your second reference implementation goes
public.
But going the other way turns a programming language into a constantly
moving target that it's impossible for more than a handful of people
to master - the handful in question being those who decide what's in
and what's out. This is bad for programmers' expertise and bad for the
industry.
It's somewhat more complicated than that. IEEE-784 is a
radix-independent standard, otherwise equivalent to IEEE-754.
On Sun, 29 Jun 2025 09:23:01 -0400, James Kuyper wrote:
It's somewhat more complicated than that. IEEE-784 is a
radix-independent standard, otherwise equivalent to IEEE-754.
Did you mean IEEE-854?
Astronomers count Julian Day Numbers from 4713 BC proleptic Julian.
This was chosen to ensure that all astronomical observations or events
in recorded history have positive dates.
Huge numbers of systems already use the perfectly reasonable POSIX
epoch, 1970-01-01 00:00:00 UTC. I can think of no good reason to
standardize anything else.
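The two epochs are a fixed offset apart: the POSIX epoch falls at
Julian Date 2440587.5. A minimal sketch of the conversion (assuming a
POSIX system, where time_t counts seconds since 1970):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t now = time(NULL);  /* seconds since 1970-01-01 00:00:00 UTC */
        double jd = (double)now / 86400.0 + 2440587.5;  /* days since 4713 BC */
        printf("Julian Date: %.5f\n", jd);
        return 0;
    }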