• Re: "A diagram of C23 basic types"

    From Janis Papanagnou@21:1/5 to Alexis on Wed Apr 2 09:02:58 2025
    On 02.04.2025 07:59, Alexis wrote:

    Thought people here might be interested in this image on Jens Gustedt's
    blog, which translates section 6.2.5, "Types", of the C23 standard
    into a graph of inclusions:

    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/

    A nice overview. - I have questions on some of these types...

    The _Decimal* types - are these just types with other implicit
    encodings, say, BCD encoded, or some such?

    The nullptr_t seems to be a special beast concerning the "NULL"
    entity; what purpose does that type serve, where is it used?

    I see the 'bool' but recently seen mentioned some '_Bool' type.
    The latter was probably chosen in that special syntax to avoid
    conflicts during "C" language evolution?
    How do regular "C" programmers handle that multitude of boolean
    types; ranging from use of 'int', over own "bool" types, then
    '_Bool', and now 'bool'? Since it's a very basic type it looks
    like you need hard transitions in evolution of your "C" code?

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Janis Papanagnou on Wed Apr 2 07:32:20 2025
    On 2025-04-02, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    On 02.04.2025 07:59, Alexis wrote:

    Thought people here might be interested in this image on Jens Gustedt's
    blog, which translates section 6.2.5, "Types", of the C23 standard
    into a graph of inclusions:

    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/

    A nice overview. - I have questions on some of these types...

    The _Decimal* types - are these just types with other implicit
    encodings, say, BCD encoded, or some such?

    IEEE 754 defines decimal floating point types now, so that's what
    that is about. The spec allows for the significand to be encoded
    using Binary Integer Decimal, or to use Densely Packed Decimal.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Janis Papanagnou on Wed Apr 2 11:33:17 2025
    On 02/04/2025 09:02, Janis Papanagnou wrote:
    On 02.04.2025 07:59, Alexis wrote:

    Thought people here might be interested in this image on Jens Gustedt's
    blog, which translates section 6.2.5, "Types", of the C23 standard
    into a graph of inclusions:

    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/

    A nice overview. - I have questions on some of these types...

    The _Decimal* types - are these just types with other implicit
    encodings, say, BCD encoded, or some such?

    The nullptr_t seems to be a special beast concerning the "NULL"
    entity; what purpose does that type serve, where is it used?


    "nullptr_t" is the type of "nullptr". And "nullptr" is always a null
    pointer constant, just like the literal "0". But unlike "0", it can
    never be mistaken for an integer - thus compilers will be able to detect mistakes more easily. Consider the three basic ways of specifying a
    null pointer, and how they can be converted to an integer:

    int * p = 0;
    int * q = NULL;
    int * r = nullptr;

    int a = 0; // No complaints from the compiler!

    int b = NULL; // Usually an error
    int c = (int) NULL; // No complaints

    int d = nullptr; // Error
    int e = (int) nullptr; // Error
    int f = (int) (void*) nullptr; // You really mean it!


    This is what makes "nullptr" safer, and thus a useful addition to C. It
    is also consistent with C++, where "nullptr" is more important (for use
    in function overloads).

    The type "nullptr_t" itself is unlikely to be particularly useful, but
    has to exist because "nullptr" needs a type. It could conceivably be
    used in a _Generic, but I have not seen any applications of that.

    I see the 'bool' but recently seen mentioned some '_Bool' type.
    The latter was probably chosen in that special syntax to avoid
    conflicts during "C" language evolution?

    Yes. The tradition in C has been that added keywords have had this
    _Form, since that type of identifier is reserved. Thus between C99
    (when _Bool was added) and C17, the type was named "_Bool", and the
    header <stdbool.h> contained:

    #define bool _Bool
    #define true 1
    #define false 0

    For C23, it was decided that after 20-odd years people should be using
    the standard C boolean types, and thus the type was renamed "bool" (with "_Bool" kept as a synonym), and "true" and "false" became keywords.
    <stdbool.h> is now basically empty (and unnecessary) in C23 mode, but of
    course still exists for compatibility.

    How do regular "C" programmers handle that multitude of boolean
    types; ranging from use of 'int', over own "bool" types, then
    '_Bool', and now 'bool'? Since it's a very basic type it looks
    like you need hard transitions in evolution of your "C" code?


    For anything that is not hampered by ancient legacy code, you use :

    #include <stdbool.h>

    bool a = false;
    bool b = true;

    That applies from C99 up to and including C23 - though if your code is definitely C23-only, you can happily omit the #include.

    For pre-C99 code that uses home-made boolean types, it is common in
    reusable code (such as libraries) for the types and everything else to
    be prefixed - "MYLIB_boolean", etc. But some code might use the names
    "bool", "true" and "false" with their own definitions. If those are
    #define definitions, the code will probably work much as before though
    it is technically UB, but if they are typedefs, enums, etc., they cannot
    be compiled as C23.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Alexis on Wed Apr 2 10:57:29 2025
    On 02/04/2025 06:59, Alexis wrote:

    Thought people here might be interested in this image on Jens Gustedt's
    blog, which translates section 6.2.5, "Types", of the C23 standard
    into a graph of inclusions:

    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/


    So much for C being a 'simple' language.

    However the chart seems unnecessarily over-elaborate in some areas,
    while missing some features:

    * I thought that enumerations could now have their own integer type

    * Where are the fixed-width types from stdint.h?

    * How does unsigned BitInt end up as a signed bit-precise version?

    * What about bit-fields?

    I also have trouble with 'basic type' used for BitInt which could be arbitrarily large, or for Complex types. Could they also end up as
    Scalars? It's hard to see when the coloured lines should be followed.

    Anyway, the good thing is that if I now look at an Ada type hierarchy,
    it appears simple by comparison!

    All enumeration/integer types end up as Discrete, and all float types as
    Real; together they are Scalars.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Wed Apr 2 10:14:01 2025
    On Wed, 2 Apr 2025 10:57:29 +0100
    bart <bc@freeuk.com> wibbled:
    On 02/04/2025 06:59, Alexis wrote:

    Thought people here might be interested in this image on Jens Gustedt's
    blog, which translates section 6.2.5, "Types", of the C23 standard
    into a graph of inclusions:

    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/


    So much for C being a 'simple' language.

    C should be left alone. It does what it needs to do for a systems language. Almost no one uses it for applications any more, and sophisticated processing using complex types, for example, is far better done in C++.

    IMO, YMMV.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Muttley@DastardlyHQ.org on Wed Apr 2 15:35:31 2025
    On 02/04/2025 12:14, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 10:57:29 +0100
    bart <bc@freeuk.com> wibbled:
    On 02/04/2025 06:59, Alexis wrote:

    Thought people here might be interested in this image on Jens Gustedt's
    blog, which translates section 6.2.5, "Types", of the C23 standard
    into a graph of inclusions:

    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/

    So much for C being a 'simple' language.

    C should be left alone. It does what it needs to do for a systems language. Almost no one uses it for applications any more, and sophisticated processing using complex types, for example, is far better done in C++.

    IMO, YMMV.


    The C standards committee knows what C is used for. You can be quite
    confident that they have heard plenty of people say that "C should be
    left alone", as well as other people say "We would like feature X to be standardised in C".

    To those that think C should not change, the answer is obvious - implementations are not going to drop support for C90, C99, C11 or C17
    just because C23 has been released. So you can continue to use the old familiar C as much as you want.

    Meanwhile, those who want the new features can use them. While it is
    rare that new applications are written in C, and existing C applications
    will not change to newer C standards, there is plenty of other kinds of
    coding for which C is the most suitable (or at least most used) language.

    Changes and new features are not added to the C standards just for fun,
    or just to annoy people - they are there because some people want them
    and expect that they can write better / faster / clearer / safer /
    easier code as a result.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Wed Apr 2 14:05:17 2025
    On Wed, 2 Apr 2025 15:35:31 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 12:14, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 10:57:29 +0100
    bart <bc@freeuk.com> wibbled:
    On 02/04/2025 06:59, Alexis wrote:

    Thought people here might be interested in this image on Jens Gustedt's
    blog, which translates section 6.2.5, "Types", of the C23 standard
    into a graph of inclusions:

    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/

    So much for C being a 'simple' language.

    C should be left alone. It does what it needs to do for a systems language.
    Almost no one uses it for applications any more, and sophisticated processing
    using complex types, for example, is far better done in C++.

    IMO, YMMV.


    The C standards committee knows what C is used for. You can be quite
    confident that they have heard plenty of people say that "C should be
    left alone", as well as other people say "We would like feature X to be
    standardised in C".

    I suspect the people who are happy with C never have any correspondence with
    anyone from the committee, so they get an entirely biased sample. Just like
    it's usually only people who had a bad experience that fill in "How did we do"
    surveys.

    Changes and new features are not added to the C standards just for fun,
    or just to annoy people - they are there because some people want them
    and expect that they can write better / faster / clearer / safer /
    easier code as a result.

    And add complexity to compilers.

    So what exactly is better / faster / clearer / safer in C23?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to Muttley@dastardlyhq.org on Wed Apr 2 14:12:18 2025
    Muttley@dastardlyhq.org wrote:
    On Wed, 2 Apr 2025 10:57:29 +0100
    bart <bc@freeuk.com> wibbled:
    On 02/04/2025 06:59, Alexis wrote:

    Thought people here might be interested in this image on Jens Gustedt's
    blog, which translates section 6.2.5, "Types", of the C23 standard
    into a graph of inclusions:

    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/

    So much for C being a 'simple' language.

    C should be left alone. It does what it needs to do for a systems language. Almost no one uses it for applications any more, and sophisticated processing using complex types, for example, is far better done in C++.

    C99 has VMT (variably modified types). Thanks to VMT and complex types
    C99 can naturally do numeric computing that previously was done using
    Fortran 77. Official C++ has no VMT. C++ mechanisms look nicer,
    but can be less efficient than using VMTs, so C has an advantage for
    basic numeric "cores".

    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Muttley@DastardlyHQ.org on Wed Apr 2 16:59:45 2025
    On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 15:35:31 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 12:14, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 10:57:29 +0100
    bart <bc@freeuk.com> wibbled:
    On 02/04/2025 06:59, Alexis wrote:

    Thought people here might be interested in this image on Jens Gustedt's
    blog, which translates section 6.2.5, "Types", of the C23 standard
    into a graph of inclusions:

    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/


    So much for C being a 'simple' language.

    C should be left alone. It does what it needs to do for a systems language.
    Almost no one uses it for applications any more, and sophisticated processing
    using complex types, for example, is far better done in C++.

    IMO, YMMV.


    The C standards committee knows what C is used for. You can be quite
    confident that they have heard plenty of people say that "C should be
    left alone", as well as other people say "We would like feature X to be
    standardised in C".

    I suspect the people who are happy with C never have any correspondence with anyone from the committee, so they get an entirely biased sample. Just like it's usually only people who had a bad experience that fill in "How did we do" surveys.

    And I suspect that you haven't a clue who the C standards committee talk
    to - and who those people in turn have asked.

    Of course there are plenty of people who just use whatever the compiler
    and library provide, without complaint or feedback of any kind, without thinking about standards, much less reading about them. And everyone
    involved - the library developers, the compiler developers, the
    standards committee - /knows/ this. Everyone involved /knows/ that
    compilers and libraries, the C implementations, will continue to support
    the existing standards and won't break old, working code because of new keywords or features.


    Changes and new features are not added to the C standards just for fun,
    or just to annoy people - they are there because some people want them
    and expect that they can write better / faster / clearer / safer /
    easier code as a result.

    And add complexity to compilers.


    Yes. But that's okay - it has always been acceptable to make compiler
    writers do more work if it is better for compiler users.


    So what exactly is better / faster / clearer / safer in C23?


    Have a little look at Annex M of the standard and you can see the list
    of changes:

    <https://open-std.org/JTC1/SC22/WG14/www/docs/n3467.pdf>

    The things I expect to use from C23 in my own work include:

    1. I will gradually move over to standard attributes instead of gcc
    attributes where practical. That will make code more portable. (The attributes themselves improve static error checking and code efficiency,
    but that will be much the same in "-std=c23" compared to "-std=gnu17".)

    2. Single-argument "static_assert", without needing macros - static
    assertions make code safer and sometimes more efficient, and the single-argument version makes it neater and clearer.

    3. /Finally/ the old-style function declarations are dead. "int foo();"
    now means what many programmers think it means, for greater safety and convenience. Identifiers can be omitted in function definitions if they
    are not used, which is clearer and neater than alternatives.

    4. I expect to use a few of the new library functions, such as memccpy,
    as safer and/or more efficient than alternatives.

    5. "constexpr" for safety, efficiency and convenience.

    6. "static" compound literals.

    7. "typeof" and "auto" for type inference is standardised, so I don't
    need to use the gcc extensions for them. These can make code safer and
    clearer (though their benefits are much smaller in C than C++).

    8. I probably won't use _BitInt(N) types directly - their primary use
    will be for hardware implementation of C code in FPGAs.

    9. Slightly better and safer enumeration types.

    10. Bit-manipulation functions for greater portability and possibly
    greater efficiency.

    11. nullptr for clarity and safety.

    12. Some improvements to variadic macros.

    13. Changes to the way "intmax_t" is defined might mean access to
    int128_t and uint128_t on more platforms.

    14. #elifdef and #elifndef can make complicated conditional compilation
    a little neater.

    15. #warning is now standard.

    16. Binary literals are now standard.

    17. Digit separators make large numbers clearer.

    18. "unreachable()" is now standard.

    19. printf (and friends) support for things like "%w32i" as the format specifier for int32_t, so that we no longer need the ugly PRIi32 style
    of macro for portable code with fixed-size types.




    From the next version beyond C23, so far there is :

    1. Declarations in "if" and "switch" statements, like those in "for"
    loops, helps keep local variable scopes small and neat.

    2. Ranges in case labels - that speaks for itself (though again I used
    it already as a gcc extension).

    3. Hopefully the 0o and 0O octal prefixes will mean that the 0 octal
    prefix will be deprecated and warned about by compilers (other than for
    plain 0, obviously).



    None of this is a big deal, and many are already things I use in my
    code (since I can use gcc extensions), but there are enough small
    benefits that I will move over to C23 on new projects.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Wed Apr 2 15:12:20 2025
    On Wed, 2 Apr 2025 11:12:07 -0300
    Thiago Adams <thiago.adams@gmail.com> wibbled:
    Em 4/2/2025 11:05 AM, Muttley@DastardlyHQ.org escreveu:
    So what exactly is better / faster / clearer / safer in C23?

    We already had some C23 topics here.
    My list

    - #warning (better)
    - typeof/auto (better only when strictly necessary)

    Auto as per C++ where it's used as a substitute for unknown/long-winded templated types or in range-based loops? C doesn't have those so there's no reason to have it. If you don't know what type you're dealing with in C then you'll soon be up poo creek.

    - digit separator (better, safer)

    Meh.

    - binary literal useful

    We've had bitfields for years which cover most use cases.

    - #elifdef, OK not a problem, not complex..
    - _has_include useful
    - [[nodiscard]] safer (although I think it could be better defined)
    - static_assert no param (clear)

    Meh.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Wed Apr 2 15:16:24 2025
    On Wed, 2 Apr 2025 14:12:18 -0000 (UTC)
    antispam@fricas.org (Waldek Hebisch) wibbled:
    Muttley@dastardlyhq.org wrote:
    On Wed, 2 Apr 2025 10:57:29 +0100
    bart <bc@freeuk.com> wibbled:
    On 02/04/2025 06:59, Alexis wrote:

    Thought people here might be interested in this image on Jens Gustedt's
    blog, which translates section 6.2.5, "Types", of the C23 standard
    into a graph of inclusions:

    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/

    So much for C being a 'simple' language.

    C should be left alone. It does what it needs to do for a systems language.
    Almost no one uses it for applications any more, and sophisticated processing
    using complex types, for example, is far better done in C++.

    C99 has VMT (variably modified types). Thanks to VMT and complex types
    C99 can naturally do numeric computing that previously was done using
    Fortran 77. Official C++ has no VMT. C++ mechanisms look nicer,

    Officially no, but I've never come across a C++ compiler that didn't support them given they're all C compilers too.

    but can be less efficient than using VMTs, so C has an advantage for
    basic numeric "cores".

    Maybe; my knowledge of the internals of C++ std::array and std::vector isn't good enough to argue the point.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Alexis on Wed Apr 2 06:02:36 2025
    On Wed, 02 Apr 2025 16:59:59 +1100, Alexis wrote:

    Thought people here might be interested in this image on Jens Gustedt's
    blog, which translates section 6.2.5, "Types", of the C23 standard into
    a graph of inclusions:

    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/

    Wow, we have bit-precise integers now?

    PL/I, come back, all is forgiven!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Wed Apr 2 15:26:36 2025
    On Wed, 2 Apr 2025 16:59:45 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
    I suspect the people who are happy with C never have any correspondence with
    anyone from the committee, so they get an entirely biased sample. Just like
    it's usually only people who had a bad experience that fill in "How did we do"
    surveys.

    And I suspect that you haven't a clue who the C standards committee talk
    to - and who those people in turn have asked.

    By inference you do - so who are they?

    11. nullptr for clarity and safety.

    Never understood that in C++ never mind C. NULL has worked fine for 50 years.

    12. Some improvements to variadic macros.

    Might be useful. Would be nice to pass the "..." args directly through to lower level functions without having to convert them to a va_list first.

    18. "unreachable()" is now standard.

    Googled it - don't see the point. More syntactic noise.

    19. printf (and friends) support for things like "%w32i" as the format
    specifier for int32_t, so that we no longer need the ugly PRIi32 style
    of macro for portable code with fixed-size types.

    If you do a lot of cross-platform code it might be useful.

    To be honest you can do most of what you posted already - just compile C with a C++ compiler. Seems a case of catch-up me-too.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Muttley@DastardlyHQ.org on Wed Apr 2 16:33:46 2025
    On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 11:12:07 -0300
    Thiago Adams <thiago.adams@gmail.com> wibbled:
    Em 4/2/2025 11:05 AM, Muttley@DastardlyHQ.org escreveu:
    So what exactly is better / faster / clearer / safer in C23?

    We already had some C23 topics here.
    My list

    - #warning (better)
    - typeof/auto (better only when strictly necessary)

    Auto as per C++ where it's used as a substitute for unknown/long-winded templated types or in range-based loops? C doesn't have those so there's no reason to have it. If you don't know what type you're dealing with in C then you'll soon be up poo creek.

    - digit separator (better, safer)

    Meh.

    What's the problem with it? Here, tell me at a glance the magnitude of
    this number:

    10000000000

    You're either likely to get it wrong, or need to start counting digits.
    Here (not the same number) it's easier:

    100_000_000



    - binary literal useful

    We've had bitfields for years which cover most use cases.

    Bitfields and binary literals are completely different things! A binary
    literal looks like this (if I got the prefix right):

    0b1_1101_1101 // the decimal value 477 or hex value 1DD
    0b111011101 // same thing without the separators

    This is a bitfield, which can only appear inside a struct definition:

    int a:12;

    The mystery is why it's taken half a century to standardise such literals.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@dastardlyhq.com@21:1/5 to All on Wed Apr 2 15:51:17 2025
    On Wed, 2 Apr 2025 16:33:46 +0100
    bart <bc@freeuk.com> gabbled:
    On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
    Meh.

    What's the problem with it? Here, tell me at a glance the magnitude of
    this number:

    10000000000

    And how often do you hard-code values that large into a program? Almost
    never I imagine, unless it's some hex value to set flags in a word.

    We've had bitfields for years which cover most use cases.

    Bitfields and binary literals are completely different things! A binary
    literal looks like this (if I got the prefix right):

    0b1_1101_1101 // the decimal value 477 or hex value 1DD
    0b111011101 // same thing without the separators

    Well fair enough, I thought it was some new binary type. Yes those are useful. C++ has had them for over 10 years so you've been able to use them for a long time.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Muttley@DastardlyHQ.org on Wed Apr 2 16:38:03 2025
    On 02/04/2025 16:26, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 16:59:45 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
    I suspect the people who are happy with C never have any correspondence with
    anyone from the committee, so they get an entirely biased sample. Just like
    it's usually only people who had a bad experience that fill in "How did we do"
    surveys.

    And I suspect that you haven't a clue who the C standards committee talk
    to - and who those people in turn have asked.

    By inference you do - so who are they?

    11. nullptr for clarity and safety.

    Never understood that in C++ never mind C. NULL has worked fine for 50 years.

    And it's been a hack for 50 years. Especially when it is just:

    #define NULL 0

    You also need to include some header (which one?) in order to use it.
    I'd hope you wouldn't need to do that for nullptr, but backwards
    compatibility may require it (because of any forward-thinking
    individuals who have already defined their own 'nullptr').

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@dastardlyhq.com@21:1/5 to All on Wed Apr 2 15:53:21 2025
    On Wed, 2 Apr 2025 16:38:03 +0100
    bart <bc@freeuk.com> gabbled:
    On 02/04/2025 16:26, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 16:59:45 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
    I suspect the people who are happy with C never have any correspondence with
    anyone from the committee, so they get an entirely biased sample. Just like
    it's usually only people who had a bad experience that fill in "How did we do"
    surveys.

    And I suspect that you haven't a clue who the C standards committee talk
    to - and who those people in turn have asked.

    By inference you do - so who are they?

    11. nullptr for clarity and safety.

    Never understood that in C++ never mind C. NULL has worked fine for 50 years.


    And it's been a hack for 50 years. Especially when it is just:

    #define NULL 0

    (void *)0 I thought. But anyway, it works. Zero is zero is zero. There are no concerns about type or type size.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Muttley@DastardlyHQ.org on Wed Apr 2 16:16:27 2025
    Muttley@DastardlyHQ.org writes:
    On Wed, 2 Apr 2025 16:59:45 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
    ist first.

    18. "unreachable()" is now standard.

    Googled it - don't see the point.

    That's a defect in your understanding, not a defect in the standard.

    I've found the gcc equivalent useful often in standalone
    applications (OS, Hypervisor, standalone utilities, etc).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Muttley@dastardlyhq.com on Wed Apr 2 16:20:05 2025
    Muttley@dastardlyhq.com writes:
    On Wed, 2 Apr 2025 16:33:46 +0100
    bart <bc@freeuk.com> gabbled:
    On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
    Meh.

    What's the problem with it? Here, tell me at a glance the magnitude of
    this number:

    10000000000

    And how often do you hard-code values that large into a program? Almost
    never I imagine, unless it's some hex value to set flags in a word.

    Every day, several times a day. 16 hex digit constants are very
    common in my work. The digit separator really helps with readability,
    although I would have preferred '_' over "'".

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Muttley@DastardlyHQ.org on Wed Apr 2 16:18:12 2025
    Muttley@DastardlyHQ.org writes:
    On Wed, 2 Apr 2025 11:12:07 -0300
    Thiago Adams <thiago.adams@gmail.com> wibbled:
    Em 4/2/2025 11:05 AM, Muttley@DastardlyHQ.org escreveu:
    So what exactly is better / faster / clearer / safer in C23?

    We already had some C23 topics here.
    My list

    - #warning (better)
    - typeof/auto (better only when strictly necessary)

    Auto as per C++ where it's used as a substitute for unknown/long-winded
    templated types or in range-based loops? C doesn't have those so there's no
    reason to have it. If you don't know what type you're dealing with in C then
    you'll soon be up poo creek.

    - digit separator (better, safer)

    Meh.

    - binary literal useful

    We've had bitfields for years which cover most use cases.

    Do you understand the term 'binary literal'?

    0b1011'1001'1111'0101 is a binary literal.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Muttley@DastardlyHQ.org on Wed Apr 2 19:23:58 2025
    On 02/04/2025 17:26, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 16:59:45 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
    I suspect the people who are happy with C never have any correspondence with
    anyone from the committee, so they get an entirely biased sample. Just like
    it's usually only people who had a bad experience that fill in "How did we do"
    surveys.

    And I suspect that you haven't a clue who the C standards committee talk
    to - and who those people in turn have asked.

    By inference you do - so who are they?

    That's an unwarranted inference. I assume that they talk with compiler developers, library developers, and representatives of at least some
    users (typically from large companies or major projects). And those
    people will have contact with and feedback from their users and
    developers. I did "know" (in the sense of email and Usenet
    conversations, rather than personally) one person who used to be on the
    C standards committee, and know a little of how he handled things at the committee. So no, I did not say I had any special knowledge here - I
    simply stated that it is clear that /you/ have no idea.


    11. nullptr for clarity and safety.

    Never understood that in C++ never mind C. NULL has worked fine for 50 years.

    If ignorance really is bliss, you must be the happiest person around.
    Or you can read one of my other posts pointing out the advantages of
    nullptr.


    12. Some improvements to variadic macros.

    Might be useful. Would be nice to pass the "..." args directly through to lower
    level functions without having to convert them to a va_list first.

    18. "unreachable()" is now standard.

Googled it - don't see the point. More syntactic noise.

    Ignorant and proud of it!


    19. printf (and friends) support for things like "%w32i" as the format
    specifier for int32_t, so that we no longer need the ugly PRIi32 style
    of macro for portable code with fixed-size types.

    If you do a lot of cross platform code might be useful.

To be honest you can do most of what you posted already - just compile C with a C++
compiler. Seems a case of catch-up me-too.

    A number of these changes did come over from C++, yes. That does not
    mean they are not useful or wanted in C - it means the C world is happy
    to let C++ go first, then copy what has been shown to be useful. I
    think that is a good strategy.

    Some people (including me) will choose to use C++, but others prefer to
    (or are required to) use C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Wed Apr 2 19:29:24 2025
    On 02/04/2025 17:38, bart wrote:
    On 02/04/2025 16:26, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 16:59:45 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
    I suspect the people who are happy with C never have any
    correspondence with
    anyone from the committee so they get an entirely biased sample.
    Just like
it's usually only people who had a bad experience that fill in "How
    did we do"

    surveys.

And I suspect that you haven't a clue who the C standards committee talk
to - and who those people in turn have asked.

By inference you do - so who are they?

    11. nullptr for clarity and safety.

    Never understood that in C++ never mind C. NULL has worked fine for 50
    years.

    And it's been a hack for 50 years. Especially when it is just:

      #define NULL 0


    The common definition in C is :

    #define NULL ((void*) 0)

    Some compilers might have an extension, such as gcc's "__null", that are
    used instead to allow better static error checking.

    (In C++, it is often defined to 0, because the rules for implicit
    conversions from void* are different in C++.)


    You also need to include some header (which one?) in order to use it.

    <stddef.h>, as pretty much any C programmer will know.

    I'd hope you wouldn't need to do that for nullptr, but backwards compatibility may require it (because of any forward-thinking
    individuals who have already defined their own 'nullptr').


    No, nullptr is a keyword in C23.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to David Brown on Wed Apr 2 18:02:07 2025
    On 2025-04-02, David Brown <david.brown@hesbynett.no> wrote:
    Changes and new features are not added to the C standards just for fun,
    or just to annoy people - they are there because some people want them

    New features, as such, are not just added to annoy people.

New features that have long-standing, excellent counterparts in GCC
and Clang extensions, but are incompatible and horribly worse,
are added just to annoy people.

    For instance, oh, alignment as a storage class specifier
    instead of an attribute system.

    When a thing exists, the job of the standard is to standardize what
    exists, and not invent some caricature of it.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to David Brown on Wed Apr 2 18:04:29 2025
    David Brown <david.brown@hesbynett.no> writes:
    On 02/04/2025 17:26, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 16:59:45 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
    I suspect the people who are happy with C never have any correspondence with
anyone from the committee so they get an entirely biased sample. Just like it's usually only people who had a bad experience that fill in "How did we do"

    surveys.

And I suspect that you haven't a clue who the C standards committee talk
to - and who those people in turn have asked.

By inference you do - so who are they?

That's an unwarranted inference. I assume that they talk with compiler developers, library developers, and representatives of at least some
    users (typically from large companies or major projects). And those
    people will have contact with and feedback from their users and
    developers.

    You are basically correct. While I was never on the C committee, I've
    been on several others (88Open, XOpen/Posix, PCI-SIG, Unix International
    and a couple of technical advisory boards) over the
    years and your assumptions are spot-on.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Wed Apr 2 19:26:39 2025
    On 02/04/2025 18:29, David Brown wrote:
    On 02/04/2025 17:38, bart wrote:
    On 02/04/2025 16:26, Muttley@DastardlyHQ.org wrote:

    Never understood that in C++ never mind C. NULL has worked fine for
    50 years.

    And it's been a hack for 50 years. Especially when it is just:

       #define NULL 0


    The common definition in C is :

        #define NULL ((void*) 0)

    Some compilers might have an extension, such as gcc's "__null", that are
    used instead to allow better static error checking.

    (In C++, it is often defined to 0, because the rules for implicit
    conversions from void* are different in C++.)


    You also need to include some header (which one?) in order to use it.

    <stddef.h>, as pretty much any C programmer will know.

    This program:

    void* p = NULL;

    reports that NULL is undefined, but that can be fixed by including any
    of stdio.h, stdlib.h or string.h. Those are the first three I tried;
    there may be others.

So it is not true that you need to include stddef.h, nor obvious that that
is where NULL is defined, if you are used to having it available indirectly.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to bart on Wed Apr 2 18:48:06 2025
    bart <bc@freeuk.com> writes:
    On 02/04/2025 18:29, David Brown wrote:
    On 02/04/2025 17:38, bart wrote:
    On 02/04/2025 16:26, Muttley@DastardlyHQ.org wrote:

    Never understood that in C++ never mind C. NULL has worked fine for
    50 years.

    And it's been a hack for 50 years. Especially when it is just:

       #define NULL 0


    The common definition in C is :

        #define NULL ((void*) 0)

    Some compilers might have an extension, such as gcc's "__null", that are
    used instead to allow better static error checking.

    (In C++, it is often defined to 0, because the rules for implicit
    conversions from void* are different in C++.)


    You also need to include some header (which one?) in order to use it.

    <stddef.h>, as pretty much any C programmer will know.

    This program:

    void* p = NULL;

    reports that NULL is undefined, but that can be fixed by including any
    of stdio.h, stdlib.h or string.h. Those are the first three I tried;
    there may be others.

So it is not true that you need to include stddef.h, nor obvious that that
is where NULL is defined, if you are used to having it available indirectly.

    Indeed, and it is well documented.

    For example, in the POSIX description for the string functions you'll
    find the following statement:

    [CX] Inclusion of the <string.h> header may also make visible all
    symbols from <stddef.h>. [Option End]

This is true for a number of POSIX headers, including those you enumerate
    above.

    [CX] marks a POSIX extension to ISO C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Wed Apr 2 18:51:34 2025
    On 2025-04-02, bart <bc@freeuk.com> wrote:
So it is not true that you need to include stddef.h, nor obvious that that
is where NULL is defined, if you are used to having it available indirectly.

    It's documented as the canonical source of NULL.

    In C90, now 35 years ago, it was written up like this:

    7.1.6 Common definitions <stddef.h>

    The following types and macros are defined in the standard header
    <stddef.h>. Some are also defined in other headers, as noted in their
    respective subclauses.

    ...

    The macros are

    NULL
    which expands to an implementation-defined null pointer constant: and

    offsetof(type, member-designator)

    ... etc

    There is no other easy way to find that out. An implementation could directly stick #define NULL into every header that is either allowed or required to reveal that macro, and so from that you would not know which headers are required to provide it.

    Many things are not going to be "obvious" if you don't use documentation.

(In my opinion, things would be better if headers were not allowed to behave as if they include other headers, or provide identifiers also given in other headers. Not in ISO C, and not in POSIX. Every identifier should be declared in exactly one home header, and no other header should provide that definition. Programs ported from one Unix to another sometimes break for no other reason than this! On the original platform, a header provided a certain identifier which it is not required to; on the new platform that same header doesn't do that.)

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to bart on Wed Apr 2 23:24:43 2025
    On Wed, 2 Apr 2025 16:38:03 +0100
    bart <bc@freeuk.com> wrote:

    On 02/04/2025 16:26, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 16:59:45 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
    I suspect the people who are happy with C never have any
    correspondence with anyone from the committee so they get an
entirely biased sample. Just like it's usually only people who had
    a bad experience that fill in "How did we do"

    surveys.

    And I suspect that you haven't a clue who the C standards
    committee talk to - and who those people in turn have asked.

By inference you do - so who are they?

    11. nullptr for clarity and safety.

    Never understood that in C++ never mind C. NULL has worked fine for
    50 years.

    And it's been a hack for 50 years. Especially when it is just:

    #define NULL 0

    You also need to include some header (which one?) in order to use it.
    I'd hope you wouldn't need to do that for nullptr, but backwards compatibility may require it (because of any forward-thinking
    individuals who have already defined their own 'nullptr').



    C23 is rather bold in that regard, adding non-underscored keywords as
    if there was no yesterday. IMHO, for no good reasons.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Scott Lurndal on Wed Apr 2 23:31:07 2025
    On 02.04.2025 18:20, Scott Lurndal wrote:
    Muttley@dastardlyhq.com writes:
    On Wed, 2 Apr 2025 16:33:46 +0100
    bart <bc@freeuk.com> gabbled:
    On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
    Meh.

    What's the problem with it? Here, tell me at a glance the magnitude of
    this number:

    10000000000

    And how often do you hard code values that large into a program? Almost
never I imagine unless it's some hex value to set flags in a word.

    I can't tell generally; it certainly depends on the application
    contexts.

    And of course for bases lower than 10 the numeric literals grow
    in length, so its usefulness is probably most obvious in binary
    literals. But why restrict a readability feature to binary only?

    It's useful and it doesn't hurt (WRT compatibility).


    Every day, several times a day. 16 hex digit constants are very
    common in my work. The digit separator really helps with readability, although I would have preferred '_' over "'".

    Obviously a question of opinion depending on where one comes from.

    I see a couple options for the group separator. Spaces (as used in
    Algol 68) are probably most readable, but maybe a no-go in "C".
Locale-specific separators (dot and comma vs. comma and dot, in
fractional numbers) are problematic, and commas carry semantics of
their own.
    The single quote is actually what I found well suited in the past;
    it stems (I think) from the convention used in Switzerland. The
    underscore you mention didn't occur to me as option, but it's not
    bad as well.

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to David Brown on Wed Apr 2 23:43:49 2025
    On 02.04.2025 16:59, David Brown wrote:
    [...]

    From the next version beyond C23, so far there is :

    1. Declarations in "if" and "switch" statements, like those in "for"
    loops, helps keep local variable scopes small and neat.

    Oh, I thought that would already be supported in some existing "C"
    version for the 'if'; I probably confused that with C++.

    2. Ranges in case labels - that speaks for itself (though again I used
    it already as a gcc extension).

    Heh! - As a GNU Awk user I'd be disappointed by anything less than
    the option to use regexps here. :-)

    Janis

    [...]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Janis Papanagnou on Wed Apr 2 23:32:28 2025
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 02.04.2025 18:20, Scott Lurndal wrote:
    Muttley@dastardlyhq.com writes:
    On Wed, 2 Apr 2025 16:33:46 +0100
    bart <bc@freeuk.com> gabbled:
    On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
    Meh.

What's the problem with it? Here, tell me at a glance the magnitude of this number:

    10000000000

    And how often do you hard code values that large into a program? Almost
never I imagine unless it's some hex value to set flags in a word.

    I can't tell generally; it certainly depends on the application
    contexts.

    And of course for bases lower than 10 the numeric literals grow
    in length, so its usefulness is probably most obvious in binary
    literals. But why restrict a readability feature to binary only?

    It's useful and it doesn't hurt (WRT compatibility).


    Every day, several times a day. 16 hex digit constants are very
    common in my work. The digit separator really helps with readability,
    although I would have preferred '_' over "'".

    Obviously a question of opinion depending on where one comes from.

    Verilog uses _ as a digit separator.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Scott Lurndal on Thu Apr 3 01:10:33 2025
    On 02/04/2025 17:20, Scott Lurndal wrote:
    Muttley@dastardlyhq.com writes:
    On Wed, 2 Apr 2025 16:33:46 +0100
    bart <bc@freeuk.com> gabbled:
    On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
    Meh.

    What's the problem with it? Here, tell me at a glance the magnitude of
    this number:

    10000000000

    And how often do you hard code values that large into a program? Almost
never I imagine unless it's some hex value to set flags in a word.

    Every day, several times a day. 16 hex digit constants are very
    common in my work. The digit separator really helps with readability, although I would have preferred '_' over "'".

    Oh, I thought C23 used '_', since Python uses that. I prefer single
    quote as that is not shifted on my keyboard. (My language projects just
    allow both!)

The fact that it is not widespread is a problem, however, so I can't use
    either without restricting the compilers that can be used.

    For example gcc 14.x on Windows accepts it with -std=c23 only; gcc on
    WSL doesn't; tcc doesn't.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Scott Lurndal on Thu Apr 3 03:02:02 2025
    On 03.04.2025 01:32, Scott Lurndal wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    [...]

    Obviously a question of opinion depending on where one comes from.

    Verilog uses _ as a digit separator.

    And Kornshell's 'printf' uses ',' for output formatting as in

    $ printf "%,d\n" 1234567
    1,234,567

    Maybe it should be configurable?

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Janis Papanagnou on Wed Apr 2 20:53:49 2025
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:

    [...]

    I see the 'bool' but recently seen mentioned some '_Bool' type.
    The latter was probably chosen in that special syntax to avoid
    conflicts during "C" language evolution?
    How do regular "C" programmers handle that multitude of boolean
    types; ranging from use of 'int', over own "bool" types, then
    '_Bool', and now 'bool'? Since it's a very basic type it looks
    like you need hard transitions in evolution of your "C" code?

    I use a typedef, in order to isolate code from changes in the C
    standard, and to facilitate moving code from one compilation
    environment to another. As much as I can I try to write code
    that is both platform- and standard-version- independent (within
    limits, obviously, but usually code can be written so that there
    will be complaints at compile time if the relevant limits are not
    observed).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Kaz Kylheku on Thu Apr 3 05:43:40 2025
    On 02.04.2025 09:32, Kaz Kylheku wrote:
    On 2025-04-02, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    On 02.04.2025 07:59, Alexis wrote:

    Thought people here might be interested in this image on Jens Gustedt's
    blog, which translates section 6.2.5, "Types", of the C23 standard
    into a graph of inclusions:

    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/

    A nice overview. - I have questions on some of these types...

    The _Decimal* types - are these just types with other implicit
    encodings, say, BCD encoded, or some such?

    IEEE 754 defines decimal floating point types now, so that's what
    that is about. The spec allows for the significand to be encoded
    using Binary Integer Decimal, or to use Densely Packed Decimal.

    Thanks for the hint and keywords. It seems my BCD guess was not far
    from what these two IEEE formats actually are.

    Does that now mean that every conforming C23 compiler must support
    yet more numeric types, including multiple implementation-versions
    of the whole arithmetic functions and operators necessary?

    I wonder why these variants had been introduced.

    In many other languages you have abstractions of numeric types, not
every implicit encoding variant revealed at the programming level.

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Kaz Kylheku on Wed Apr 2 21:06:48 2025
    Kaz Kylheku <643-408-1753@kylheku.com> writes:

    [some symbols are defined in more than one header]

    (In my opinion, things would be better if headers were not allowed
    to behave as if they include other headers, or provide identifiers
    also given in other heards. Not in ISO C, and not in POSIX.
    Every identifier should be declared in exactly one home header,
    and no other header should provide that definition. [...])

    Not always practical. A good example is the type size_t. If a
    function takes an argument of type size_t, then the symbol size_t
    should be defined, no matter which header the function is being
    declared in. Similarly for NULL for any function that has defined
    behavior on some cases of arguments that include NULL. No doubt
    there are other compelling examples.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Scott Lurndal on Wed Apr 2 21:00:14 2025
    scott@slp53.sl.home (Scott Lurndal) writes:

    bart <bc@freeuk.com> writes:
    [...]
So it is not true that you need to include stddef.h, nor obvious
    that that is where NULL is defined, if you are used to having it
    available indirectly.

    Indeed, and it is well documented.

    For example, in the POSIX description for the string functions
    you'll find the following statement:

    [CX] Inclusion of the <string.h> header may also make visible
    all symbols from <stddef.h>. [Option End]

This is true for a number of POSIX headers, including those you
    enumerate above.

    [CX] marks a POSIX extension to ISO C.

    How strange. I don't know why anyone would ever want either to
    rely on or to take advantage of this property. To make the extra
    symbols visible, does there need to be something like

    #define __POSIX 1

    to enable the non-conforming behavior?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Kaz Kylheku on Thu Apr 3 00:35:28 2025
    On 4/2/25 14:02, Kaz Kylheku wrote:
    ...
    When a thing exists, the job of the standard is to standardize what
    exists, and not invent some caricature of it.

In the process of standardization, the committee is supposed to exercise
its judgement, and if that judgement says that there's a better way to
do something than the way for which there is existing practice, they have
an obligation to correct the design of that feature accordingly.

    Feel free to disagree with the committee's judgement in this matter, but
    don't presume that existing practice overrides good design - that's not
    one of the rules the committee works under.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Thu Apr 3 05:09:21 2025
    On 2025-04-03, bart <bc@freeuk.com> wrote:
    On 02/04/2025 17:20, Scott Lurndal wrote:
    Muttley@dastardlyhq.com writes:
    On Wed, 2 Apr 2025 16:33:46 +0100
    bart <bc@freeuk.com> gabbled:
    On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
    Meh.

What's the problem with it? Here, tell me at a glance the magnitude of this number:

    10000000000

    And how often do you hard code values that large into a program? Almost
never I imagine unless it's some hex value to set flags in a word.

    Every day, several times a day. 16 hex digit constants are very
    common in my work. The digit separator really helps with readability,
    although I would have preferred '_' over "'".

    Oh, I thought C23 used '_', since Python uses that. I prefer single
    quote as that is not shifted on my keyboard. (My language projects just
    allow both!)

    I made , (comma) the digit separator in TXR Lisp. Nobody uses _ in the
    real world.

    I understand that in some countries, that is the decimal point. That is
    not relevant in programming languages that use a period for that and are
    not localized.

    Comma means I can just copy and paste a figure from a financial document
    or application, or any other document which uses that convention.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Muttley@DastardlyHQ.org on Wed Apr 2 17:28:24 2025
    On Wed, 2 Apr 2025 14:05:17 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:


    So what exactly is better / faster / clearer / safer in C23?


    Are you banned in Wikipedia?!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Wed Apr 2 15:17:10 2025
    On Wed, 2 Apr 2025 17:28:24 +0300
    Michael S <already5chosen@yahoo.com> wibbled:
    On Wed, 2 Apr 2025 14:05:17 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:


    So what exactly is better / faster / clearer / safer in C23?


    Are you banned in Wikipedia?!


If everyone just used the web there'd be no posts to Usenet. Sometimes it's nicer to have things distilled down rather than wading through pages of
waffle.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to James Kuyper on Thu Apr 3 06:21:14 2025
    On 2025-04-03, James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
    On 4/2/25 14:02, Kaz Kylheku wrote:
    ...
    When a thing exists, the job of the standard is to standardize what
    exists, and not invent some caricature of it.

    In the process of standardization, the committee is supposed to exercise
its judgement, and if that judgement says that there's a better way to
do something than the way for which there is existing practice, they have
    an obligation to correct the design of that feature accordingly.

    Can you name one thing that was designed better by ISO C, for which
    a GCC extension was prior art?

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Janis Papanagnou on Thu Apr 3 09:38:05 2025
    On 03/04/2025 05:43, Janis Papanagnou wrote:
    On 02.04.2025 09:32, Kaz Kylheku wrote:
    On 2025-04-02, Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    On 02.04.2025 07:59, Alexis wrote:

Thought people here might be interested in this image on Jens Gustedt's blog, which translates section 6.2.5, "Types", of the C23 standard
    into a graph of inclusions:

https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/
    A nice overview. - I have questions on some of these types...

    The _Decimal* types - are these just types with other implicit
    encodings, say, BCD encoded, or some such?

    IEEE 754 defines decimal floating point types now, so that's what
    that is about. The spec allows for the significand to be encoded
    using Binary Integer Decimal, or to use Densely Packed Decimal.

    Thanks for the hint and keywords. It seems my BCD guess was not far
    from what these two IEEE formats actually are.

    Does that now mean that every conforming C23 compiler must support
    yet more numeric types, including multiple implementation-versions
    of the whole arithmetic functions and operators necessary?


    They are an optional feature (as are the other floating point and
    complex types beyond the basics of float, double and long double).

    I wonder why these variants had been introduced.

I presume that some people want them. They are in the ISO/IEC 60559
    standard, along with things like "interchange" floating point types
    intended to be maximally consistent between different systems.


    In many other languages you have abstractions of numeric types, not
every implicit encoding variant revealed at the programming level.


    That's often fine within a program, but sometimes you need to exchange
    data with other programs. In particular, C is the standard language for
    common libraries - being able to reliably and consistently exchange data
    with other languages and other machines is thus very important.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Thu Apr 3 09:57:51 2025
    On 03/04/2025 02:10, bart wrote:
    On 02/04/2025 17:20, Scott Lurndal wrote:
    Muttley@dastardlyhq.com writes:
    On Wed, 2 Apr 2025 16:33:46 +0100
    bart <bc@freeuk.com> gabbled:
    On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
    Meh.

What's the problem with it? Here, tell me at a glance the magnitude of this number:

         10000000000

    And how often do you hard code values that large into a program? Almost
never I imagine unless it's some hex value to set flags in a word.

    Every day, several times a day. 16 hex digit constants are very
    common in my work.  The digit separator really helps with readability,
    although I would have preferred '_' over "'".

    Oh, I thought C23 used '_', since Python uses that. I prefer single
    quote as that is not shifted on my keyboard. (My language projects just
    allow both!)

    C++ uses single quotes - it is much more natural for C to copy C++ than
    to copy Python.


The fact that it is not widespread is a problem, however, so I can't use either without restricting the compilers that can be used.

    For example gcc 14.x on Windows accepts it with -std=c23 only; gcc on
    WSL doesn't; tcc doesn't.


    Surprisingly enough, this new C23 feature is only available when using
    C23 or later.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Janis Papanagnou on Thu Apr 3 09:49:01 2025
    On 02/04/2025 23:31, Janis Papanagnou wrote:
    On 02.04.2025 18:20, Scott Lurndal wrote:
    Muttley@dastardlyhq.com writes:
    On Wed, 2 Apr 2025 16:33:46 +0100
    bart <bc@freeuk.com> gabbled:
    On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
    Meh.

What's the problem with it? Here, tell me at a glance the magnitude of this number:

    10000000000

    And how often do you hard code values that large into a program? Almost
never I imagine unless it's some hex value to set flags in a word.

    I can't tell generally; it certainly depends on the application
    contexts.

    And of course for bases lower than 10 the numeric literals grow
    in length, so its usefulness is probably most obvious in binary
    literals. But why restrict a readability feature to binary only?

    It's useful and it doesn't hurt (WRT compatibility).


    Every day, several times a day. 16 hex digit constants are very
    common in my work. The digit separator really helps with readability,
    although I would have preferred '_' over "'".

    Obviously a question of opinion depending on where one comes from.

    I see a couple options for the group separator. Spaces (as used in
    Algol 68) are probably most readable, but maybe a no-go in "C".
Locale-specific separators (dot and comma vs. comma and dot, in
fractional numbers) are problematic, and commas carry semantics of
their own.
    The single quote is actually what I found well suited in the past;
    it stems (I think) from the convention used in Switzerland. The
    underscore you mention didn't occur to me as an option, but it's not
    bad either.


    Once you have eliminated punctuation that would already have a different meaning in the syntax of the language, you very quickly get down to
    three choices, AFAICS - underscore, single quote or double quote. When
    C++ added digit separators, they had already used underscore for
    user-defined literals, so that was ruled out. I don't know if double
    quotation marks could have been used, or they were ruled out for other
    reasons, but C++ settled on single quotation marks. Then C followed
    suit, because re-inventing an incompatible wheel would have been insane.

    With hindsight, it might have been nicer to use underscore for digit
    separators and single quote marks for user-defined literals in C++ (in
    the manner of attributes in Ada), but the fact that underscore is
    effectively a letter in C and C++ was very relevant for its choice in user-defined literals.
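    To make the discussion above concrete, here is a minimal sketch. The
    literals could be written with C23's new separator (as noted in the
    comments); they are kept in plain form so the snippet also compiles as
    C99/C11 - the variable names are illustrative, not from any standard.

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* With a C23 compiler (-std=c23) these could be written as
       10'000'000'000 and 0xFFFF'FFFF'0000'0000 for readability. */
    static const long long pop  = 10000000000LL;       /* C23: 10'000'000'000 */
    static const uint64_t  mask = 0xFFFFFFFF00000000u; /* C23: 0xFFFF'FFFF'0000'0000 */

    int main(void) {
        assert(pop == 10 * 1000000000LL);
        assert((mask >> 32) == 0xFFFFFFFFu);  /* top 32 bits all set */
        return 0;
    }
    ```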

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Thu Apr 3 10:04:22 2025
    On 02/04/2025 20:26, bart wrote:
    On 02/04/2025 18:29, David Brown wrote:
    On 02/04/2025 17:38, bart wrote:
    On 02/04/2025 16:26, Muttley@DastardlyHQ.org wrote:

    Never understood that in C++ never mind C. NULL has worked fine for
    50 years.

    And it's been a hack for 50 years. Especially when it is just:

       #define NULL 0


    The common definition in C is :

         #define NULL ((void*) 0)

    Some compilers might have an extension, such as gcc's "__null", that
    are used instead to allow better static error checking.

    (In C++, it is often defined to 0, because the rules for implicit
    conversions from void* are different in C++.)


    You also need to include some header (which one?) in order to use it.

    <stddef.h>, as pretty much any C programmer will know.

    This program:

      void* p = NULL;

    reports that NULL is undefined, but that can be fixed by including any
    of stdio.h, stdlib.h or string.h. Those are the first three I tried;
    there may be others.

    So it is not true that you need include stddef.h, nor obvious that that
    is where NULL is defined, if you are used to having it available
    indirectly.


    Fair enough - it is correct that there are some other standard headers
    that also define NULL (and/or a few other common identifiers such as
    size_t or wchar_t). The standard source of these common definitions,
    without pulling in a range of other identifiers, is <stddef.h>. (That
    is where it is documented in the standards.)

    It doesn't matter to C programmers /where/ NULL is defined, or how - it
    matters merely that it is defined when they need it, and what it means.
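    A minimal sketch of the point being made: NULL is canonically provided
    by <stddef.h> (whatever other headers may also expose it), and in C it
    is typically ((void*)0), which converts implicitly to any object
    pointer type.

    ```c
    #include <assert.h>
    #include <stddef.h>  /* the documented home of NULL */

    int main(void) {
        void *p = NULL;
        int  *q = p;   /* implicit void* -> int* conversion is fine in C */
        assert(p == NULL);
        assert(q == NULL);
        return 0;
    }
    ```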

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to David Brown on Thu Apr 3 10:15:17 2025
    On 03.04.2025 09:38, David Brown wrote:
    On 03/04/2025 05:43, Janis Papanagnou wrote:

    In many other languages you have abstractions of numeric types, not
    every implicit encoding variant revealed at the programming level.

    That's often fine within a program, but sometimes you need to exchange
    data with other programs. In particular, C is the standard language for common libraries - being able to reliably and consistently exchange data
    with other languages and other machines is thus very important.

    I consider this an important point! - My background is a bit different,
    though; I was working in system environments not restricted to single languages, single OSes, or systems originating from the same vendor
    or even the same country. For data exchange it was important to have
    a standard transfer syntax independent of the data types of a specific programming language. - Don't get me wrong; in the past I've also used
    those byte-swapping library functions (for endianness), sometimes
    object serialization, but also CORBA, and preferably even ASN.1 (with
    an associated transfer syntax). - Being platform/language independent
    requires of course an abstraction layer.

    This "flexibility" of various sorts of numeric "subtypes", be it in
    Fortran, Algol 68, or "C", always appeared odd to me. Things like the
    ranged types (say as Pascal or Ada provided) seemed more appropriate
    to me for a high-level language.

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Keith Thompson on Thu Apr 3 10:16:00 2025
    On 03/04/2025 02:41, Keith Thompson wrote:
    scott@slp53.sl.home (Scott Lurndal) writes:
    bart <bc@freeuk.com> writes:
    On 02/04/2025 18:29, David Brown wrote:
    On 02/04/2025 17:38, bart wrote:
    [...]
    You also need to include some header (which one?) in order to use it.
    <stddef.h>, as pretty much any C programmer will know.

    This program:

    void* p = NULL;

    reports that NULL is undefined, but that can be fixed by including any
    of stdio.h, stdlib.h or string.h. Those are the first three I tried;
    there may be others.

    So it is not true that you need include stddef.h, nor obvious that that
    is where NULL is defined, if you are used to having it available indirectly.

    Indeed, and it is well documented.

    For example, in the POSIX description for the string functions you'll
    find the following statement:

    [CX] Inclusion of the <string.h> header may also make visible all
    symbols from <stddef.h>. [Option End]

    This is true for a number of POSIX headers, including those you enumerate
    above.

    [CX] marks a POSIX extension to ISO C.

    Interesting. The C standard says that <string.h> defines NULL and
    size_t, both of which are also defined in <stddef.h>. A number of other symbols from <stddef.h> are also defined in other headers. A conforming implementation may not make any other declarations from <stddef.h>
    visible as a result of including <string.h>. I wonder why POSIX has
    that "extension".


    The documentation quoted by Scott says "may". To me, it seems pretty
    obvious why they have this. It means that their definition of
    <string.h> can start with

    #include <stddef.h>

    rather than go through the merry dance of conditional compilation,
    defining and undefining these macros, "__null_is_defined" macros, and
    the rest of it. This all gets particularly messy when some standard
    headers (generally those that are part of "freestanding" C - including <stddef.h>) often come with the compiler, while other parts (like
    <string.h>) generally come with the library. On some systems, these two
    parts are from entirely separate groups, and may use different
    conventions for their various "__is_defined" tag macros.

    But by writing "may", it is clear that some setups may define the
    identifiers solely according to the C standards documentation. Thus you
    should not rely on an inclusion of <string.h> providing the "offsetof"
    macro - but equally, you should not rely on it /not/ providing that macro.
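    The safe practice described above can be sketched as follows: rather
    than relying on <string.h> happening to expose <stddef.h> symbols,
    include <stddef.h> explicitly when you need offsetof, size_t, or NULL.
    The struct here is a made-up example.

    ```c
    #include <assert.h>
    #include <stddef.h>  /* explicitly included for offsetof and size_t */

    struct packet { char tag; double payload; };

    int main(void) {
        size_t off = offsetof(struct packet, payload);
        /* payload sits after tag; the exact offset is ABI-dependent,
           but it must lie within the struct */
        assert(off >= sizeof(char));
        assert(off < sizeof(struct packet));
        return 0;
    }
    ```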

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Thu Apr 3 08:45:42 2025
    On Wed, 02 Apr 2025 16:16:27 GMT
    scott@slp53.sl.home (Scott Lurndal) wibbled:
    Muttley@DastardlyHQ.org writes:
    On Wed, 2 Apr 2025 16:59:45 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
    ist first.

    18. "unreachable()" is now standard.

    Googled it - don't see the point.

    That's a defect in your understanding, not a defect in the standard.

    I've found the gcc equivalent useful often in standalone
    applications (OS, Hypervisor, standalone utilities, etc).

    Enlighten me then.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Kaz Kylheku on Thu Apr 3 10:23:40 2025
    On 02/04/2025 20:51, Kaz Kylheku wrote:
    On 2025-04-02, bart <bc@freeuk.com> wrote:
    So it is not true that you need include stddef.h, nor obvious that that
    is where NULL is defined, if you are used to having it available indirectly.

    It's documented as the canonical source of NULL.

    In C90, now 35 years ago, it was written up like this:

    7.1.6 Common definitions <stddef.h>

    The following types and macros are defined in the standard header
    <stddef.h>. Some are also defined in other headers, as noted in their
    respective subclauses.

    ...

    The macros are

    NULL
    which expands to an implementation-defined null pointer constant: and

    offsetof(type, member-designator)

    ... etc

    There is no other easy way to find that out. An implementation could directly stick #define NULL into every header that is either allowed or required to reveal that macro, and so from that you would not know which headers are required to provide it.

    Many things are not going to be "obvious" if you don't use documentation.

    (In my opinion, things would be better if headers were not allowed to behave as
    if they include other headers, or provide identifiers also given in other headers. Not in ISO C, and not in POSIX. Every identifier should be declared in
    exactly one home header, and no other header should provide that definition. Programs ported from one Unix to another sometimes break for no other reason than this! On the original platform, a header provided a certain identifier which it is not required to; on the new platform that same header doesn't do that.)


    IMHO, it would be better if headers were explicitly defined as including
    other headers. The documentation of <string.h> should say that it
    includes <stddef.h>. That way everything is defined in one and only one header, in a clear manner, without forcing users to manually include
    extra headers in a specific order.

    In particular, since <stddef.h> contains "common definitions", it would
    be natural to say that /all/ standard library headers include it. That
    would simplify implementations, and simplify the standards documents,
    and I would be very surprised if it led to any real code conflicts.

    After that, there are only a few cases where one standard library header
    would need to include another one - such as <inttypes.h> including
    <stdint.h>.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Thu Apr 3 08:46:39 2025
    On Wed, 02 Apr 2025 16:20:05 GMT
    scott@slp53.sl.home (Scott Lurndal) wibbled:
    Muttley@dastardlyhq.com writes:
    On Wed, 2 Apr 2025 16:33:46 +0100
    bart <bc@freeuk.com> gabbled:
    On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
    Meh.

    What's the problem with it? Here, tell me at a glance the magnitude of this number:

    10000000000

    And how often do you hard-code values that large into a program? Almost never, I imagine, unless it's some hex value to set flags in a word.

    Every day, several times a day. 16 hex digit constants are very
    common in my work. The digit separator really helps with readability,

    Oh really? What are you doing, hardcoding password hashes?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Thu Apr 3 08:49:49 2025
    On Wed, 2 Apr 2025 19:23:58 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 17:26, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 16:59:45 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
    I suspect the people who are happy with C never have any correspondence with
    anyone from the committee so they get an entirely biased sample. Just like its usually only people who had a bad experience that fill in "How did we do"

    surveys.

    And I suspect that you haven't a clue who the C standards committee talk >>> to - and who those people in turn have asked.

    By inference you do - so who are they?

    That's an unwarranted inference. I assume that they talk with compiler

    You *assume* "they", whoever they are. Oh well, that's definitive then.

    11. nullptr for clarity and safety.

    Never understood that in C++ never mind C. NULL has worked fine for 50 years.


    If ignorance really is bliss, you must be the happiest person around.
    Or you can read one of my other posts pointing out the advantages of
    nullptr.

    Compile them into a book and publish it. In the meantime I have better things to do than trawl back through god knows how many posts to find them.

    A number of these changes did come over from C++, yes. That does not
    mean they are not useful or wanted in C - it means the C world is happy
    to let C++ go first, then copy what has been shown to be useful. I
    think that is a good strategy.

    Some people (including me) will choose to use C++, but others prefer to
    (or are required to) use C.

    I can't imagine many situations outside of maybe specialist hardware scenarios where the C compiler isn't also a C++ compiler.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Thu Apr 3 08:55:15 2025
    On Thu, 3 Apr 2025 05:09:21 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wibbled:
    On 2025-04-03, bart <bc@freeuk.com> wrote:
    Oh, I thought C23 used '_', since Python uses that. I prefer single
    quote as that is not shifted on my keyboard. (My language projects just
    allow both!)

    I made , (comma) the digit separator in TXR Lisp. Nobody uses _ in the
    real world.

    Not a good idea, because in a lot of the non-English-speaking world the comma and point swap places in writing numbers, e.g.: 100,000.123 becomes 100.000,123

    Most americans are probably blissfully unaware of that.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Thu Apr 3 08:51:03 2025
    On Wed, 2 Apr 2025 13:09:08 -0700
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
    On 4/2/2025 8:16 AM, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 14:12:18 -0000 (UTC)
    antispam@fricas.org (Waldek Hebisch) wibbled:
    Muttley@dastardlyhq.org wrote:
    On Wed, 2 Apr 2025 10:57:29 +0100
    bart <bc@freeuk.com> wibbled:
    On 02/04/2025 06:59, Alexis wrote:

    Thought people here might be interested in this image on Jens Gustedt's >>>>>> blog, which translates section 6.2.5, "Types", of the C23 standard >>>>>> into a graph of inclusions:


    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/


    So much for C being a 'simple' language.

    C should be left alone. It does what it needs to do for a systems language.

    Almost no one uses it for applications any more, and sophisticated processing
    using complex types, for example, is far better done in C++.

    C99 has VM types (variably modified types). Thanks to VM types and complex types
    C99 can naturally do numeric computing that previously was done using
    Fortran 77. Official C++ has no VM types. C++ mechanisms look nicer,

    Officially no, but I've never come across a C++ compiler that didn't support them, given they're all C compilers too.

    All C++ compilers are also C compilers?

    Name a current one (ie not a cross compiler from the 90s) that isn't.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael S on Thu Apr 3 10:59:33 2025
    On 02/04/2025 22:24, Michael S wrote:
    On Wed, 2 Apr 2025 16:38:03 +0100
    bart <bc@freeuk.com> wrote:

    On 02/04/2025 16:26, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 16:59:45 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
    I suspect the people who are happy with C never have any
    correspondence with anyone from the committee so they get an
    entirely biased sample. Just like its usually only people who had
    a bad experience that fill in "How did we do"

    surveys.

    And I suspect that you haven't a clue who the C standards
    committee talk to - and who those people in turn have asked.

    By inference you do - so who are they?

    11. nullptr for clarity and safety.

    Never understood that in C++ never mind C. NULL has worked fine for
    50 years.

    And it's been a hack for 50 years. Especially when it is just:

    #define NULL 0

    You also need to include some header (which one?) in order to use it.
    I'd hope you wouldn't need to do that for nullptr, but backwards
    compatibility may require it (because of any forward-thinking
    individuals who have already defined their own 'nullptr').



    C23 is rather bold in that regard, adding non-underscored keywords as
    if there was no yesterday. IMHO, for no good reasons.


    It is bold, perhaps, but there are certainly good reasons. As far as I
    can see we have some keywords that have dropped their underscore-capital
    form:

    alignas
    alignof
    bool
    static_assert
    thread_local

    And we have some new ones :

    constexpr
    false
    nullptr
    true
    typeof
    typeof_unqual

    (Other new keywords, such as _Decimal32, have the underscore-capital form.)

    While it is a good idea to avoid new non-reserved identifier keywords
    for compatibility, it would also be getting a bit silly to add <stdconstexpr.h>, <stdnullptr.h>, etc., to the existing <stdalign.h>, <stdbool.h>, etc. It is an inconvenience for programmers to have to
    pull in a dozen extra headers just to be able to use new standard
    language features in a nice manner.


    This does mean that some pre-C23 code will be incompatible with C23. It
    is like C99 in that regard - it is a significantly bigger change than
    C17 was. But in my opinion, it is good to see that C23 places a bit
    more relevance on newer code going forward, even though it is at the
    cost of some older code (particularly code written to pre-C99 standards).
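    One concrete motivation for the nullptr keyword in the list above is
    variadic calls, where the argument's type (and width) matters. The
    helper below is a made-up illustration: a bare 0 would be passed as
    int, whereas ((void*)0) - or, in C23, nullptr - has pointer width.

    ```c
    #include <assert.h>
    #include <stdarg.h>
    #include <stddef.h>

    /* Hypothetical helper: returns the index of the first null pointer
       among n const char* arguments, or n if none is null. */
    static int first_null_index(int n, ...) {
        va_list ap;
        int i;
        va_start(ap, n);
        for (i = 0; i < n; i++) {
            const char *s = va_arg(ap, const char *);  /* reads a pointer */
            if (s == NULL)
                break;
        }
        va_end(ap);
        return i;
    }

    int main(void) {
        /* Passing a typed null, not a bare 0, keeps this well-defined. */
        assert(first_null_index(3, "a", (void *)0, "b") == 1);
        assert(first_null_index(2, "a", "b") == 2);
        return 0;
    }
    ```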

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Janis Papanagnou on Thu Apr 3 11:03:02 2025
    On 02/04/2025 23:43, Janis Papanagnou wrote:
    On 02.04.2025 16:59, David Brown wrote:
    [...]

    From the next version beyond C23, so far there is :

    1. Declarations in "if" and "switch" statements, like those in "for"
    loops, helps keep local variable scopes small and neat.

    Oh, I thought that would already be supported in some existing "C"
    version for the 'if'; I probably confused that with C++.


    C++17 has it.

    I guess the C committee waited until C++17 had been common enough that
    they could see if it was useful in real code, and if it led to any
    unexpected problems in code or compilers before copying it for C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Muttley@DastardlyHQ.org on Thu Apr 3 11:41:31 2025
    On 03/04/2025 10:45, Muttley@DastardlyHQ.org wrote:
    On Wed, 02 Apr 2025 16:16:27 GMT
    scott@slp53.sl.home (Scott Lurndal) wibbled:
    Muttley@DastardlyHQ.org writes:
    On Wed, 2 Apr 2025 16:59:45 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
    ist first.

    18. "unreachable()" is now standard.

    Googled it - don't see the point.

    That's a defect in your understanding, not a defect in the standard.

    I've found the gcc equivalent useful often in standalone
    applications (OS, Hypervisor, standalone utilities, etc).

    Enlighten me then.


    I can't tell you what Scott uses it for, but I have used gcc's __builtin_unreachable() a fair number of times in my coding. I use it
    to inform both the compiler and human readers that a path is unreachable:

    switch (x) {
    case 1 : ...
    case 2 : ...
    case 3 : ...
    default : __builtin_unreachable();
    }

    I can also use it to inform the compiler about data :

    if ((x < 0) || (x > 10)) __builtin_unreachable();
    // x must be 1 .. 10

    Mostly I have it wrapped in macros that let me conveniently have
    run-time checking during testing or debugging, and extra efficiency in
    the code when I am confident it is bug-free.

    Good use of __builtin_unreachable() can result in smaller and faster
    code, and possibly improved static error checking. It is related to the
    C++23 "assume" attribute (which is also available as a gcc extension in
    any C and C++ version).
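    The switch example above can be sketched in compilable form. The
    MY_UNREACHABLE macro is our own portable stand-in for C23's
    unreachable() / gcc's __builtin_unreachable(), not a standard name.

    ```c
    #include <assert.h>

    /* Portable stand-in: real builtin under GCC/Clang, no-op elsewhere. */
    #if defined(__GNUC__)
    #  define MY_UNREACHABLE() __builtin_unreachable()
    #else
    #  define MY_UNREACHABLE() ((void)0)
    #endif

    /* The default branch is marked unreachable, so the compiler may drop
       range checks and emit a tighter jump table. */
    static int quadrant_sign(int q) {   /* caller guarantees q is 0..3 */
        switch (q) {
        case 0: return +1;
        case 1: return -1;
        case 2: return -1;
        case 3: return +1;
        default: MY_UNREACHABLE();
        }
        return 0;  /* not reached; keeps non-GNU compilers quiet */
    }

    int main(void) {
        assert(quadrant_sign(0) == 1);
        assert(quadrant_sign(1) == -1);
        assert(quadrant_sign(3) == 1);
        return 0;
    }
    ```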

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Thu Apr 3 11:07:56 2025
    On Thu, 3 Apr 2025 11:41:31 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 03/04/2025 10:45, Muttley@DastardlyHQ.org wrote:
    On Wed, 02 Apr 2025 16:16:27 GMT
    scott@slp53.sl.home (Scott Lurndal) wibbled:
    Muttley@DastardlyHQ.org writes:
    On Wed, 2 Apr 2025 16:59:45 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
    ist first.

    18. "unreachable()" is now standard.

    Googled it - don't see the point.

    That's a defect in your understanding, not a defect in the standard.

    I've found the gcc equivalent useful often in standalone
    applications (OS, Hypervisor, standalone utilities, etc).

    Enlighten me then.


    I can't tell you what Scott uses it for, but I have used gcc's __builtin_unreachable() a fair number of times in my coding. I use it
    to inform both the compiler and human readers that a path is unreachable:

    What for? The compiler doesn't care and a human reader would probably
    prefer a meaningful comment if its not obvious. If you're worried about the code accidently going there use an assert.

    switch (x) {
    case 1 : ...
    case 2 : ...
    case 3 : ...
    default : __builtin_unreachable();
    }

    I can also use it to inform the compiler about data :

    if ((x < 0) || (x > 10)) __builtin_unreachable();
    // x must be 1 .. 10

    And that'll do what? You want the compiler to compile in a hidden value check?

    Good use of __builtin_unreachable() can result in smaller and faster
    code, and possibly improved static error checking. It is related to the

    Sorry, don't see how. If you think a piece of code is unreachable then don't put it in in the first place!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Alexis on Thu Apr 3 15:02:10 2025
    On Wed, 02 Apr 2025 16:59:59 +1100
    Alexis <flexibeast@gmail.com> wrote:

    Thought people here might be interested in this image on Jens
    Gustedt's blog, which translates section 6.2.5, "Types", of the C23
    standard into a graph of inclusions:

    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/


    Alexis.

    That's a little disappointing.
    IMHO, C23 should have added optional types _Binary32, _Binary64,
    _Binary128 and _Binary256 that designate their IEEE-754 namesakes.
    Plus, a mandatory requirement that if compiler supports any of IEEE-754
    binary types then they have to be accessible by above-mentioned names.
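    The gap described above can be checked indirectly today: there is no
    standard _Binary32/_Binary64 name, but the characteristic parameters of
    the IEEE-754 formats can be verified via <float.h>. A sketch, assuming
    the common case where float and double are binary32 and binary64:

    ```c
    #include <assert.h>
    #include <float.h>

    int main(void) {
        /* binary32: 24-bit significand, max exponent 128 */
        assert(FLT_MANT_DIG == 24 && FLT_MAX_EXP == 128);
        /* binary64: 53-bit significand, max exponent 1024 */
        assert(DBL_MANT_DIG == 53 && DBL_MAX_EXP == 1024);
        return 0;
    }
    ```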

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to David Brown on Thu Apr 3 14:45:21 2025
    On Thu, 3 Apr 2025 11:41:31 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 03/04/2025 10:45, Muttley@DastardlyHQ.org wrote:
    On Wed, 02 Apr 2025 16:16:27 GMT
    scott@slp53.sl.home (Scott Lurndal) wibbled:
    Muttley@DastardlyHQ.org writes:
    On Wed, 2 Apr 2025 16:59:45 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
    ist first.

    18. "unreachable()" is now standard.

    Googled it - don't see the point.

    That's a defect in your understanding, not a defect in the
    standard.

    I've found the gcc equivalent useful often in standalone
    applications (OS, Hypervisor, standalone utilities, etc).

    Enlighten me then.


    I can't tell you what Scott uses it for, but I have used gcc's __builtin_unreachable() a fair number of times in my coding. I use
    it to inform both the compiler and human readers that a path is
    unreachable:

    switch (x) {
    case 1 : ...
    case 2 : ...
    case 3 : ...
    default : __builtin_unreachable();
    }

    I can also use it to inform the compiler about data :

    if ((x < 0) || (x > 10)) __builtin_unreachable();
    // x must be 1 .. 10

    Mostly I have it wrapped in macros that let me conveniently have
    run-time checking during testing or debugging, and extra efficiency
    in the code when I am confident it is bug-free.

    Good use of __builtin_unreachable() can result in smaller and faster
    code, and possibly improved static error checking. It is related to
    the C++23 "assume" attribute (which is also available as a gcc
    extension in any C and C++ version).



    In theory, compilers can use unreachable() to generated better code.
    In practice, every single time I looked at compiler output, it made no difference.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Opus@21:1/5 to Muttley@DastardlyHQ.org on Thu Apr 3 15:05:59 2025
    On 03/04/2025 10:51, Muttley@DastardlyHQ.org wrote:
    All C++ compilers are also C compilers?

    Name a current one (ie not a cross compiler from the 90s) that isn't.

    Most compilers handling both C and C++ sure have a common code base, but
    why does it matter? C and C++ are two different languages with a
    different standard and quite a few different behaviors and even accepted syntax. C has not been a "subset" of C++ for a very long time, although
    this is something still said on a regular basis. It was maybe true in
    the early days of C++ but hasn't been in ages.

    You're probably referring to the C++ front-end of GCC and Clang (which
    strives to support the same things as GCC to be a drop-in replacement),
    which supports compiler-specific extensions for both C and C++, some of
    them borrowing from one another (like C getting some features that were
    only available in C++, and conversely). But that's not standard C or
    C++, so that point is kind of moot. If you want to write
    standard-compliant code only, most of what's been added in C since C99
    is not available in C++. For instance, if I'm not mistaken, designated initializers, which are very handy and have been available in C since
    C99 (25 years ago), appeared only in C++20, about 20 years later.

    "Interestingly", committees seem to differ largely on the topic: the C++ committee has been promoting making C a strict subset of C++ for years,
    while the C committee is a lot less enthused by that idea. C does
    occasionally and slowly borrow some features from C++ when they do bring
    value without breaking C, but that's pretty much the extent of it. As of
    2025, making C a strict standardized subset of C++ would benefit neither.
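    The designated initializers mentioned above, standard C since C99, look
    like this: you name the members (or array indices) you set, and the
    rest are zero-initialized. The struct and array are illustrative.

    ```c
    #include <assert.h>

    struct point { int x, y, z; };

    int main(void) {
        struct point p = { .y = 2, .z = 3 };  /* .x defaults to 0 */
        int lut[8] = { [0] = 1, [7] = 1 };    /* array designators too */
        assert(p.x == 0 && p.y == 2 && p.z == 3);
        assert(lut[0] == 1 && lut[3] == 0 && lut[7] == 1);
        return 0;
    }
    ```

    C++20 adopted only a narrower form (members must appear in declaration
    order, and array designators are not allowed), which is part of why C
    is not a strict subset of C++.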

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Thu Apr 3 13:49:48 2025
    On 03/04/2025 09:59, David Brown wrote:
    On 02/04/2025 22:24, Michael S wrote:
    On Wed, 2 Apr 2025 16:38:03 +0100
    bart <bc@freeuk.com> wrote:

    On 02/04/2025 16:26, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 16:59:45 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
    I suspect the people who are happy with C never have any
    correspondence with anyone from the committee so they get an
    entirely biased sample. Just like its usually only people who had
    a bad experience that fill in "How did we do"
    surveys.

    And I suspect that you haven't a clue who the C standards
    committee talk to - and who those people in turn have asked.

    By inference you do - so who are they?
    11. nullptr for clarity and safety.

    Never understood that in C++ never mind C. NULL has worked fine for
    50 years.

    And it's been a hack for 50 years. Especially when it is just:

        #define NULL 0

    You also need to include some header (which one?) in order to use it.
    I'd hope you wouldn't need to do that for nullptr, but backwards
    compatibility may require it (because of any forward-thinking
    individuals who have already defined their own 'nullptr').



    C23 is rather bold in that regard, adding non-underscored keywords as
    if there was no yesterday. IMHO, for no good reasons.


    It is bold, perhaps, but there are certainly good reasons.

    Perhaps go bolder and drop the need to explicitly include those 30 or so standard headers. It's ridiculous having to micro-manage the availability
    of fundamental language features ('uint8_t' for example!) in every module.

    When I suggested this in the past, people were up in arms about the
    overheads of having to compile all those headers (in 2017, they were
    3-5K lines in all for gcc on Windows/Linux).

    Yet the same people think nothing of using libraries like SDL2 (50K
    lines of headers) or GTK2 (350K lines).

    This does mean that some pre-C23 code will be incompatible with C23.

    This was also my view in the past, to draw a line under 'old' C and to
    start using 'new' C.

    I understand C23 mode will be enabled by a compiler option (-std=c23);
    the same method could have been used to enable all std headers, and for
    that to be the default.

    Hello World then becomes this one-liner:

    int main() {puts("Hello, World!");}

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Muttley@DastardlyHQ.org on Thu Apr 3 15:16:30 2025
    On 03/04/2025 10:49, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 19:23:58 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 17:26, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 16:59:45 +0200
    David Brown <david.brown@hesbynett.no> wibbled:


    11. nullptr for clarity and safety.

    Never understood that in C++ never mind C. NULL has worked fine for 50 years.


    If ignorance really is bliss, you must be the happiest person around.
    Or you can read one of my other posts pointing out the advantages of
    nullptr.

    Compile them into a book and publish it. In the meantime I have better things to do than trawl back through god knows how many posts to find them.

    It was in this thread!


    A number of these changes did come over from C++, yes. That does not
    mean they are not useful or wanted in C - it means the C world is happy
    to let C++ go first, then copy what has been shown to be useful. I
    think that is a good strategy.

    Some people (including me) will choose to use C++, but others prefer to
    (or are required to) use C.

    I can't imagine many situations outside of maybe specialist hardware scenarios
    where the C compiler isn't also a C++ compiler.


    It is fair to say that the most used C compilers - gcc and clang - are
    usually combined with C++ compilers. (The other big C++ compiler, MSVC, doesn't have decent modern C support.) But that does /not/ mean that
    people who want a bit more than older C standards support will want to
    compile their C code with a C++ compiler! There are countless reasons
    why that is an unrealistic idea.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Thu Apr 3 13:22:26 2025
    On Thu, 3 Apr 2025 15:16:30 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 03/04/2025 10:49, Muttley@DastardlyHQ.org wrote:
    If ignorance really is bliss, you must be the happiest person around.
    Or you can read one of my other posts pointing out the advantages of
    nullptr.

    Compile them into a book and publish it. In the meantime I have better things
    to do than trawl back through god knows how many posts to find them.

    It was in this thread!

    I don't read entire threads. I have work to do.

    I can't imagine many situations outside of maybe specialist hardware scenarios
    where the C compiler isn't also a C++ compiler.


    It is fair to say that the most used C compilers - gcc and clang - are
    usually combined with C++ compilers. (The other big C++ compiler, MSVC,
    doesn't have decent modern C support.) But that does /not/ mean that
    people who want a bit more than older C standards support will want to
    compile their C code with a C++ compiler! There are countless reasons
    why that is an unrealistic idea.

    Apart from a few very minor edge cases where C and C++ differ but won't
    concern 99.999% of devs anyway, why not?

  • From Muttley@DastardlyHQ.org@21:1/5 to All on Thu Apr 3 13:19:52 2025
    On Thu, 3 Apr 2025 15:05:59 +0200
    Opus <ifonly@youknew.org> wibbled:
    On 03/04/2025 10:51, Muttley@DastardlyHQ.org wrote:
    All C++ compilers are also C compilers?

    Name a current one (ie not a cross compiler from the 90s) that isn't.

    Most compilers handling both C and C++ sure have a common code base, but
    why does it matter? C and C++ are two different languages with a
    different standard and quite a few different behaviors and even accepted
    syntax. C has not been a "subset" of C++ for a very long time, although
    this is something still said on a regular basis. It was maybe true in
    the early days of C++ but hasn't been in ages.

    You're probably referring to the C++ front-end of GCC and Clang (which
    strives to support the same things as GCC to be a drop-in replacement),
    which supports compiler-specific extensions for both C and C++, some of
    them borrowing from one another (like C getting some features that were
    only available in C++, and conversely). But that's not standard C or
    C++, so that point is kind of moot. If you want to write
    standard-compliant code only, most of what's been added in C since C99
    is not available in C++. For instance, if I'm not mistaken, designated
    initializers, which are very handy and have been available in C since
    C99 (25 years ago) have appeared only in C++20, about 20 years later.

    "Interestingly", committees seem to differ largely on the topic: the C++
    committee has been promoting making C a strict subset of C++ for years,
    while the C committee is a lot less enthused by that idea. C does
    occasionally and slowly borrow some features from C++ when they do bring
    value without breaking C, but that's pretty much the extent of it. As of
    2025, making C a strict standardized subset of C++ would benefit neither.

    The point is it doesn't really matter any more. Using a modern C++ compiler
    you can pick and choose which bits you want to use. Eg if you want to write pure C except for constexpr you can already do that. Ditto binary literals etc. Writing pure C that will compile on a C only compiler is an interesting intellectual exercise but in the real world an irrelevance. Eg the linux
    kernel whiel written in C has been using gcc specific extensions for decades.

  • From David Brown@21:1/5 to Keith Thompson on Thu Apr 3 15:23:18 2025
    On 03/04/2025 12:27, Keith Thompson wrote:
    David Brown <david.brown@hesbynett.no> writes:
    On 03/04/2025 02:41, Keith Thompson wrote:
    scott@slp53.sl.home (Scott Lurndal) writes:
    [...]
    For example, in the POSIX description for the string functions you'll
    find the following statement:
    [CX] Inclusion of the <string.h> header may also make visible all
    symbols from <stddef.h>. [Option End]

    This is true for a number of POSIX headers, including those you enumerate
    above.

    [CX] marks a POSIX extension to ISO C.
    Interesting. The C standard says that <string.h> defines NULL and
    size_t, both of which are also defined in <stddef.h>. A number of other
    symbols from <stddef.h> are also defined in other headers. A conforming
    implementation may not make any other declarations from <stddef.h>
    visible as a result of including <string.h>. I wonder why POSIX has
    that "extension".

    The documentation quoted by Scott says "may". To me, it seems pretty
    obvious why they have this. It means that their definition of
    <string.h> can start with

    #include <stddef.h>

    rather than go through the merry dance of conditional compilation,
    defining and undefining these macros, "__null_is_defined" macros, and
    the rest of it. This all gets particularly messy when some standard
    headers (generally those that are part of "freestanding" C - including
    <stddef.h>) often come with the compiler, while other parts (like
    <string.h>) generally come with the library. On some systems, these
    two parts are from entirely separate groups, and may use different
    conventions for their various "__is_defined" tag macros.

    Yes, implementers *may* be so lazy that they don't bother to define
    their standard headers in the manner required by the C standard.

    Building an implementation from separate parts can make things more difficult. That's no excuse for getting things wrong.

    Maybe you could have an implementation that conforms to POSIX without attempting to conform to ISO C, but POSIX is based on ISO C.

    I am not suggesting that I think it is a /good/ thing that POSIX has
    written this - merely that I think it is easy to understand why they
    might have done so.

    (I think it would have been better if the C standards had said that
    <string.h> includes <stddef.h> - but regardless of what the standards
    say, C implementations and standards based on them should follow those C standards by default. Extensions and non-conformities can be extremely
    useful, but should not be the default.)

  • From Michael S@21:1/5 to Opus on Thu Apr 3 16:27:04 2025
    On Thu, 3 Apr 2025 15:05:59 +0200
    Opus <ifonly@youknew.org> wrote:

    For instance, if I'm not mistaken,
    designated initializers, which are very handy and have been available
    in C since C99 (25 years ago) have appeared only in C++20, about 20
    years later.


    AFAIK, even C++23 provides only a subset of C99 designated initializers.
    The biggest difference is that in C++ initializers have to be
    specified in the same order as declarations for respective fields.

  • From Scott Lurndal@21:1/5 to Janis Papanagnou on Thu Apr 3 13:42:09 2025
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 03.04.2025 01:32, Scott Lurndal wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    [...]

    Obviously a question of opinion depending on where one comes from.

    Verilog uses _ as a digit separator.

    And Kornshell's 'printf' uses ',' for output formatting as in

    $ printf "%,d\n" 1234567
    1,234,567

    Maybe it should be configurable?

    It is already configurable in ksh

    $ LANG=en_US.utf8 printf "$%'10.2f\n" $(( ( 7540.0 * 118.70 ) + ( 2295.0 * 412.88 ) ))

    $1,842,557.60

  • From Scott Lurndal@21:1/5 to Kaz Kylheku on Thu Apr 3 13:43:53 2025
    Kaz Kylheku <643-408-1753@kylheku.com> writes:
    On 2025-04-03, bart <bc@freeuk.com> wrote:
    On 02/04/2025 17:20, Scott Lurndal wrote:
    Muttley@dastardlyhq.com writes:
    On Wed, 2 Apr 2025 16:33:46 +0100
    bart <bc@freeuk.com> gabbled:
    On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
    Meh.

    What's the problem with it? Here, tell me at a glance the magnitude of
    this number:

    10000000000

    And how often do you hard code values that large into a program? Almost
    never I imagine unless it's some hex value to set flags in a word.

    Every day, several times a day. 16 hex digit constants are very
    common in my work. The digit separator really helps with readability,
    although I would have preferred '_' over "'".

    Oh, I thought C23 used '_', since Python uses that. I prefer single
    quote as that is not shifted on my keyboard. (My language projects just
    allow both!)

    I made , (comma) the digit separator in TXR Lisp. Nobody uses _ in the
    real world.

    That's an incorrect statement. There's a large amount of Verilog
    code (both the RTL and the associated verification testbench code)
    that uses _ as a digit separator.

  • From Michael S@21:1/5 to bart on Thu Apr 3 16:44:37 2025
    On Thu, 3 Apr 2025 13:49:48 +0100
    bart <bc@freeuk.com> wrote:

    On 03/04/2025 09:59, David Brown wrote:

    It is bold, perhaps, but there are certainly good reasons.

    Perhaps go bolder and drop the need to explicitly include those 30 or
    so standard headers. It's ridiculous having to micro-manage the
    availability of fundamental language features ('uint8_t' for example!)
    in every module.

    I don't find it ridiculous.


    When I suggested this in the past, people were up in arms about the
    overheads of having to compile all those headers (in 2017, they were
    3-5K lines in all for gcc on Windows/Linux).


    Overhead is a smaller concern. Name clashes are bigger concern.

    Yet the same people think nothing of using libraries like SDL2 (50K
    lines of headers) or GTK2 (350K lines).

    This does mean that some pre-C23 code will be incompatible with
    C23.

    This was also my view in the past, to draw a line under 'old' C and
    to start using 'new' C.

    I understand C23 mode will be enabled by a compiler option
    (-std=c23);

    In 2025.
    The expectation is, however, that several years down the road it would
    be a default. Then people would have to specify compiler options in
    order to get an older standard. And at some point older standards will be
    dropped. Not only K&R and C90. C99 will be dropped as well. Not that I
    expect to live that long.

    the same method could have been used to enable all std
    headers, and for that to be the default.

    Hello World then becomes this one-liner:

    int main() {puts("Hello, World!");}



    Somehow I don't feel excited by the prospect.

  • From David Brown@21:1/5 to bart on Thu Apr 3 15:40:46 2025
    On 03/04/2025 14:49, bart wrote:
    On 03/04/2025 09:59, David Brown wrote:
    On 02/04/2025 22:24, Michael S wrote:
    On Wed, 2 Apr 2025 16:38:03 +0100
    bart <bc@freeuk.com> wrote:

    On 02/04/2025 16:26, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 16:59:45 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
    I suspect the people who are happy with C never have any
    correspondence with anyone from the committee so they get an
    entirely biased sample. Just like its usually only people who had >>>>>>> a bad experience that fill in "How did we do"
    surveys.

    And I suspect that you haven't a clue who the C standards
    committee talk to - and who those people in turn have asked.

    By inference you do - so who are they?
    11. nullptr for clarity and safety.

    Never understood that in C++ never mind C. NULL has worked fine for
    50 years.

    And it's been a hack for 50 years. Especially when it is just:

        #define NULL 0

    You also need to include some header (which one?) in order to use it.
    I'd hope you wouldn't need to do that for nullptr, but backwards
    compatibility may require it (because of any forward-thinking
    individuals who have already defined their own 'nullptr').



    C23 is rather bold in that regard, adding non-underscored keywords as
    if there was no yesterday. IMHO, for no good reasons.


    It is bold, perhaps, but there are certainly good reasons.

    Perhaps go bolder and drop the need to explicitly include those 30 or so standard headers. It's ridiculous having to micro-manage the availability
    of fundamental language features ('uint8_t' for example!) in every module.


    There's a difference between "bold" and "foolhardy". Backwards
    compatibility with existing code - and with the knowledge and experience
    of existing programmers - is vital to C. New C standards should only
    risk breakage if there is significant gain in relation to the risk of compatibility issues.

    It would be a different matter if C had namespaces and a good module
    system. Then C programmers could do as future C++ programmers will -
    just put "import std;" at the start of their code and pull in all they
    need of the standard library in an efficient and convenient manner. But
    C does not have namespaces, and does not have modules as such, so there
    is no such option here.

    When I suggested this in the past, people were up in arms about the
    overheads of having to compile all those headers (in 2017, they were
    3-5K lines in all for gcc on Windows/Linux).


    It's not the number of lines that matters (though it is surely /vastly/
    more than 5 KLocs). It is the namespace pollution. There are a very
    large number of functions in the C standard library - programmers should
    not have to worry about accidentally picking identifiers that coincide
    with those in standard library headers that they do not use.

    Yet the same people think nothing of using libraries like SDL2 (50K
    lines of headers) or GTK2 (350K lines).

    This does mean that some pre-C23 code will be incompatible with C23.

    This was also my view in the past, to draw a line under 'old' C and to
    start using 'new' C.

    Yes, to some extent - I think it is right that they have drawn a line,
    but good that the differences are small and unlikely to be an issue in
    real code. It should not take undue effort to write code that is
    compatible with C90 through C23, or any other subset of that range
    according to the new features that you want to use. It should not be
    difficult to modify existing working C90 code to work correctly with
    C23, though sometimes /some/ changes will be needed.

    (I am not saying that I agree 100% with the changes C23 made - I might
    have made more or fewer incompatibilities - but I agree with the
    principles.)


    I understand C23 mode will be enabled by a compiler option (-std=c23);
    the same method could have been used to enable all std headers, and for
    that to be the default.

    Hello World then becomes this one-liner:

      int main() {puts("Hello, World!");}



    That would be great for people writing "Hello, world" programs - but not
    for people writing /real/ world programs.

  • From Scott Lurndal@21:1/5 to David Brown on Thu Apr 3 13:48:04 2025
    David Brown <david.brown@hesbynett.no> writes:
    On 03/04/2025 02:41, Keith Thompson wrote:
    scott@slp53.sl.home (Scott Lurndal) writes:
    bart <bc@freeuk.com> writes:
    On 02/04/2025 18:29, David Brown wrote:
    On 02/04/2025 17:38, bart wrote:
    [...]
    You also need to include some header (which one?) in order to use it.
    <stddef.h>, as pretty much any C programmer will know.

    This program:

    void* p = NULL;

    reports that NULL is undefined, but that can be fixed by including any
    of stdio.h, stdlib.h or string.h. Those are the first three I tried;
    there may be others.

    So it is not true that you need to include stddef.h, nor obvious that that
    is where NULL is defined, if you are used to having it available indirectly.

    Indeed, and it is well documented.

    For example, in the POSIX description for the string functions you'll
    find the following statement:

    [CX] Inclusion of the <string.h> header may also make visible all
    symbols from <stddef.h>. [Option End]

    This is true for a number of POSIX headers, including those you enumerate
    above.

    [CX] marks a POSIX extension to ISO C.

    Interesting. The C standard says that <string.h> defines NULL and
    size_t, both of which are also defined in <stddef.h>. A number of other
    symbols from <stddef.h> are also defined in other headers. A conforming
    implementation may not make any other declarations from <stddef.h>
    visible as a result of including <string.h>. I wonder why POSIX has
    that "extension".


    The documentation quoted by Scott says "may". To me, it seems pretty
    obvious why they have this. It means that their definition of
    <string.h> can start with

    #include <stddef.h>

    rather than go through the merry dance of conditional compilation,
    defining and undefining these macros, "__null_is_defined" macros, and
    the rest of it.

    I think the answer is much simpler than that - standardizing the
    behavior of existing unix implementations at the time was an
    important consideration.

  • From Scott Lurndal@21:1/5 to James Kuyper on Thu Apr 3 13:53:12 2025
    James Kuyper <jameskuyper@alumni.caltech.edu> writes:
    On 4/2/25 14:02, Kaz Kylheku wrote:
    ...
    When a thing exists, the job of the standard is to standardize what
    exists, and not invent some caricature of it.

    In the process of standardization, the committee is supposed to exercise
    its judgement, and if that judgement says that there's a better way to
    do something than the way for which there is existing practice, they have
    an obligation to correct the design of that feature accordingly.

    Ha. That type of altruism lasts only about a minute in a standards
    committee meeting. In reality, rather than breaking existing APIs,
    the standards committee will introduce new ones (posix_spawn, anyone?).

  • From Scott Lurndal@21:1/5 to Tim Rentsch on Thu Apr 3 13:51:34 2025
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    scott@slp53.sl.home (Scott Lurndal) writes:

    bart <bc@freeuk.com> writes:
    [...]
    So it is not true that you need to include stddef.h, nor obvious
    that that is where NULL is defined, if you are used to having it
    available indirectly.

    Indeed, and it is well documented.

    For example, in the POSIX description for the string functions
    you'll find the following statement:

    [CX] Inclusion of the <string.h> header may also make visible
    all symbols from <stddef.h>. [Option End]

    This is true for a number of POSIX headers, including those you
    enumerate above.

    [CX] marks a POSIX extension to ISO C.

    How strange. I don't know why anyone would ever want either to
    rely on or to take advantage of this property.

    Some existing unix implementations at the time the standard was adopted
    had that behavior and the committee was not willing to break existing implementations.

  • From David Brown@21:1/5 to Michael S on Thu Apr 3 16:03:18 2025
    On 03/04/2025 13:45, Michael S wrote:
    On Thu, 3 Apr 2025 11:41:31 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 03/04/2025 10:45, Muttley@DastardlyHQ.org wrote:
    On Wed, 02 Apr 2025 16:16:27 GMT
    scott@slp53.sl.home (Scott Lurndal) wibbled:
    Muttley@DastardlyHQ.org writes:
    On Wed, 2 Apr 2025 16:59:45 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
    ist first.

    18. "unreachable()" is now standard.

    Googled it - don't see the point.

    That's a defect in your understanding, not a defect in the
    standard.

    I've found the gcc equivalent useful often in standalone
    applications (OS, Hypervisor, standalone utilities, etc).

    Enlighten me then.


    I can't tell you what Scott uses it for, but I have used gcc's
    __builtin_unreachable() a fair number of times in my coding. I use
    it to inform both the compiler and human readers that a path is
    unreachable:

    switch (x) {
    case 1 : ...
    case 2 : ...
    case 3 : ...
    default : __builtin_unreachable();
    }

    I can also use it to inform the compiler about data :

    if ((x < 0) || (x > 10)) __builtin_unreachable();
    // x must be 1 .. 10

    Mostly I have it wrapped in macros that let me conveniently have
    run-time checking during testing or debugging, and extra efficiency
    in the code when I am confident it is bug-free.

    Good use of __builtin_unreachable() can result in smaller and faster
    code, and possibly improved static error checking. It is related to
    the C++23 "assume" attribute (which is also available as a gcc
    extension in any C and C++ version).



    In theory, compilers can use unreachable() to generate better code.
    In practice, every single time I looked at compiler output, it made no difference.


    In practice, almost every single time I used it, it made a difference to
    the generated code - because I regularly look at the generated code.
    Sometimes, however, it only makes a difference to the static error
    checking or human readers.

    Of course this will depend on the code you write, the compiler you have,
    the compiler options you use, and many other details.

  • From David Brown@21:1/5 to Muttley@DastardlyHQ.org on Thu Apr 3 15:58:05 2025
    On 03/04/2025 13:07, Muttley@DastardlyHQ.org wrote:
    On Thu, 3 Apr 2025 11:41:31 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 03/04/2025 10:45, Muttley@DastardlyHQ.org wrote:
    On Wed, 02 Apr 2025 16:16:27 GMT
    scott@slp53.sl.home (Scott Lurndal) wibbled:
    Muttley@DastardlyHQ.org writes:
    On Wed, 2 Apr 2025 16:59:45 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 02/04/2025 16:05, Muttley@DastardlyHQ.org wrote:
    ist first.

    18. "unreachable()" is now standard.

    Googled it - don't see the point.

    That's a defect in your understanding, not a defect in the standard.

    I've found the gcc equivalent useful often in standalone
    applications (OS, Hypervisor, standalone utilities, etc).

    Enlighten me then.


    I can't tell you what Scott uses it for, but I have used gcc's
    __builtin_unreachable() a fair number of times in my coding. I use it
    to inform both the compiler and human readers that a path is unreachable:

    What for? The compiler doesn't care and a human reader would probably
    prefer a meaningful comment if it's not obvious. If you're worried about the
    code accidentally going there, use an assert.

    The compiler /does/ care - as I said, it can generate better code and
    sometimes do better static error checking.

    Human readers prefer clear code to comments. Comments get out of sync -
    code does not.


    switch (x) {
    case 1 : ...
    case 2 : ...
    case 3 : ...
    default : __builtin_unreachable();
    }

    I can also use it to inform the compiler about data :

    if ((x < 0) || (x > 10)) __builtin_unreachable();
    // x must be 1 .. 10

    And that'll do what? You want the compiler to compile in a hidden value check?


    No, I want the compiler to be able to take advantage of the information
    that I have, that it could not otherwise infer from the code.

    Good use of __builtin_unreachable() can result in smaller and faster
    code, and possibly improved static error checking. It is related to the

    Sorry, don't see how. If you think a piece of code is unreachable then don't put it in in the first place!


    Ignorance is curable - wilful ignorance is much more stubborn. But I
    will try.

    Let me give you an example, paraphrased from the C23 standards:


    #include <stddef.h>

    enum Colours { red, green, blue };

    unsigned int colour_to_hex(enum Colours c) {
    switch (c) {
    case red : return 0xff'00'00;
    case green : return 0x00'ff'00;
    case blue : return 0x00'00'ff;
    }
    unreachable();
    }


    With "unreachable()", "gcc -std=c23 -O2 -Wall" gives :

    colour_to_hex:
    mov edi, edi
    mov eax, DWORD PTR CSWTCH.1[0+rdi*4]
    ret

    Without it, it gives :

    colour_to_hex:
    cmp edi, 2
    ja .L1
    mov edi, edi
    mov eax, DWORD PTR CSWTCH.1[0+rdi*4]
    .L1:
    ret

    That is noticeably bigger and slower code. gcc also gives a warning
    "control reaches end of non-void function".

    Neither "// This should never be reached" nor "assert(false);" is a
    suitable alternative.

    Try it for yourself.

    <https://godbolt.org/z/8EG11MW4o>

  • From Scott Lurndal@21:1/5 to Muttley@DastardlyHQ.org on Thu Apr 3 14:14:20 2025
    Muttley@DastardlyHQ.org writes:
    On Wed, 02 Apr 2025 16:20:05 GMT
    scott@slp53.sl.home (Scott Lurndal) wibbled:
    Muttley@dastardlyhq.com writes:
    On Wed, 2 Apr 2025 16:33:46 +0100
    bart <bc@freeuk.com> gabbled:
    On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
    Meh.

    What's the problem with it? Here, tell me at a glance the magnitude of
    this number:

    10000000000

    And how often do you hard code values that large into a program? Almost
    never I imagine unless it's some hex value to set flags in a word.

    Every day, several times a day. 16 hex digit constants are very
    common in my work. The digit separator really helps with readability,

    Oh really? What are you doing, hardcoding password hashes?

    Modeling a very complicated 64-bit system-on-chip.

  • From David Brown@21:1/5 to Thiago Adams on Thu Apr 3 16:49:46 2025
    On 03/04/2025 16:11, Thiago Adams wrote:

    I think NULL should have been promoted to keyword, just like true and
    false.


    I believe that nullptr does a better job than NULL - and that for
    C23-specific code, nullptr should be used in preference to NULL (or 0).
    There is therefore no point in making NULL a keyword. Those that want
    to continue to use NULL, can do so in the way they have always done, and
    people taking advantage of C23's features can use nullptr without any
    includes.

  • From David Brown@21:1/5 to Michael S on Thu Apr 3 16:58:15 2025
    On 03/04/2025 15:44, Michael S wrote:
    On Thu, 3 Apr 2025 13:49:48 +0100
    bart <bc@freeuk.com> wrote:


    I understand C23 mode will be enabled by a compiler option
    (-std=c23);

    In 2025.
    The expectation is, however, that several years down the road it would
    be a default. Then people would have to specify compiler options in
    order to get an older standard. And at some point older standards will be
    dropped. Not only K&R and C90. C99 will be dropped as well. Not that I
    expect to live that long.


    It's difficult to make predictions, especially about the future, but I
    would not expect gcc and clang to drop support for C90 or later C
    standards any time in the near decades.

    What I would like to see (but probably won't) is for compiler vendors to
    agree on the syntax for compiler standard versions as command line
    flags, and/or as #pragma's in the code. And I'd like the default
    standard for gcc and clang to be "#error You forgot to specify your
    choice of standard!" rather than changing over time.

    I know people can use pre-processor conditional compilation based on __STDC_VERSION__ to complain if code is compiled with an unexpected or unsupported standard, but few people outside of library header authors
    actually do that. I'd really like :

    #pragma STDC VERSION C17

    to force the compiler to use the equivalent of "-std=c17
    -pedantic-errors" in gcc.

  • From bart@21:1/5 to Michael S on Thu Apr 3 15:59:46 2025
    On 03/04/2025 14:44, Michael S wrote:
    On Thu, 3 Apr 2025 13:49:48 +0100
    bart <bc@freeuk.com> wrote:

    On 03/04/2025 09:59, David Brown wrote:

    It is bold, perhaps, but there are certainly good reasons.

    Perhaps go bolder and drop the need to explicitly include those 30 or
    so standard headers. It's ridiculous having to micro-manage the
    availability of fundamental language features ('uint8_t' for example!)
    in every module.

    I don't find it ridiculous.

    How far would it have to go before you found it so: 60 headers instead
    of 30; 120 headers? One header for each function/macro/type?

    If you were to use my language, then everything that is part of the
    language, plus functions of the standard library, is available without
    needing to specify anything. That makes it a joy to use.

    Other languages (not C++) are similar for core features, but they do
    tend to require explicit imports for standard libraries. I don't however believe they need 30 different headers or imports to cover everything
    that those provide in C.

    Coding in C, you're debugging someone's module, say, and you need to print
    something, so you need stdio.h included at the top. Then you detect some
    error and need to do exit(1); now you need stdlib.h. Then you want to do strcpy() or memcpy(), and you need string.h!

    (At some point, you decide you don't need those debug prints, but now
    what, do you have to get rid of those includes? It's just a pointless,
    annoying dance.)




    When I suggested this in the past, people were up in arms about the
    overheads of having to compile all those headers (in 2017, they were
    3-5K lines in all for gcc on Windows/Linux).


    Overhead is a smaller concern. Name clashes are bigger concern.

    Examples? Somebody would be foolhardy to use names like 'printf' or
    'exit' for their own, unrelated functions. (Compilers will anyway warn
    about that.)

    But I suggested this was done in a 'new' compiler mode used for
    compiling fresh source code, not legacy code.



    Yet the same people think nothing of using libraries like SDL2 (50K
    lines of headers) or GTK2 (350K lines).

    This does mean that some pre-C23 code will be incompatible with
    C23.

    This was also my view in the past, to draw a line under 'old' C and
    to start using 'new' C.

    I understand C23 mode will be enabled by a compiler option
    (-std=c23);

    In 2025.
    The expectation is, however, that several years down the road it would
    be a default. Then people would have to specify compiler options in
    order to get an older standard. And at some point older standards will be
    dropped. Not only K&R and C90. C99 will be dropped as well. Not that I
    expect to live that long.

    the same method could have been used to enable all std
    headers, and for that to be the default.

    Hello World then becomes this one-liner:

    int main() {puts("Hello, World!");}

    Somehow I don't feel excited by the prospect.

    It's an example of not having to specify '#include <stdio.h>'; i/o 'just works'.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Scott Lurndal on Thu Apr 3 16:52:26 2025
    On 03/04/2025 16:26, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 03/04/2025 14:44, Michael S wrote:

    Overhead is a smaller concern. Name clashes are a bigger concern.

    Examples? Somebody would be foolhardy to use names like 'printf' or
    'exit' for their own, unrelated functions. (Compilers will anyway warn
    about that.)

    I've written my own printf and exit implementations in the
    past. Not all C code has a runtime that provides those names.

    Then you have to specify, somehow, that you don't want those
    automatically included.

    I mean, I'm sure there are people who want to buy cars with no engine,
    as they will install their own, but those will be in a tiny minority.

    It would make it super-annoying to have to ensure you'd remembered to tick
    the boxes for 'engine' and '4 wheels' for 99.999% of people with normal
    needs.

    Since I wrote my post 50 minutes ago, I had to put together a test
    program. I started off with 'stdio.h'. Then it needed to use malloc
    (compiler reported an error) and I needed stdlib.h.

    I needed to zero that memory (need string.h for 'memset' after another
    compiler error).

    Then I wanted to time it; now I needed time.h (another compiler error,
    but here I had to guess it was time.h and not sys/time.h which also exists).

    It is quite exasperating. I can't even just use a header 'stdall.h'
    which contains all the rest, since there was a likelihood I'd have to
    post the test programs for others to try out, and you can't use private headers. Maybe paste a list of all 30 includes? That wouldn't be
    appreciated either!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to bart on Thu Apr 3 15:26:58 2025
    bart <bc@freeuk.com> writes:
    On 03/04/2025 14:44, Michael S wrote:

    Overhead is a smaller concern. Name clashes are a bigger concern.

    Examples? Somebody would be foolhardy to use names like 'printf' or
    'exit' for their own, unrelated functions. (Compilers will anyway warn
    about that.)

    I've written my own printf and exit implementations in the
    past. Not all C code has a runtime that provides those names.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to BGB on Thu Apr 3 09:23:08 2025
    BGB <cr88192@gmail.com> writes:

    On 4/2/2025 11:06 PM, Tim Rentsch wrote:

    Kaz Kylheku <643-408-1753@kylheku.com> writes:

    [some symbols are defined in more than one header]

    (In my opinion, things would be better if headers were not allowed
    to behave as if they include other headers, or provide identifiers
    also given in other headers. Not in ISO C, and not in POSIX.
    Every identifier should be declared in exactly one home header,
    and no other header should provide that definition. [...])

    Not always practical. A good example is the type size_t. If a
    function takes an argument of type size_t, then the symbol size_t
    should be defined, no matter which header the function is being
    declared in. Similarly for NULL for any function that has defined
    behavior on some cases of arguments that include NULL. No doubt
    there are other compelling examples.

    Yes, basically true.
    Headers including headers that have needed functionality makes sense.



    At the other extreme, say we have a header, lets just call it
    "windows.h", which then proceeds to include nearly everything in the
    OS "core". No need to include different headers for the different OS subsystems, this header has got you covered.

    But, then one proceeds to "#include" all of the other C files into a
    single big translation unit, because it is faster to do everything all
    at once than to deal with "windows.h" for each individually (because
    even a moderate sized program is still smaller than all the stuff this
    header pulls in).


    But, then one has to care about relative order of headers, say:
    If you want all this extra stuff, "windows.h" needs to be included
    first, as the other headers will define WIN32_LEAN_AND_MEAN (or
    something to this effect) which then causes it to omit all of the
    stuff that is less likely to be needed.

    So, say:
    #include <GL/gl.h>
    #include <windows.h>

    Will give different results from:
    #include <windows.h>
    #include <GL/gl.h>

    ...

    Yes, undisciplined use of #include leads to problems.

    The solution is to impose some rules on how header files are
    written, so as to avoid the kinds of problems you describe.
    That isn't hard, and people have been writing headers in
    such a way since well before the first C standard was
    written.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Scott Lurndal on Thu Apr 3 19:32:31 2025
    On 03.04.2025 15:42, Scott Lurndal wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 03.04.2025 01:32, Scott Lurndal wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    [...]

    Obviously a question of opinion depending on where one comes from.

    Verilog uses _ as a digit separator.

    And Kornshell's 'printf' uses ',' for output formatting as in

    $ printf "%,d\n" 1234567
    1,234,567

    Maybe it should be configurable?

    It is already configurable in ksh

    Ah, right. (I hadn't tried that.)

    $ LANG=en_US.utf8 printf "$%'10.2f\n" $(( ( 7540.0 * 118.70 ) + ( 2295.0 * 412.88 ) ))

    $1,842,557.60

    $ LC_ALL=de_CH.UTF-8@isodate printf "%,d\n" 1234567
    1'234'567

    Works with %' and with %, as it seems.

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Keith Thompson on Thu Apr 3 20:51:48 2025
    On 03/04/2025 20:31, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    I understand C23 mode will be enabled by a compiler option (-std=c23);
    the same method could have been used to enable all std headers, and
    for that to be the default.

    The standard says exactly nothing about compiler options. "-std=c23"
    is a convention used by *some* compilers (gcc and other compilers
    designed to be compatible with it).

    Hello World then becomes this one-liner:

    int main() {puts("Hello, World!");}

    A compiler could provide such an option as a non-conforming extension
    with no change in the standard. I'm not aware that any compiler
    has done so, or that there's been any demand for it. One reason
    for the lack of demand might be that any code that depends on it
    is not portable. (Older versions of MS Visual Studio create a
    "stdafx.h" header, but newer versions appear to have dropped that.)


    gcc provides such an option :

    gcc -include stdio.h hello_world.c

    If someone really wanted to, they could easily make a shell script, bash
    alias, Windows bat file, or whatever, as a wrapper for gcc with a whole
    bunch of "-include" options for all the standard headers.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Keith Thompson on Thu Apr 3 20:54:31 2025
    On 03/04/2025 20:19, Keith Thompson wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    It is bold, perhaps, but there are certainly good reasons. As far as
    I can see we have some keywords that have dropped their
    underscore-capital form:

    alignas
    alignof
    bool
    static_assert
    thread_local

    The underscore-capital forms still exist as alternate spellings.
    Dropping _Bool et al would have broken existing code.


    Yes. But they are listed as alternate spellings, rather than keywords.
    I don't think it makes any difference, other than that if they were
    called "keywords" then they would need to be mentioned more in the
    standards.

    And we have some new ones :

    constexpr
    false
    nullptr
    true
    typeof
    typeof_unequal

    That last one is "typeof_unqual".


    That makes much more sense :-)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Keith Thompson on Thu Apr 3 18:54:40 2025
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
    scott@slp53.sl.home (Scott Lurndal) writes:
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    scott@slp53.sl.home (Scott Lurndal) writes:
    bart <bc@freeuk.com> writes:
    [...]
    So it is not true that you need include stddef.h, nor obvious
    that that is where NULL is defined, if you are used to having it
    available indirectly.

    Indeed, and it is well documented.

    For example, in the POSIX description for the string functions
    you'll find the following statement:

    [CX] Inclusion of the <string.h> header may also make visible
    all symbols from <stddef.h>. [Option End]

    This is true for a number of POSIX headers, including those you
    enumerate above.

    [CX] marks a POSIX extension to ISO C.

    How strange. I don't know why anyone would ever want either to
    rely on or to take advantage of this property.

    Some existing unix implementations at the time the standard was adopted
    had that behavior and the committee was not willing to break existing
    implementations.

    You mean the POSIX standard, yes? The C standard does not permit
    <string.h> to include <stddef.h>.

    Yes, and POSIX explicitly marks it as an extension to the C standard.

    So, if unix/linux system header files are posix compliant, they're
    technically not completely compliant with the C standard, although
    they will compile code that complies with the C standard.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Opus@21:1/5 to Michael S on Thu Apr 3 21:13:20 2025
    On 03/04/2025 15:27, Michael S wrote:
    On Thu, 3 Apr 2025 15:05:59 +0200
    Opus <ifonly@youknew.org> wrote:

    For instance, if I'm not mistaken,
    designated initializers, which are very handy and have been available
    in C since C99 (25 years ago) have appeared only in C++20, about 20
    years later.


    AFAIK, even C++23 provides only a subset of C99 designated initializers.
    The biggest difference is that in C++ initializers have to be
    specified in the same order as declarations for respective fields.

    Ah, you're right: apparently, they still need to be placed in the order
    of declaration, which is a severe limitation in my book.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to BGB on Thu Apr 3 19:37:31 2025
    On 2025-04-03, BGB <cr88192@gmail.com> wrote:
    On 4/3/2025 1:12 AM, Keith Thompson wrote:
    Kaz Kylheku <643-408-1753@kylheku.com> writes:
    On 2025-04-03, bart <bc@freeuk.com> wrote:
    On 02/04/2025 17:20, Scott Lurndal wrote:
    Muttley@dastardlyhq.com writes:
    On Wed, 2 Apr 2025 16:33:46 +0100
    bart <bc@freeuk.com> gabbled:
    On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
    Meh.

    What's the problem with it? Here, tell me at a glance the magnitude of this number:

    10000000000

    And how often do you hard code values that large into a program? Almost never, I imagine, unless it's some hex value to set flags in a word.

    Every day, several times a day. 16 hex digit constants are very
    common in my work. The digit separator really helps with readability, although I would have preferred '_' over "'".

    Oh, I thought C23 used '_', since Python uses that. I prefer single
    quote as that is not shifted on my keyboard. (My language projects just allow both!)

    I made , (comma) the digit separator in TXR Lisp. Nobody uses _ in the
    real world.

    I understand that in some countries, that is the decimal point. That is
    not relevant in programming languages that use a period for that and are
    not localized.

    Comma means I can just copy and paste a figure from a financial document
    or application, or any other document which uses that convention.

    The comma couldn't be used in C without the possibility of breaking
    existing code, since 123,456 is already a valid expression, and is
    likely to occur in a context like `foo(123,456)`.

    C23 borrowed 123'456 from C++ rather than 123_456 (which I would have
    preferred). C++ chose 123'456 because the C++ already used the
    underscore for user-defined literals. Apparently some countries, such
    as Switzerland, use the apostrophe as a digit separator.


    In my compiler, I did both ' and _, ...
    Personally though, I prefer using _ as a digit separator in these scenarios.

    But, yeah, can't use comma without creating syntactic ambiguity.

    False; you can't use comma because of an /existing/ ambiguity.

    (In fact you could still use a comma; the "only" problem is you would
    break some programs. If this is your own language that nobody else
    uses, that might not be a significant objection.)

    When you've designed the language such that f(1,234.00) is a function
    call with two arguments, equivalent to f(1, 234.00), that's where
    you created the ambiguity.

    Your rules for tokenizing and parsing may be unambiguous, but it's
    visually ambiguous to a human.

    You should have seen it coming when allowing comma punctuators to
    separate arguments, without any surrounding whitespace being required.

    Now you can't have nice things, like the comma digit separators that
    everyone uses in the English speaking world that uses . for the
    decimal separators.

    By the way ...

    One programming language that has comma separators is Fortran,
    by the way. Fortran persisted in providing this feature in spite of
    shooting itself in the foot with ambiguities.

    When Fortran was being designed, people were naive in writing
    compilers. They thought that it would simplify things if they
    removed all spaces from the code before lexically scanning it and
    parsing.

    Thus "DO I = 1, 10" becomes "DOI=1,10" and "FO I = 1, 10"
    becomes "FOI=1,10"

    After that you have to figure out that "DOI=1,10" is the
    header of a DO loop which steps I from 1 to 10,
    whereas "FOI=1,10" assigns 110 to variable FOI.

    Removing spaces before scanning anything is a bad idea.

    Not requiring spaces between certain tokens is also a bad idea.

    In the token sequence 3) we wouldn't want to require a space
    between 3 and ).

    But it's a good idea to require 1,3 to be 1, 3 (if two numeric
    tokens separated by a comma are intended and not the
    number 1,3).

    Commas are "fluff punctuators". They could be removed without
    making a difference to the abstract syntax.

    Fun fact: early Lisp (when it was called LISP) had commas
    in lists. They were optional. (1, 2, 3) or (1 2 3). Your
    choice.

    Comma separation causes problems when arguments can be empty!

    In C preprocessing MAC() is actually a macro with one argument,
    which is empty. MAC(,) is a macro with two empty arguments
    and so on. You cannot write a macro call with zero arguments.

    Now, if macros didn't use commas, there wouldn't be a problem
    at all: MAC() -> zero args; MAC(abc) -> one arg;
    MAC(abc 2) -> two args.

    Wow, consistency. And no dangling comma nonsense to deal with in
    complex, variadic macros!

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to BGB on Thu Apr 3 20:35:58 2025
    On Thu, 3 Apr 2025 01:20:48 -0500, BGB wrote:

    So, extended features:
    _UBitInt(5) cr, cg, cb;
    _UBitInt(16) clr;
    clr = (_UBitInt(16)) { 0b0u1, cr, cg, cb };
    Composes an RGB555 value.

    cg = clr[9:5]; //extract bits
    clr[9:5] = cg; //assign bits
    clr[15:10] = clr[9:5]; //copy bits from one place to another.

    And:
    (_UBitInt(16)) { 0b0u1, cr, cg, cb } = clr;

    Decomposing it into components, any fixed-width constants being treated
    as placeholders.

    Next step: what if they’re variable-width bitfields, not fixed-width?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Kaz Kylheku on Thu Apr 3 21:48:40 2025
    On 03/04/2025 20:37, Kaz Kylheku wrote:
    On 2025-04-03, BGB <cr88192@gmail.com> wrote:

    In my compiler, I did both ' and _, ...
    Personally though, I prefer using _ as a digit separator in these scenarios.
    But, yeah, can't use comma without creating syntactic ambiguity.

    False; you can't use comma because of an /existing/ ambiguity.

    Commas are overwhelmingly used to separate list elements in programming languages.

    They only become possible for numeric separators if you abandon any sort
    of normal syntax and use one based, for example, on Lisp.

    Even then, someone looking at your language and seeing:

    55,688

    isn't going to see the number 55688; they will see two numbers, 55
    and 688, because that is what they expect from a typical
    programming language.

    Even when they normally use "," for decimal point, they're not going to
    see 55.688 either, for the same reason.

    In my view, comma is 100 times more valuable as a list separator, than
    in being able to write 1,000,000 (which I can do as 1'000'000 or
    1_000_000 or even 1 million).

    I only use commas for output to be viewed as something that is pure
    numeric data, and not source code (so viewed by people who may not be programmers). Even then, the separator can be anything:

    1,000,000 decimal
    1011'0001 binary
    7FFF'FFFF hex

    I wouldn't use commas for non-decimal; it looks weird.



    Comma separation causes problems when arguments can be empty!

    It seems to be the other way around: how many missing arguments are
    there here between a and b:

    F(a b)

    When written as F(a,,,b) then it becomes clearer. (This is if you allow
    omitted in-between arguments, which I no longer do.)


    In C preprocessing MAC() is actually a macro with one argument,
    which is empty.

    I assume MAC() is a macro invocation? Then MAC could equally be a macro
    with zero arguments, and none is provided.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Tim Rentsch on Thu Apr 3 23:19:23 2025
    On 03.04.2025 06:06, Tim Rentsch wrote:
    Kaz Kylheku <643-408-1753@kylheku.com> writes:

    [some symbols are defined in more than one header]

    (In my opinion, things would be better if headers were not allowed
    to behave as if they include other headers, or provide identifiers
    also given in other headers. Not in ISO C, and not in POSIX.
    Every identifier should be declared in exactly one home header,
    and no other header should provide that definition. [...])

    Not always practical. A good example is the type size_t. If a
    function takes an argument of type size_t, then the symbol size_t
    should be defined, no matter which header the function is being
    declared in. Similarly for NULL for any function that has defined
    behavior on some cases of arguments that include NULL. No doubt
    there are other compelling examples.

    I think that all that's said above (by Kaz and you) is basically
    correct.

    Obviously [to me] it is that 'size_t' and 'NULL' are so fundamental
    entities (a standard type and a standard pointer constant literal)
    that such items should have been inherent part of the "C" language,
    and not #include'd. (But we're speaking about "C" so it's probably
    pointless to discuss that from a more fundamental perspective...)

    The practical "C" approach was always to just include what you need
    (and don't make ones mind about "C" language design [or mis-design]).

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to Tim Rentsch on Thu Apr 3 22:00:24 2025
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
    Kaz Kylheku <643-408-1753@kylheku.com> writes:

    [some symbols are defined in more than one header]

    (In my opinion, things would be better if headers were not allowed
    to behave as if they include other headers, or provide identifiers
    also given in other headers. Not in ISO C, and not in POSIX.
    Every identifier should be declared in exactly one home header,
    and no other header should provide that definition. [...])

    Not always practical. A good example is the type size_t. If a
    function takes an argument of type size_t, then the symbol size_t
    should be defined, no matter which header the function is being
    declared in.

    Why? One can use a type without a name for such type.

    Similarly for NULL for any function that has defined
    behavior on some cases of arguments that include NULL.

    Why? There are many ways to produce null pointers. And fact that
    a function had defined behavior for null pointers does not mean
    that users will need null pointers.

    No doubt
    there are other compelling examples.

    Do not look compelling at all.

    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to David Brown on Thu Apr 3 23:39:00 2025
    On 03.04.2025 16:58, David Brown wrote:
    [...]

    I know people can use pre-processor conditional compilation based on
    __STDC_VERSION__ to complain if code is compiled with an unexpected
    or unsupported standard, but few people outside of library header
    authors actually do that. I'd really like :

    #pragma STDC VERSION C17

    to force the compiler to use the equivalent of "-std=c17
    -pedantic-errors" in gcc.

    (I understand the wish to have that #pragma supported.)

    Can there be linking problems when different "C" modules have
    been compiled with different '-std=cXX' or '#pragma STDC ...'
    settings? - The question just occurred to me.

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Keith Thompson on Thu Apr 3 23:32:43 2025
    On 2025-04-03, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Kaz Kylheku <643-408-1753@kylheku.com> writes:
    [...]
    One programming language that has comma separators is Fortran,
    by the way. Fortran persisted in providing this feature in spite of
    shooting itself in the foot with ambiguities.

    When Fortran was being designed, people were naive in writing
    compilers. They thought that it would simplify things if they
    removed all spaces from the code before lexically scanning it and
    parsing.

    Thus "DO I = 1, 10" becomes "DOI=1,10" and "FO I = 1, 10"
    becomes "FOI=1,10"

    After that you have to figure out that "DOI=1,10" is the
    header of a DO loop which steps I from 1 to 10,
    whereas "FOI=1,10" assigns 110 to variable FOI.

    I don't think that's correct. My quick experiments with gfortran
    indicate that commas are *not* treated as digit separators.

    The classic Fortran (or FORTRAN?) error was that:
    DO 10 I = 1,100
    (a loop with bounds 1 to 100) was written as:
    DO 10 I = 1.100
    (which assigns the value 1.100 to the variable DO10I).

    An urban legend says that this error caused the loss of a spacecraft.
    In fact the error was caught and corrected before launch.

    Ah, OK; I must be misremembering that one. (Not the urban legend
    part; I'm not familiar with that embellishment).

    Wow, consistency. And no dangling comma nonsense to deal with in
    complex, variadic macros!

    Would MAC("foo" "bar") have one argument or two?

    I understand you're getting at multiple literals being one
    object (or not), but the bigger problem is, how many
    arguments does this have: f(x ++ - 3).

    Infix syntax with prefix and postfix operators drives
    the need for comma separation, other solutions being
    to require any nontrivial term to be parenthesized:

    f(a (b++ - 3) c)

    Funny we should get into this because I'm working on
    syntax not dissimilar from this.

    (About literals, I don't think it's a great feature to have
    adjacent literals specify one object, without requiring
    any operator to indicate that.)

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to David Brown on Fri Apr 4 04:50:37 2025
    On 03.04.2025 11:03, David Brown wrote:
    On 02/04/2025 23:43, Janis Papanagnou wrote:
    On 02.04.2025 16:59, David Brown wrote:
    [...]

    From the next version beyond C23, so far there is :

    1. Declarations in "if" and "switch" statements, like those in "for"
    loops, helps keep local variable scopes small and neat.

    Oh, I thought that would already be supported in some existing "C"
    version for the 'if'; I probably confused that with C++.


    C++17 has it.

    I guess the C committee waited until C++17 had been common enough that
    they could see if it was useful in real code, and if it led to any
    unexpected problems in code or compilers, before copying it for C.

    Really, that recent!? - I was positive that I used it long before 2017
    during the days when I did quite regularly C++ programming. - Could it
    be that some GNU compiler (C++ or "C") supported that before it became
    C++ standard?

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Muttley@DastardlyHQ.org on Fri Apr 4 04:43:42 2025
    On 03.04.2025 13:07, Muttley@DastardlyHQ.org wrote:
    On Thu, 3 Apr 2025 11:41:31 +0200
    David Brown <david.brown@hesbynett.no> wibbled:

    [ "unreachable()" is now standard. ]

    I can't tell you what Scott uses it for, but I have used gcc's
    __builtin_unreachable() a fair number of times in my coding. I use it
    to inform both the compiler and human readers that a path is unreachable:

    What for? The compiler doesn't care and a human reader would probably
    prefer a meaningful comment if it's not obvious. If you're worried about the code accidentally going there, use an assert.

    switch (x) {
    case 1 : ...
    case 2 : ...
    case 3 : ...
    default : __builtin_unreachable();
    }

    I can also use it to inform the compiler about data :

    if ((x < 0) || (x > 10)) __builtin_unreachable();
    // x must be 1 .. 10

    And that'll do what? You want the compiler to compile in a hidden value check?

    I also don't see a point here; myself I'd write some sort of assertion
    in such cases, depending on the application case either just temporary
    for tests or a static one with sensible handling of the case.


    Good use of __builtin_unreachable() can result in smaller and faster
    code, and possibly improved static error checking. It is related to the

    Sorry, don't see how. If you think a piece of code is unreachable then don't put it in in the first place!

    Let me give that another spin...

    In cases like above 'switch' code I have the habit to (often) provide
    a default branch that contains a fprintf(stderr, "Internal error: ..."
    or a similar logging command and some form of exit or trap/catch code.
    I want some safety for the cases where in the _evolving_ program bugs
    sneak in by an oversight.[*]

    Personally I don't care about a compiler that is clever enough to warn
    me, say, about a missing default branch but not clever enough to notice
    that it's intentional and cannot be reached (say, in the context of enums).
    I can understand that it might be of use for others, though. (There's
    certainly some demand if it's now standard.)

    I'm uninformed about __builtin_unreachable(), I don't know whether it
    can be overloaded, user-defined, or anything. If that's not the case
    I'd anyway write my own "Internal error: unexpected ..." function to
    use that in all such cases for error detection and tracking of bugs.

    Janis

    [*] This habit is actually a very old one and most probably resulting
    from an early observation with one of my first Simula programs coded
    on a mainframe that told me: "Internal error! Please contact the NCC
    in Oslo." - BTW; a nice suggestion, but useless since back these days
    there was no Email available to me and the NCC was in another country.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Keith Thompson on Fri Apr 4 02:55:57 2025
    On Wed, 02 Apr 2025 23:12:40 -0700, Keith Thompson wrote:

    Apparently some countries, such as Switzerland, use the apostrophe as a
    digit separator.

    In school I was taught to use a space as the international-standard
    thousands separator, and a centred dot for the decimal point. This was to
    get around regional differences over which is comma and which is a dot
    etc.

    Using “_” as an alternative to the space would have been consistent with its usage elsewhere, for word breaking in names etc.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to bart on Fri Apr 4 02:57:10 2025
    On Thu, 3 Apr 2025 21:48:40 +0100, bart wrote:

    Commas are overwhelmingly used to separate list elements in programming languages.

    Not just separate, but terminate. All the reasonable languages allow
    trailing commas.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to bart on Fri Apr 4 03:01:19 2025
    On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:

    Here, tell me at a glance the magnitude of
    this number:

    10000000000

    #define THOUSAND 1000
    #define MILLION (THOUSAND * THOUSAND)
    #define BILLION (THOUSAND * MILLION)

    uint64 num = 10 * BILLION;

    Much easier to figure out, don’t you think?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to bart on Fri Apr 4 03:05:23 2025
    bart <bc@freeuk.com> wrote:
    On 03/04/2025 20:37, Kaz Kylheku wrote:
    On 2025-04-03, BGB <cr88192@gmail.com> wrote:

    In my compiler, I did both ' and _, ...
    Personally though, I prefer using _ as a digit separator in these scenarios.

    But, yeah, can't use comma without creating syntactic ambiguity.

    False; you can't use comma because of an /existing/ ambiguity.

    Commas are overwhelmingly used to separate list elements in programming languages.

    They only become possible for numeric separators if you abandon any sort
    of normal syntax and use one based, for example, on Lisp.

    There are quite a lot of programming languages that have whitespace-
    separated lists. Most of them have "Algol like" syntax.

    Even then, someone looking at your language and seeing:

    55,688

    isn't going to see the number 55688, they will see two numbers, 55
    and 688,

    You may get list of 3 things:

    : [55,688] =>
    ** [55 , 688]


    because that is what they expect from a typical
    programming language.

    People should know the language they use. The whole point of using
    a different language is because of some special features, so
    one should know them.

    Even when they normally use "," for decimal point, they're not going to
    see 55.688 either, for the same reason.

    In my view, comma is 100 times more valuable as a list separator, than
    in being able to write 1,000,000 (which I can do as 1'000'000 or
    1_000_000 or even 1 million).

    Whitespace actually may be quite good list separator. But using
    commas in numbers is too confusing, there are too many conventions
    used when printing numbers. My favorite is underscore for grouping,
    1_000.005 has only one sensible meaning, while 1.000,005 and 1,000.005
    can be easily confused.

    --
    Waldek Hebisch

  • From Lawrence D'Oliveiro@21:1/5 to bart on Fri Apr 4 02:54:02 2025
    On Thu, 3 Apr 2025 01:10:33 +0100, bart wrote:

    Oh, I thought C23 used '_', since Python uses that.

    Also Ada.

  • From Lawrence D'Oliveiro@21:1/5 to Muttley on Fri Apr 4 02:53:23 2025
    On Wed, 2 Apr 2025 15:12:20 -0000 (UTC), Muttley wrote:

    On Wed, 2 Apr 2025 11:12:07 -0300 Thiago Adams <thiago.adams@gmail.com> wibbled:

    - digit separator (better, safer)

    Meh.

    Dealt much with 64-bit integers?

    18_446_744_073_709_551_615

    vs

    18446744073709551615

  • From Muttley@DastardlyHQ.org@21:1/5 to All on Fri Apr 4 09:42:11 2025
    On Thu, 03 Apr 2025 14:14:20 GMT
    scott@slp53.sl.home (Scott Lurndal) wibbled:
    Muttley@DastardlyHQ.org writes:
    On Wed, 02 Apr 2025 16:20:05 GMT
    scott@slp53.sl.home (Scott Lurndal) wibbled:
    Muttley@dastardlyhq.com writes:
    On Wed, 2 Apr 2025 16:33:46 +0100
    bart <bc@freeuk.com> gabbled:
    On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
    Meh.

    What's the problem with it? Here, tell me at a glance the magnitude of this number:

    10000000000

    And how often do you hard code values that large into a program? Almost never, I imagine, unless it's some hex value to set flags in a word.

    Every day, several times a day. 16 hex digit constants are very
    common in my work. The digit separator really helps with readability,

    Oh really? What are you doing, hardcoding password hashes?

    Modeling a very complicated 64-bit system-on-chip.

    If you're hardcoding all that you're doing it wrong. Should be in some kind
    of loaded config file.

  • From Muttley@DastardlyHQ.org@21:1/5 to All on Fri Apr 4 09:43:26 2025
    On Thu, 3 Apr 2025 16:01:18 -0700
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
    On 4/2/2025 1:09 PM, Chris M. Thomasson wrote:
    On 4/2/2025 8:16 AM, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 14:12:18 -0000 (UTC)
    antispam@fricas.org (Waldek Hebisch) wibbled:
    Muttley@dastardlyhq.org wrote:
    On Wed, 2 Apr 2025 10:57:29 +0100
    bart <bc@freeuk.com> wibbled:
    On 02/04/2025 06:59, Alexis wrote:

    Thought people here might be interested in this image on Jens
    Gustedt's blog, which translates section 6.2.5, "Types", of the C23
    standard into a graph of inclusions:

        https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/


    So much for C being a 'simple' language.

    C should be left alone. It does what it needs to do for a systems
    language. Almost no one uses it for applications any more, and
    sophisticated processing using complex types, for example, is far
    better done in C++.

    C99 has VMT (variably modified types). Thanks to VMT and complex
    types, C99 can naturally do numeric computing that previously was
    done using Fortran 77. Official C++ has no VMT. C++ mechanisms look
    nicer,

    Officially no, but I've never come across a C++ compiler that didn't
    support them, given they're all C compilers too.

    All C++ compilers are also C compilers?

    To answer my own sarcastic question: No way. :^)

    So name one that isn't. Fairly simple way to prove your point.

  • From Muttley@DastardlyHQ.org@21:1/5 to All on Fri Apr 4 09:40:58 2025
    On Thu, 3 Apr 2025 15:58:05 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    Human readers prefer clear code to comments. Comments get out of sync -
    code does not.

    That's not a reason for not using comments. It's very easy to
    understand your own code that you've just written - not so much for
    someone else, or for you years down the line.

    Ignorance is curable - wilful ignorance is much more stubborn. But I
    will try.

    Guffaw! You should do standup.

    Let me give you an example, paraphrased from the C23 standards:


    #include <stddef.h>

    enum Colours { red, green, blue };

    unsigned int colour_to_hex(enum Colours c) {
    switch (c) {
    case red : return 0xff'00'00;
    case green : return 0x00'ff'00;
    case blue : return 0x00'00'ff;
    }
    unreachable();
    }


    With "unreachable()", "gcc -std=c23 -O2 -Wall" gives :

    colour_to_hex:
    mov edi, edi
    mov eax, DWORD PTR CSWTCH.1[0+rdi*4]
    ret

    Without it, it gives :

    colour_to_hex:
    cmp edi, 2
    ja .L1
    mov edi, edi
    mov eax, DWORD PTR CSWTCH.1[0+rdi*4]
    .L1:
    ret

    Except it's not unreachable, is it? There's nothing in C to prevent
    you calling that function with a value other than those defined in
    the enum, so what happens if there's a bug and it hits unreachable?
    Oh that's right, it's "undefined", i.e. a crash or hidden bug with
    bugger all info.

    Neither "// This should never be reached" nor "assert(false);" is a
    suitable alternative.

    In your opinion. I would never use that example above, it's just asking for trouble down the line.

    Also FWIW, putting separators in the hex values makes it less readable to me, not more.

  • From Tim Rentsch@21:1/5 to Scott Lurndal on Fri Apr 4 03:14:47 2025
    scott@slp53.sl.home (Scott Lurndal) writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:

    scott@slp53.sl.home (Scott Lurndal) writes:

    bart <bc@freeuk.com> writes:

    [...]

    So it is not true that you need include stddef.h, nor obvious
    that that is where NULL is defined, if you are used to having it
    available indirectly.

    Indeed, and it is well documented.

    For example, in the POSIX description for the string functions
    you'll find the following statement:

    [CX] Inclusion of the <string.h> header may also make visible
    all symbols from <stddef.h>. [Option End]

    This is true for a number of POSIX headers, including those you
    enumerate above.

    [CX] marks a POSIX extension to ISO C.

    How strange. I don't know why anyone would ever want either to
    rely on or to take advantage of this property.

    Some existing unix implementations at the time the standard was
    adopted had that behavior and the committee was not willing to
    break existing implementations.

    My comment was only about clients, not about implementors or
    the POSIX standards group.

  • From Tim Rentsch@21:1/5 to Scott Lurndal on Fri Apr 4 03:27:06 2025
    scott@slp53.sl.home (Scott Lurndal) writes:

    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    scott@slp53.sl.home (Scott Lurndal) writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:

    scott@slp53.sl.home (Scott Lurndal) writes:

    bart <bc@freeuk.com> writes:

    [...]

    So it is not true that you need include stddef.h, nor obvious
    that that is where NULL is defined, if you are used to having
    it available indirectly.

    Indeed, and it is well documented.

    For example, in the POSIX description for the string functions
    you'll find the following statement:

    [CX] Inclusion of the <string.h> header may also make
    visible all symbols from <stddef.h>. [Option End]

    This is true for a number of POSIX headers, including those you
    enumerate above.

    [CX] marks a POSIX extension to ISO C.

    How strange. I don't know why anyone would ever want either to
    rely on or to take advantage of this property.

    Some existing unix implementations at the time the standard was
    adopted had that behavior and the committee was not willing to
    break existing implementations.

    A shortsighted decision IMO, because it weakens confidence in the
    POSIX standard. Also the use of "break" there is odd, since
    those implementations were already broken.

    You mean the POSIX standard, yes? The C standard does not permit
    <string.h> to include <stddef.h>.

    Yes, and POSIX explicitly marks it as an extension to the C
    standard.

    Strictly speaking it is not an extension as the C standard uses
    the term, because extensions are allowed only if they don't
    change the behavior of any strictly conforming program, and
    making <stddef.h> symbols visible due to #include <string.h>
    doesn't satisfy that condition.

  • From Muttley@DastardlyHQ.org@21:1/5 to All on Fri Apr 4 10:28:21 2025
    On Fri, 4 Apr 2025 03:25:23 -0700
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
    On 4/4/2025 2:43 AM, Muttley@DastardlyHQ.org wrote:
    On Thu, 3 Apr 2025 16:01:18 -0700
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
    On 4/2/2025 1:09 PM, Chris M. Thomasson wrote:
    On 4/2/2025 8:16 AM, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 14:12:18 -0000 (UTC)
    antispam@fricas.org (Waldek Hebisch) wibbled:
    Muttley@dastardlyhq.org wrote:
    On Wed, 2 Apr 2025 10:57:29 +0100
    bart <bc@freeuk.com> wibbled:
    On 02/04/2025 06:59, Alexis wrote:

    Thought people here might be interested in this image on Jens
    Gustedt's blog, which translates section 6.2.5, "Types", of the C23
    standard into a graph of inclusions:

        https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/


    So much for C being a 'simple' language.

    C should be left alone. It does what it needs to do for a systems
    language. Almost no one uses it for applications any more, and
    sophisticated processing using complex types, for example, is far
    better done in C++.

    C99 has VMT (variably modified types). Thanks to VMT and complex
    types, C99 can naturally do numeric computing that previously was
    done using Fortran 77. Official C++ has no VMT. C++ mechanisms look
    nicer,

    Officially no, but I've never come across a C++ compiler that didn't
    support them, given they're all C compilers too.

    All C++ compilers are also C compilers?

    To answer my own sarcastic question: No way. :^)

    So name one that isn't. Fairly simple way to prove your point.


    Try to compile this in a C++ compiler:
    _____________
    #include <stdlib.h>
    #include <stdio.h>

    int main() {
    void *p = malloc(sizeof(int));
    int *ip = p;
    free(p);
    printf("done\n");
    return 0;
    }
    _____________


    $ cc -v
    Apple clang version 16.0.0 (clang-1600.0.26.6)
    Target: arm64-apple-darwin24.3.0
    Thread model: posix
    InstalledDir: /Library/Developer/CommandLineTools/usr/bin
    $ cc t.c
    $ a.out
    done

    What am I missing?

    You tell me mate.

  • From David Brown@21:1/5 to Janis Papanagnou on Fri Apr 4 12:52:01 2025
    On 03/04/2025 23:39, Janis Papanagnou wrote:
    On 03.04.2025 16:58, David Brown wrote:
    [...]

    I know people can use pre-processor conditional compilation based on
    __STDC_VERSION__ to complain if code is compiled with an unexpected or
    unsupported standard, but few people outside of library header authors
    actually do that. I'd really like :

    #pragma STDC VERSION C17

    to force the compiler to use the equivalent of "-std=c17
    -pedantic-errors" in gcc.

    (I understand the wish to have that #pragma supported.)

    Can there be linking problems when different "C" modules have
    been compiled with different '-std=cXX' or '#pragma STDC ...'
    settings? - The question just occurred to me.


    I don't think there will be any non-obvious issues - at least, not
    unless you have code that relies on undefined behaviour. Of course you
    will also have trouble if you have a TU compiled for C90 that defines
    its own "quick_exit" function and you want to link that with a C11 TU
    using the standard library "quick_exit" function. The great majority of
    the changes in the standard apply within the translation unit, or are
    new library functions, or extensions to existing ones (like additional
    features in "printf").

    It is not at all uncommon to have static libraries compiled with one C
    standard flag linked to code compiled with a different flag.

    (Of course this is all implementation-dependent.)

  • From Michael S@21:1/5 to Kaz Kylheku on Fri Apr 4 14:07:22 2025
    On Thu, 3 Apr 2025 19:37:31 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    On 2025-04-03, BGB <cr88192@gmail.com> wrote:
    On 4/3/2025 1:12 AM, Keith Thompson wrote:
    Kaz Kylheku <643-408-1753@kylheku.com> writes:
    On 2025-04-03, bart <bc@freeuk.com> wrote:
    On 02/04/2025 17:20, Scott Lurndal wrote:
    Muttley@dastardlyhq.com writes:
    On Wed, 2 Apr 2025 16:33:46 +0100

    But, yeah, can't use comma without creating syntactic ambiguity.

    False; you can't use comma because of an /existing/ ambiguity.

    (In fact you could still use a comma; the "only" problem is you would
    break some programs. If this is your own language that nobody else
    uses, that might not be a significant objection.)

    When you've designed the language such that f(1,234.00) is a function
    call with two arguments, equivalent to f(1, 234.00), that's where
    you created the ambiguity.

    Your rules for tokenizing and parsing may be unambiguous, but it's
    visually ambiguous to a human.

    You should have seen it coming when allowing comma punctuators to
    separate arguments, without any surrounding whitespace being required.

    Now you can't have nice things, like the comma digit separators that
    everyone uses in the English speaking world that uses . for the
    decimal separators.


    That's not precise. According to Wikipedia, comma is not used as a
    group separator in South Africa.
    Anyway, both international standardization bodies and standardization
    bodies of majority of English speaking countries, including USA, oppose
    such use of comma. They recommend thin space where available and either
    regular space or nothing at all when thin space is not available.


    By the way ...

    One programming language that has comma separators is Fortran,
    by the way. Fortran persisted in providing this feature in spite of
    shooting itself in the foot with ambiguities.

    When Fortran was being designed, people were naive in writing
    compilers. They thought that it would simplify things if they
    removed all spaces from the code before lexically scanning it and
    parsing.

    Thus "DO I = 1, 10" becomes "DOI=1,10" and "DO I = 1. 10"
    becomes "DOI=1.10".

    After that you have to figure out that "DOI=1,10" is the
    header of a DO loop which steps I from 1 to 10,
    whereas "DOI=1.10" assigns 1.10 to the variable DOI.

    Removing spaces before scanning anything is a bad idea.

    Not requiring spaces between certain tokens is also a bad idea.

    In the token sequence 3) we wouldn't want to require a space
    between 3 and ).

    But it's a good idea to require 1,3 to be 1, 3 (if two numeric
    tokens separated by a comma are intended and not the
    number 1,3).

    Commas are "fluff punctuators". They could be removed without
    making a difference to the abstract syntax.

    Fun fact: early Lisp (when it was called LISP) had commas
    in lists. They were optional. (1, 2, 3) or (1 2 3). Your
    choice.

    Comma separation causes problems when arguments can be empty!

    In C preprocessing MAC() is actually a macro with one argument,
    which is empty. MAC(,) is a macro with two empty arguments
    and so on. You cannot write a macro call with zero arguments.

    Now, if macros didn't use commas, there wouldn't be a problem
    at all: MAC() -> zero args; MAC(abc) -> one arg;
    MAC(abc 2) -> two args.

    Wow, consistency. And no dangling comma nonsense to deal with in
    complex, variadic macros!


    What exactly do you advocate? For comma to have no significance at
    all in any context except within string literals? With space as a
    replacement in all contexts? Or in all contexts except the comma
    operator?

  • From David Brown@21:1/5 to bart on Fri Apr 4 13:31:15 2025
    On 03/04/2025 17:52, bart wrote:
    On 03/04/2025 16:26, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 03/04/2025 14:44, Michael S wrote:

    Overhead is a smaller concern. Name clashes are bigger concern.

    Examples? Somebody would be foolhardy to use names like 'printf' or
    'exit' for their own, unrelated functions. (Compilers will anyway warn
    about that.)

    I've written my own printf and exit implementations in the
    past. Not all C code has a runtime that provides those names.

    Then you have to specify, somehow, that you don't want those
    automatically included.


    It is not unusual in embedded systems to provide your own versions of
    standard library functions. For example, I have regularly implemented
    my own "exit" as something like :

    _Noreturn void exit(int status) {
        (void) status;     /* parameter deliberately unused */
        while (true) ;     /* "true" needs <stdbool.h> before C23 */
    }

    I do that because in my embedded systems, the program never ends - but
    the C startup code typically calls exit() after main() returns. This
    pulls in exit() from the library, which pulls in everything for
    handling atexit() functions, which pulls in malloc(), and so on - for
    small microcontrollers, you can sometimes end up with a significant
    fraction of your flash used by library code you never want to use.

    Similarly, sometimes you might want to replace standard library IO
    functions with something appropriate for the small devices.

    This is all undefined behaviour in the C standards (mostly UB because it
    is not discussed or defined), but the way linkers work and the way most
    C standard libraries are arranged means it works fine.

    However, it's a bit more questionable if you are making your own
    functions with names that coincide with standard library function names
    but have different signatures.

  • From David Brown@21:1/5 to Muttley@DastardlyHQ.org on Fri Apr 4 13:39:06 2025
    On 04/04/2025 11:40, Muttley@DastardlyHQ.org wrote:
    On Thu, 3 Apr 2025 15:58:05 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    Human readers prefer clear code to comments. Comments get out of sync -
    code does not.

    That's not a reason for not using comments.

    It is a reason for never using a comment when you can express the same
    thing in code.

    It's very easy to understand your own code that you've just written -
    not so much for someone else, or for you years down the line.

    If that's your problem, write better code - not more comments.

    Comments should say /why/ you are doing something, not /what/ you are doing.


    Ignorance is curable - wilful ignorance is much more stubborn. But I
    will try.

    Guffaw! You should do standup.

    Let me give you an example, paraphrased from the C23 standards:


    #include <stddef.h>

    enum Colours { red, green, blue };

    unsigned int colour_to_hex(enum Colours c) {
    switch (c) {
    case red : return 0xff'00'00;
    case green : return 0x00'ff'00;
    case blue : return 0x00'00'ff;
    }
    unreachable();
    }


    With "unreachable()", "gcc -std=c23 -O2 -Wall" gives :

    colour_to_hex:
    mov edi, edi
    mov eax, DWORD PTR CSWTCH.1[0+rdi*4]
    ret

    Without it, it gives :

    colour_to_hex:
    cmp edi, 2
    ja .L1
    mov edi, edi
    mov eax, DWORD PTR CSWTCH.1[0+rdi*4]
    .L1:
    ret

    Except it's not unreachable, is it?

    It /is/ unreachable. That's why I wrote it.

    There's nothing in C to prevent you calling that function with a
    value other than those defined in the enum, so what happens if
    there's a bug and it hits unreachable?

    There's nothing in the English language preventing me from calling you a
    "very stable genius" - but I can assure you that it is not going to happen.

    Oh that's right, it's
    "undefined", i.e. a crash or hidden bug with bugger all info.

    Welcome to the world of software development. If I specify a function
    as working for input values "red", "green", and "blue", and you choose
    to misuse it, that is /your/ fault, not mine. I write the code to work
    with valid inputs and give no promises about what will happen with any
    other input.


    Neither "// This should never be reached" nor "assert(false);" is a
    suitable alternative.

    In your opinion. I would never use that example above, it's just asking for trouble down the line.

    Also FWIW, putting separators in the hex values makes it less readable to me, not more.


    Again, that's /your/ problem.

  • From Waldek Hebisch@21:1/5 to Keith Thompson on Fri Apr 4 12:39:22 2025
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    [...]
    People should know language they use. The whole point of using
    a different language is because of some special features. So
    one should know them.
    [...]
    Whitespace actually may be quite good list separator. But using
    commas in numbers is too confusing, there are too many conventions
    used when printing numbers. My favorite is underscore for grouping,
    1_000.005 has only one sensible meaning, while 1.000,005 and 1,000.005
    can be easily confused.

    People should know the language they use.

    Unlike other parts of a language, number formatting has a tendency
    to leak to people other than developers.

    --
    Waldek Hebisch

  • From David Brown@21:1/5 to Janis Papanagnou on Fri Apr 4 15:38:52 2025
    On 04/04/2025 04:50, Janis Papanagnou wrote:
    On 03.04.2025 11:03, David Brown wrote:
    On 02/04/2025 23:43, Janis Papanagnou wrote:
    On 02.04.2025 16:59, David Brown wrote:
    [...]

    From the next version beyond C23, so far there is :

    1. Declarations in "if" and "switch" statements, like those in "for"
    loops, helps keep local variable scopes small and neat.

    Oh, I thought that would already be supported in some existing "C"
    version for the 'if'; I probably confused that with C++.


    C++17 has it.

    I guess the C committee waited until C++17 had been common enough that
    they could see if it was useful in real code, and if it lead to any
    unexpected problems in code or compilers before copying it for C.

    Really, that recent!? - I was positive that I used it long before 2017
    during the days when I did quite regularly C++ programming. - Could it
    be that some GNU compiler (C++ or "C") supported that before it became
    C++ standard?

    Janis


    To be clear, we are talking about :

    if (int x = get_next_value(); x > 10) {
    // We got a big value!
    }

    It was added in C++17. <https://en.cppreference.com/w/cpp/language/if>

    gcc did not have it as an extension, but they might have had it in
    the pre-standardised support for C++17. (Before C++17 was published,
    gcc had "-std=c++1z" to get as many proposed C++17 features as
    possible before they were standardised; gcc has similar
    "pre-standard" support for all C and C++ versions.)

  • From Scott Lurndal@21:1/5 to Muttley@DastardlyHQ.org on Fri Apr 4 13:42:04 2025
    Muttley@DastardlyHQ.org writes:
    On Thu, 03 Apr 2025 14:14:20 GMT
    scott@slp53.sl.home (Scott Lurndal) wibbled:
    Muttley@DastardlyHQ.org writes:
    On Wed, 02 Apr 2025 16:20:05 GMT
    scott@slp53.sl.home (Scott Lurndal) wibbled:
    Muttley@dastardlyhq.com writes:
    On Wed, 2 Apr 2025 16:33:46 +0100
    bart <bc@freeuk.com> gabbled:
    On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
    Meh.

    What's the problem with it? Here, tell me at a glance the magnitude of this number:

    10000000000

    And how often do you hard code values that large into a program? Almost never, I imagine, unless it's some hex value to set flags in a word.

    Every day, several times a day. 16 hex digit constants are very
    common in my work. The digit separator really helps with readability,

    Oh really? What are you doing, hardcoding password hashes?

    Modeling a very complicated 64-bit system-on-chip.

    If you're hardcoding all that you're doing it wrong. Should be in some kind of loaded config file.

    You're flailing around in the dark. Again.

  • From David Brown@21:1/5 to Janis Papanagnou on Fri Apr 4 15:34:18 2025
    On 04/04/2025 04:43, Janis Papanagnou wrote:
    On 03.04.2025 13:07, Muttley@DastardlyHQ.org wrote:
    On Thu, 3 Apr 2025 11:41:31 +0200
    David Brown <david.brown@hesbynett.no> wibbled:

    [ "unreachable()" is now standard. ]

    I can't tell you what Scott uses it for, but I have used gcc's
    __builtin_unreachable() a fair number of times in my coding. I use it
    to inform both the compiler and human readers that a path is unreachable:
    What for? The compiler doesn't care, and a human reader would probably
    prefer a meaningful comment if it's not obvious. If you're worried
    about the code accidentally going there, use an assert.

    switch (x) {
    case 1 : ...
    case 2 : ...
    case 3 : ...
    default : __builtin_unreachable();
    }

    I can also use it to inform the compiler about data :

    if ((x < 0) || (x > 10)) __builtin_unreachable();
    // x must be 0 .. 10

    And that'll do what? You want the compiler to compile in a hidden value check?

    I also don't see a point here; myself I'd write some sort of assertion
    in such cases, depending on the application case either just temporary
    for tests or a static one with sensible handling of the case.


    It can't be a static assertion, since "x" is unknown at compile time.
    Dynamic assertions cost - runtime and code space. That's fine during
    testing and debugging, or for non-critical code. But for important code
    when you know a particular fact will hold true but the compiler can't
    figure it out for itself, you don't want to pay that cost. You also
    don't want to have run-time code that is untestable - such as for
    handling a situation that can never occur.

    Usually I wrap my unreachable() calls in a macro that supports other static
    testing and optional run-time testing. But ultimately it often results
    in more efficient final code.



    Good use of __builtin_unreachable() can result in smaller and faster
    code, and possibly improved static error checking. It is related to the

    Sorry, don't see how. If you think a piece of code is unreachable then don't put it in in the first place!

    Let me give that another spin...

    In cases like above 'switch' code I have the habit to (often) provide
    a default branch that contains a fprintf(stderr, "Internal error: ..."
    or a similar logging command and some form of exit or trap/catch code.
    I want some safety for the cases where in the _evolving_ program bugs
    sneak in by an oversight.[*]

    I might enable extra checks during testing and debugging.

    But if the unreachable() is ever reached, the bug is not in that code -
    it is in the code that calls it. A message from that code could
    conceivably help locate the buggy calling code, in conjunction with a
    debugger and a call trace. But I don't want correct code to be weighed
    down by vague attempts at helping to find flaws in other code - if that
    were an acceptable way to write the code, I would not be using C in the
    first place!


    Personally I don't care about a compiler that is clever enough to warn
    me, say, about a lacking default branch but not clever enough to notice
    that it intentionally cannot be reached (say, in the context of enums).

    enums in C are not guaranteed to be a value from the corresponding
    enumeration. The compiler can't assume that "colour_to_hex" will not be
    called with a value of, say, 42, because the language says it is
    perfectly reasonable to do that. (This is different from passing a
    "bool" parameter - the compiler /can/ assume that the parameter is
    either 0 or 1.)

    I can understand that it might be of use for others, though. (There's certainly some demand if it's now standard.)

    My example was paraphrased from the C23 standard - that /is/ an
    appropriate and common use of it.

    It has existed as a gcc extension for decades, and there is an
    equivalent in MSVC and many other serious compilers. It was added to
    C++ (C++23, IIRC), along with an "assume" attribute that effectively
    combines a conditional and an unreachable(). Compilers implement "unreachable()" by treating it as undefined behaviour.


    I'm uninformed about __builtin_unreachable(), I don't know whether it
    can be overloaded, user-defined, or anything.

    It is a gcc (and clang) extension - like all "__builtin" functions or pseudofunctions. No, it cannot be overloaded or user defined.

    If that's not the case
    I'd anyway write my own "Internal error: unexpected ..." function to
    use that in all such cases for error detection and tracking of bugs.

    Sure. I typically have the call wrapped in a macro with extra features.
    But those are a distraction in showing what unreachable() does and why
    it is useful.


    Janis

    [*] This habit is actually a very old one and most probably resulting
    from an early observation with one of my first Simula programs coded
    on a mainframe that told me: "Internal error! Please contact the NCC
    in Oslo." - BTW; a nice suggestion, but useless since back these days
    there was no Email available to me and the NCC was in another country.


  • From David Brown@21:1/5 to Muttley@DastardlyHQ.org on Fri Apr 4 15:46:48 2025
    On 04/04/2025 12:28, Muttley@DastardlyHQ.org wrote:
    On Fri, 4 Apr 2025 03:25:23 -0700
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
    On 4/4/2025 2:43 AM, Muttley@DastardlyHQ.org wrote:
    On Thu, 3 Apr 2025 16:01:18 -0700
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
    On 4/2/2025 1:09 PM, Chris M. Thomasson wrote:
    On 4/2/2025 8:16 AM, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 14:12:18 -0000 (UTC)
    antispam@fricas.org (Waldek Hebisch) wibbled:
    Muttley@dastardlyhq.org wrote:
    On Wed, 2 Apr 2025 10:57:29 +0100
    bart <bc@freeuk.com> wibbled:
    On 02/04/2025 06:59, Alexis wrote:

    Thought people here might be interested in this image on Jens
    Gustedt's blog, which translates section 6.2.5, "Types", of the C23
    standard into a graph of inclusions:

        https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/


    So much for C being a 'simple' language.

    C should be left alone. It does what it needs to do for a systems language.
    Almost no one uses it for applications any more and sophisticated processing
    using complex types, for example, is far better done in C++.

    C99 has VMT (variably modified types). Thanks to VMT and complex types
    C99 can naturally do numeric computing that previously was done using
    Fortran 77. Official C++ has no VMT. C++ mechanisms look nicer,
    Officially no, but I've never come across a C++ compiler that didn't support
    them given they're all C compilers too.

    All C++ compilers are also C compilers?

    To answer my own sarcastic question: No way. :^)

    So name one that isn't. Fairly simple way to prove your point.


    Try to compile this in a C++ compiler:
    _____________
    #include <stdlib.h>
    #include <stdio.h>

    int main() {
        void *p = malloc(sizeof(int));
        int *ip = p;
        free(p);
        printf("done\n");
        return 0;
    }
    _____________


    $ cc -v
    Apple clang version 16.0.0 (clang-1600.0.26.6)
    Target: arm64-apple-darwin24.3.0
    Thread model: posix
    InstalledDir: /Library/Developer/CommandLineTools/usr/bin
    $ cc t.c
    $ a.out
    done

    What am I missing?

    You tell me mate.



    You are using a combined C and C++ compiler in C mode, and it compiles
    the C program as C. In that sense, most C++ compilers are also C
    compilers - or at least, that's how they appear to the user. (gcc has
    separate C and C++ compilers, but the "gcc" front-end driver program
    runs whichever is appropriate.) MSVC has such poor C support that it is arguably C++-only, but I know of no C++ compiler that doesn't at least
    try to be a C compiler as well.

    However, that is of no use when you say that C programmers can just
    compile their code with a C++ compiler if they want "constexpr" or
    other new features - that would be handled by "cc -x c++ t.c", forcing
    the compiler to use C++ for the code.

    It is easy to write code that is valid C23, using a new feature copied
    from C++, but which is not valid C++ :

    constexpr size_t N = sizeof(int);
    int * p = malloc(N);

  • From Muttley@DastardlyHQ.org@21:1/5 to All on Fri Apr 4 14:02:08 2025
    On Fri, 4 Apr 2025 15:46:48 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 04/04/2025 12:28, Muttley@DastardlyHQ.org wrote:
    On Fri, 4 Apr 2025 03:25:23 -0700
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
    On 4/4/2025 2:43 AM, Muttley@DastardlyHQ.org wrote:
    On Thu, 3 Apr 2025 16:01:18 -0700
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
    On 4/2/2025 1:09 PM, Chris M. Thomasson wrote:
    On 4/2/2025 8:16 AM, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 14:12:18 -0000 (UTC)
    antispam@fricas.org (Waldek Hebisch) wibbled:
    Muttley@dastardlyhq.org wrote:
    On Wed, 2 Apr 2025 10:57:29 +0100
    bart <bc@freeuk.com> wibbled:
    On 02/04/2025 06:59, Alexis wrote:

    Thought people here might be interested in this image on Jens Gustedt's
    blog, which translates section 6.2.5, "Types", of the C23 standard
    into a graph of inclusions:

        https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/


    So much for C being a 'simple' language.

    C should be left alone. It does what it needs to do for a systems language.
    Almost no one uses it for applications any more and sophisticated processing
    using complex types, for example, is far better done in C++.

    C99 has VMT (variably modified types). Thanks to VMT and complex types
    C99 can naturally do numeric computing that previously was done using
    Fortran 77. Official C++ has no VMT. C++ mechanisms look nicer,
    Officially no, but I've never come across a C++ compiler that didn't support
    them given they're all C compilers too.

    All C++ compilers are also C compilers?

    To answer my own sarcastic question: No way. :^)

    So name one that isn't. Fairly simple way to prove your point.


    Try to compile this in a C++ compiler:
    _____________
    #include <stdlib.h>
    #include <stdio.h>

    int main() {
        void *p = malloc(sizeof(int));
        int *ip = p;
        free(p);
        printf("done\n");
        return 0;
    }
    _____________


    $ cc -v
    Apple clang version 16.0.0 (clang-1600.0.26.6)
    Target: arm64-apple-darwin24.3.0
    Thread model: posix
    InstalledDir: /Library/Developer/CommandLineTools/usr/bin
    $ cc t.c
    $ a.out
    done

    What am I missing?

    You tell me mate.



    You are using a combined C and C++ compiler in C mode, and it compiles
    the C program as C. In that sense, most C++ compilers are also C

    Err yes! That's the whole point!!

  • From Muttley@DastardlyHQ.org@21:1/5 to All on Fri Apr 4 14:10:11 2025
    On Fri, 4 Apr 2025 13:39:06 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 04/04/2025 11:40, Muttley@DastardlyHQ.org wrote:
    On Thu, 3 Apr 2025 15:58:05 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    Human readers prefer clear code to comments. Comments get out of sync - code does not.

    That's not a reason for not using comments.

    It is a reason for never using a comment when you can express the same
    thing in code.

    If that's your problem, write better code - not more comments.

    Ah, the typical arrogant programmer who thinks their code is so well written that anyone can understand it and comments aren't required. Glad I don't have
    to work on anything you've written.


    Comments should say /why/ you are doing something, not /what/ you are doing.

    Rubbish. A lot of the time what is being done is just as obtuse as why.

    Except it's not unreachable, is it?

    It /is/ unreachable. That's why I wrote it.

    Really?

    int main()
    {
    colour_to_hex(10);
    return 0;
    }

    You have no idea how someone might try and use that function in the future. Just assuming they'll always pass parameters within limits is not just cretinous, it's dangerous.

    There's nothing in C to prevent you
    calling that function with a value other than defined in the enum so what
    happens if there's a bug and it hits unreachable?

    There's nothing in the English language preventing me from calling you a "very stable genius" - but I can assure you that it is not going to happen.

    Poor analogy.

    Oh that's right, it's
    "undefined", i.e. a crash or hidden bug with bugger all info.

    Welcome to the world of software development. If I specify a function
    as working for input values "red", "green", and "blue", and you choose
    to misuse it, that is /your/ fault, not mine. I write the code to work
    with valid inputs and give no promises about what will happen with any
    other input.

    It's your fault if it dies in a heap with no info or, worse, returns but does
    some random shit. Any well-written API function should do at least basic
    sanity checking on its inputs and return a fail or assert, unless it's very low level and speed is the priority, e.g. strlen().

    But then you're arrogant, so no surprise really.

    Also FWIW, putting separators in the hex values makes it less readable to me, not more.


    Again, that's /your/ problem.

    See above.

  • From bart@21:1/5 to David Brown on Fri Apr 4 15:14:45 2025
    On 04/04/2025 14:38, David Brown wrote:
    On 04/04/2025 04:50, Janis Papanagnou wrote:

    Really, that recent!? - I was positive that I used it long before 2017
    during the days when I did quite regularly C++ programming. - Could it
    be that some GNU compiler (C++ or "C") supported that before it became
    C++ standard?

    Janis


    To be clear, we are talking about :

        if (int x = get_next_value(); x > 10) {
            // We got a big value!
        }

    It was added in C++17.  <https://en.cppreference.com/w/cpp/language/if>

    gcc did not have it as an extension, but they might have had it in the pre-standardised support for C++17 (before C++17 was published, gcc had "-std=c++1z" to get as many proposed C++17 features as possible before
    they were standardised.  gcc has similar "pre-standard" support for all
    C and C++ versions).

    So this is still just a proposal for C, as it doesn't work in any current
    version of C (I should have read the above more carefully first!).

    There appear to be two new features:

    * Allowing a declaration where a conditional expresson normally goes

    * Having several things there separated with ";" (yes, here ";" is a
    separator, not a terminator).

    Someone said they weren't excited by my proposal of being able to leave
    out '#include <stdio.h>'. Well, I'm not that excited by this.

    In fact I would actively avoid such a feature, as it adds clutter to
    code. It might look at first as though it saves you having to add a
    separate declaration, until you're writing the pattern for the fourth
    time in your function and realised you now have 4 declarations for 'x'!

    And also the type of 'x' is hardcoded in four places instead of one (so
    if 'get_next_value' changes its return type, you now have more
    maintenance and a real risk of missing out one).

    (If you say that those 4 instances could call different functions so
    each 'x' is a different type, then it would be a different kind of anti-pattern.)

    Currently it would need this (it is assumed that 'x' is needed in the body):

    int x;

    if ((x = getnextvalue()) > 10) {
    // We got a big value!
    }

    It's a little cleaner. (Getting rid of those outer parameter parentheses
    would be far more useful IMO.)

    (My language can already do this stuff:

    if int x := get_next_value(); x > 10 then
    println "big"
    fi

    But it is uncommon, and it would be over-cluttery even here. However I
    don't have the local scope of 'x' that presumably is the big deal in the
    C++ feature.)

  • From David Brown@21:1/5 to Muttley@DastardlyHQ.org on Fri Apr 4 17:12:42 2025
    On 04/04/2025 16:10, Muttley@DastardlyHQ.org wrote:
    On Fri, 4 Apr 2025 13:39:06 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 04/04/2025 11:40, Muttley@DastardlyHQ.org wrote:
    On Thu, 3 Apr 2025 15:58:05 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    Human readers prefer clear code to comments. Comments get out of sync - code does not.

    That's not a reason for not using comments.

    It is a reason for never using a comment when you can express the same
    thing in code.

    If that's your problem, write better code - not more comments.

    Ah, the typical arrogant programmer who thinks their code is so well written that anyone can understand it and comments aren't required. Glad I don't have to work on anything you've written.

    Arrogance would be judging my code without having seen it. Writing code
    that is clear and does not require comments to say what it does is not arrogance - it is good coding.



    Comments should say /why/ you are doing something, not /what/ you are doing.

    Rubbish. A lot of the time what is being done is just as obtuse as why.

    That can /occasionally/ be the case. But if it happens a lot of the
    time, you are writing poor code. It's time to refactor or rename.


    Except it's not unreachable, is it?

    It /is/ unreachable. That's why I wrote it.

    Really?

    int main()
    {
    colour_to_hex(10);
    return 0;
    }

    UB. It's /your/ fault.

    Most of my code is written on the assumption that the people using it
    are not incompetent morons. They may make mistakes in their coding, but
    not like that. The code in question is unreachable because the kind of
    person who could write a call like that would not be working with my code.

    There are situations where you want to make your code handle as wide a
    range of inputs as possible, and provide error returns to help catch
    mistakes. Typically that is for boundary or interface code - such as
    libraries written for other people to use.

    For internal code within TU's or parts of a project, such things are
    just a waste of effort. They can make it significantly harder to design
    the code, since you have to figure out what behaviour is appropriate for invalid input - what should a function called "colour_to_hex" do when
    presented with an input that is not a colour? Sometimes they are
    impossible to implement - how do you check the precondition "this is a
    valid pointer" ? They limit your functionality and future expansion -
    if I have specified that inputs other than "red", "green" or "blue" give
    the result 0x00'00'00, then I can't add "purple" to the list of colours.

    There are all sorts of reasons why it is a good idea for functions to
    have pre-conditions, and for letting calls to the function without
    satisfying those pre-conditions be undefined behaviour.


    You have no idea how someone might try and use that function in the future.

    Yes, I do.

    Just assuming they'll always pass parameters within limits is not just cretinous, it's dangerous.

    Nope. It is how software development works. If you don't understand
    about function specifications, you might want to read up on some basic
    computer science.


    There's nothing in C to prevent you
    calling that function with a value other than defined in the enum so what happens if there's a bug and it hits unreachable?

    There's nothing in the English language preventing me from calling you a
    "very stable genius" - but I can assure you that it is not going to happen.

    Poor analogy.

    Oh that's right, it's
    "undefined", i.e. a crash or hidden bug with bugger all info.

    Welcome to the world of software development. If I specify a function
    as working for input values "red", "green", and "blue", and you choose
    to misuse it, that is /your/ fault, not mine. I write the code to work
    with valid inputs and give no promises about what will happen with any
    other input.

    It's your fault if it dies in a heap with no info or, worse, returns but does some random shit.

    If the caller fails to satisfy the pre-conditions of the function,
    that's the caller's fault. If the function fails to satisfy the post-conditions when called with correct pre-conditions, that's the
    function's fault. That's the contract, and that's the basis of all programming.

    Any well-written API function should do at least basic
    sanity checking on its inputs and return a fail or assert, unless it's very low level and speed is the priority, e.g. strlen().


    At clear boundaries of development responsibility - such as the public functions of a library - then it is often nice to add some extra
    checking and error feedback to help users of the library find their bugs.

    But then you're arrogant, so no surprise really.

    Also FWIW, putting separators in the hex values makes it less readable to me,
    not more.


    Again, that's /your/ problem.

    See above.


  • From Scott Lurndal@21:1/5 to Muttley@DastardlyHQ.org on Fri Apr 4 14:27:44 2025
    Muttley@DastardlyHQ.org writes:
    On Fri, 04 Apr 2025 13:42:04 GMT
    scott@slp53.sl.home (Scott Lurndal) wibbled:
    Muttley@DastardlyHQ.org writes:
    On Thu, 03 Apr 2025 14:14:20 GMT
    scott@slp53.sl.home (Scott Lurndal) wibbled:
    Muttley@DastardlyHQ.org writes:
    On Wed, 02 Apr 2025 16:20:05 GMT
    scott@slp53.sl.home (Scott Lurndal) wibbled:
    Muttley@dastardlyhq.com writes:
    On Wed, 2 Apr 2025 16:33:46 +0100
    bart <bc@freeuk.com> gabbled:
    On 02/04/2025 16:12, Muttley@DastardlyHQ.org wrote:
    Meh.

    What's the problem with it? Here, tell me at a glance the magnitude of this number:

    10000000000

    And how often do you hard-code values that large into a program? Almost never I imagine, unless it's some hex value to set flags in a word.

    Every day, several times a day. 16-hex-digit constants are very common in my work. The digit separator really helps with readability,
    Oh really? What are you doing, hardcoding password hashes?

    Modeling a very complicated 64-bit system-on-chip.

    If you're hardcoding all that you're doing it wrong. Should be in some kind of loaded config file.

    You're flailing around in the dark. Again.

    It's good practice. Feel free to not follow it.


    You know -nothing- about the code base that allows you to suggest anything.

    One does, after all, need to read those configuration files (or, more likely, generated header files).

  • From Muttley@dastardlyhq.com@21:1/5 to All on Fri Apr 4 16:12:50 2025
    On Fri, 4 Apr 2025 17:28:42 +0200
    David Brown <david.brown@hesbynett.no> gabbled:
    On 04/04/2025 16:02, Muttley@DastardlyHQ.org wrote:
    You are using a combined C and C++ compiler in C mode, and it compiles
    the C program as C. In that sense, most C++ compilers are also C

    Err yes! That's the whole point!!


    Then if we back up the thread to where you said C programmers could just
    use a C++ compiler to get new features, you were clearly wrong. Of
    course, we all knew you were wrong already, the only question was in
    what way you were wrong.

    You think having to add an extra cast is so onerous that it doesn't count
    as C any more? Any decent C dev would add it by default. Obviously I don't include you in that grouping.

  • From Muttley@dastardlyhq.com@21:1/5 to All on Fri Apr 4 16:11:31 2025
    On Fri, 4 Apr 2025 17:12:42 +0200
    David Brown <david.brown@hesbynett.no> gabbled:
    On 04/04/2025 16:10, Muttley@DastardlyHQ.org wrote:
    On Fri, 4 Apr 2025 13:39:06 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 04/04/2025 11:40, Muttley@DastardlyHQ.org wrote:
    On Thu, 3 Apr 2025 15:58:05 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    Human readers prefer clear code to comments. Comments get out of sync - code does not.

    That's not a reason for not using comments.

    It is a reason for never using a comment when you can express the same
    thing in code.

    If that's your problem, write better code - not more comments.

    Ah, the typical arrogant programmer who thinks their code is so well written that anyone can understand it and comments aren't required. Glad I don't have to work on anything you've written.

    Arrogance would be judging my code without having seen it. Writing code
    that is clear and does not require comments to say what it does is not arrogance - it is good coding.

    Any sufficiently complicated code requires comments. That's why comments exist. The fact that you think your code is so amazing that it doesn't need any says a lot about you. And no, it isn't that you're an incredible dev, more the exact opposite.

    Rubbish. A lot of the time what is being done is just as obtuse as why.

    That can /occasionally/ be the case. But if it happens a lot of the
    time, you are writing poor code. It's time to refactor or rename.

    I'm guessing you've never written any sufficiently complicated code where
    there may be numerous steps to create a single action.

    int main()
    {
    colour_to_hex(10);
    return 0;
    }

    UB. It's /your/ fault.

    Yup, you are one of *those* devs.

    Rest of self justifying blah snipped.

    Just assuming they'll always pass parameters within limits is not just
    cretinous, it's dangerous.

    Nope. It is how software development works. If you don't understand

    It really isn't. Get out of your bunker some time.

    tl;dr

  • From David Brown@21:1/5 to Muttley@DastardlyHQ.org on Fri Apr 4 17:28:42 2025
    On 04/04/2025 16:02, Muttley@DastardlyHQ.org wrote:
    On Fri, 4 Apr 2025 15:46:48 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 04/04/2025 12:28, Muttley@DastardlyHQ.org wrote:
    On Fri, 4 Apr 2025 03:25:23 -0700
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
    On 4/4/2025 2:43 AM, Muttley@DastardlyHQ.org wrote:
    On Thu, 3 Apr 2025 16:01:18 -0700
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
    On 4/2/2025 1:09 PM, Chris M. Thomasson wrote:
    On 4/2/2025 8:16 AM, Muttley@DastardlyHQ.org wrote:
    On Wed, 2 Apr 2025 14:12:18 -0000 (UTC)
    antispam@fricas.org (Waldek Hebisch) wibbled:
    Muttley@dastardlyhq.org wrote:
    On Wed, 2 Apr 2025 10:57:29 +0100
    bart <bc@freeuk.com> wibbled:
    On 02/04/2025 06:59, Alexis wrote:

    Thought people here might be interested in this image on Jens Gustedt's
    blog, which translates section 6.2.5, "Types", of the C23 standard
    into a graph of inclusions:

        https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/


    So much for C being a 'simple' language.

    C should be left alone. It does what it needs to do for a systems language.
    Almost no one uses it for applications any more and sophisticated processing
    using complex types, for example, is far better done in C++.

    C99 has VMT (variably modified types). Thanks to VMT and complex types
    C99 can naturally do numeric computing that previously was done using
    Fortran 77. Official C++ has no VMT. C++ mechanisms look nicer,
    Officially no, but I've never come across a C++ compiler that didn't support
    them given they're all C compilers too.

    All C++ compilers are also C compilers?

    To answer my own sarcastic question: No way. :^)

    So name one that isn't. Fairly simple way to prove your point.


    Try to compile this in a C++ compiler:
    _____________
    #include <stdlib.h>
    #include <stdio.h>

    int main() {
        void *p = malloc(sizeof(int));
        int *ip = p;
        free(p);
        printf("done\n");
        return 0;
    }
    _____________


    $ cc -v
    Apple clang version 16.0.0 (clang-1600.0.26.6)
    Target: arm64-apple-darwin24.3.0
    Thread model: posix
    InstalledDir: /Library/Developer/CommandLineTools/usr/bin
    $ cc t.c
    $ a.out
    done

    What am I missing?

    You tell me mate.



    You are using a combined C and C++ compiler in C mode, and it compiles
    the C program as C. In that sense, most C++ compilers are also C

    Err yes! That's the whole point!!


    Then if we back up the thread to where you said C programmers could just
    use a C++ compiler to get new features, you were clearly wrong. Of
    course, we all knew you were wrong already, the only question was in
    what way you were wrong.

  • From David Brown@21:1/5 to bart on Fri Apr 4 17:26:26 2025
    On 04/04/2025 16:14, bart wrote:
    On 04/04/2025 14:38, David Brown wrote:
    On 04/04/2025 04:50, Janis Papanagnou wrote:

    Really, that recent!? - I was positive that I used it long before 2017
    during the days when I did quite regularly C++ programming. - Could it
    be that some GNU compiler (C++ or "C") supported that before it became
    C++ standard?

    Janis


    To be clear, we are talking about :

         if (int x = get_next_value(); x > 10) {
             // We got a big value!
         }

    It was added in C++17.  <https://en.cppreference.com/w/cpp/language/if>

    gcc did not have it as an extension, but they might have had it in the
    pre-standardised support for C++17 (before C++17 was published, gcc
    had "-std=c++1z" to get as many proposed C++17 features as possible
    before they were standardised.  gcc has similar "pre-standard" support
    for all C and C++ versions).

    So this is still just a proposal for C, as it doesn't work in any current version of C (I should have read the above more carefully first!).

    Yes, that is correct. The feature has made it into the public drafts
    for the post-C23 version of C standards, but I have no idea when that
    will be complete.


    There appear to be two new features:

    * Allowing a declaration where a conditional expresson normally goes

    Yes.


    * Having several things there separated with ";" (yes, here ";" is a separator, not a terminator).

    Two things, rather than several.

    Thus:

    if (int x = get_next_value()) {
    ...
    }

    is equivalent to :

    if (int x = get_next_value(); x) {
    ...
    }

    and

    {
        int x = get_next_value();
        if (x) {
            ...
        }
    }

    <https://open-std.org/JTC1/SC22/WG14/www/docs/n3467.pdf>
    Page 163 (labelled page numbers) or page 179 (of the pdf).


    Someone said they weren't excited by my proposal of being able to leave
    out '#include <stdio.h>'. Well, I'm not that excited by this.

    OK. I would not expect you to be - you prefer your variables to be
    declared at the head of functions rather than at minimal scope. For
    other C programmers, this is similar to "for (int i = 0; i < 10; i++)"
    and will likely be quite popular once C23 support gets established.


    In fact I would actively avoid such a feature, as it adds clutter to
    code. It might look at first as though it saves you having to add a
    separate declaration, until you're writing the pattern for the fourth
    time in your function and realised you now have 4 declarations for 'x'!

    And also the type of 'x' is hardcoded in four places instead of one (so
    if 'get_next_value' changes its return type, you now have more
    maintenance and a real risk of missing out one).


    I think your counting is off.

    if (int x = get_next_value()) { ... }

    is /one/ use of "x".

    A C90 style of :

    int x;
    ...
    x = get_next_value();
    if (x) { ... }

    is /three/ uses of "x".


    Different arrangements and conditionals will have different counts, of
    course.

    But a major point of having small scopes is that the length and
    descriptive power of an identifier should be roughly proportional to its
    scope. A variable that exists throughout a sizeable function needs a
    longer and more descriptive name than a short-lived variable in a small
    scope, whose purpose is immediately obvious from a few lines of code.


    (If you say that those 4 instances could call different functions so
    each 'x' is a different type, then it would be a different kind of anti-pattern.)

    Currently it would need this (it is assumed that 'x' is needed in the
    body):

        int x;

        if ((x = getnextvalue()) > 10) {
              // We got a big value!
        }

    It's a little cleaner. (Getting rid of those outer parameter parentheses would be far more useful IMO.)

    (My language can already do this stuff:

        if int x := get_next_value(); x > 10 then
            println "big"
        fi

    But it is uncommon, and it would be over-cluttery even here. However I
    don't have the local scope of 'x' that presumably is the big deal in the
    C++ feature.)

    Yes, small scope is the point. Small scopes are better than big scopes
    (within reason).

  • From David Brown@21:1/5 to Muttley@dastardlyhq.com on Fri Apr 4 19:25:59 2025
    On 04/04/2025 18:12, Muttley@dastardlyhq.com wrote:
    On Fri, 4 Apr 2025 17:28:42 +0200
    David Brown <david.brown@hesbynett.no> gabbled:
    On 04/04/2025 16:02, Muttley@DastardlyHQ.org wrote:
    You are using a combined C and C++ compiler in C mode, and it compiles >>>> the C program as C.  In that sense, most C++ compilers are also C

    Err yes! That's the whole point!!


    Then if we back up the thread to where you said C programmers could
    just use a C++ compiler to get new features, you were clearly wrong.
    Of course, we all knew you were wrong already, the only question was
    in what way you were wrong.

    You think having to add an extra cast is so onerous that it doesn't count
    as C any more? Any decent C dev would add it by default. Obviously I don't include you in that grouping.


    Do you understand the concept of "example" ?

    Chris (not me) gave an /example/ of code that is valid C, but not valid
    C++. There are many other things that he could have picked - this is
    just a clear and simple one.

    And yes, I know that adding a cast here is easy. And I know that /some/
    C developers do that anyway - I am one of them, partly because C++ compatibility in my C code is sometimes important for my work. (I very
    rarely have dynamic memory in my code in the first place.) Equally, I
    know that many C developers do /not/ put in such a cast, and feel that
    it is a bad thing to have which could hide certain potential errors from compilers or linters.

    Other cases of C / C++ incompatibility that would cause a lot more inconvenience would be the use of things like "new" as identifiers, type-punning unions (which are UB in C++), compound literals, designated initialisers (even with C++20 support, there are plenty of differences),
    etc.

    It is certainly the case that most normal, well-written C is mostly
    compatible with C++ and with the same semantics. But "most" is not
    sufficient if you want to compile your C code as though it were C++.

  • From Waldek Hebisch@21:1/5 to Muttley@dastardlyhq.org on Fri Apr 4 18:49:12 2025
    Muttley@dastardlyhq.org wrote:
    On Wed, 2 Apr 2025 14:12:18 -0000 (UTC)
    antispam@fricas.org (Waldek Hebisch) wibbled:
    Muttley@dastardlyhq.org wrote:
    On Wed, 2 Apr 2025 10:57:29 +0100
    bart <bc@freeuk.com> wibbled:
    On 02/04/2025 06:59, Alexis wrote:

    Thought people here might be interested in this image on Jens Gustedt's
    blog, which translates section 6.2.5, "Types", of the C23 standard
    into a graph of inclusions:

    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/

    So much for C being a 'simple' language.

    C should be left alone. It does what it needs to do for a systems language.
    Almost no one uses it for applications any more and sophisticated processing
    using complex types, for example, is far better done in C++.

    C99 has VMT (variably modified types). Thanks to VMT and complex types
    C99 can naturally do numeric computing that previously was done using
    Fortran 77. Official C++ has no VMT. C++ mechanisms look nicer,

    Officially no, but I've never come across a C++ compiler that didn't support them given they're all C compilers too.

    I myself do not use Microsoft compilers, but I was repeatedly told
    that they do not support VMT.

    --
    Waldek Hebisch

  • From James Kuyper@21:1/5 to Muttley@dastardlyhq.org on Fri Apr 4 21:08:36 2025
    Muttley@dastardlyhq.org wrote:
    On Wed, 2 Apr 2025 14:12:18 -0000 (UTC)
    antispam@fricas.org (Waldek Hebisch) wibbled:
    ...
    C99 has VMT (variably modified types). Thanks to VMT and complex types
    C99 can naturally do numeric computing that previously was done using Fortran 77. Official C++ has no VMT. C++ mechanisms look nicer,

    Officially no, but I've never come across a C++ compiler that didn't support them given they're all C compilers too.

    There exist many programs that can compile either C code or C++ code, depending either upon the extension of the file name or explicit command
    line options to determine which language's rules to apply. That doesn't qualify. Do you know of any compiler that accepts VMTs when compiling
    according to C++ rules? If so, please provide an example. It will help
    if the code has some features that are well-formed code in C++, but
    syntax errors in C, to make it clear that C++'s rules are being implemented.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Keith Thompson on Sat Apr 5 17:36:15 2025
    On 05/04/2025 04:15, Keith Thompson wrote:
    James Kuyper <jameskuyper@alumni.caltech.edu> writes:
    Muttley@dastardlyhq.org wrote:
    On Wed, 2 Apr 2025 14:12:18 -0000 (UTC)
    antispam@fricas.org (Waldek Hebisch) wibbled:
    ...
    C99 has VMTs (variably modified types). Thanks to VMTs and complex types, C99 can naturally do numeric computing that previously was done using
    Fortran 77. Official C++ has no VMTs. C++ mechanisms look nicer,

    Officially no, but I've never come across a C++ compiler that didn't support
    them given they're all C compilers too.

    There exist many programs that can compile either C code or C++ code,
    depending either upon the extension of the file name or explicit command
    line options to determine which language's rules to apply. That doesn't
    qualify. Do you know of any compiler that accepts VMTs when compiling
    according to C++ rules? If so, please provide an example. It will help
    if the code has some features that are well-formed code in C++, but
    syntax errors in C, to make it clear that C++'s rules are being implemented.

    g++ and clang++ both do so:

    int main() {
        class foo { };
        int len = 42;
        int vla[len];
    }

    Both warn about the variable length array when invoked with "-pedantic"
    and reject it with "-pedantic-errors".


    There are also things that are VLA's in C, but ordinary arrays in C++,
    and acceptable in both languages :

    int main() {
        const int len = 42;
        int arr[len];
    }


    Microsoft's C and C++ compilers do not support VLAs. (Their C compiler
    never supported C99, and VLAs were made optional in C11, so that's not a conformance issue.)


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Keith Thompson on Sat Apr 5 17:34:13 2025
    On 04/04/2025 21:18, Keith Thompson wrote:
    David Brown <david.brown@hesbynett.no> writes:
    [...]
    It is easy to write code that is valid C23, using a new feature copied
    from C++, but which is not valid C++ :

    constexpr size_t N = sizeof(int);
    int * p = malloc(N);

    It's much easier than that.

    int class;

    Every C compiler will accept that. Every C++ compiler will reject
    it. (I think the standard only requires a diagnostic, which can
    be non-fatal, but I'd be surprised to see a C or C++ compiler that
    generates an object file after encountering a syntax error).

    Muttley seems to think that because, for example, "gcc -c foo.c"
    will compile C code and "gcc -c foo.cpp" will compile C++ code,
    the C and C++ compilers are the same compiler. In fact they're
    distinct frontends with shared backend code, invoked differently
    based on the source file suffix. (And "g++" is recommended for C++
    code, but let's not get into that.)

    For the same compiler to compile both C and C++, assuming you don't unreasonably stretch the meaning of "same compiler", you'd have to
    have a parser that conditionally recognizes "class" as a keyword or
    as an identifier, among a huge number of other differences between
    the two grammars. As far as I know, nobody does that.

    Mr. Flibble's universal compiler? :-)


    You and I know he's wrong. Arguing with him is a waste of everyone's
    time.


    Yes, it seems that way. Sometimes he makes posts that are worth
    answering or correcting, but the threads with him inevitably go downhill.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Philipp Klaus Krause@21:1/5 to All on Sat Apr 5 19:56:33 2025
    Am 02.04.25 um 11:57 schrieb bart:
    * Where are the fixed-width types from stdint.h?

    Same as for size_t, etc: They don't exist. Those are not separate types,
    just typedefs to some other types. E.g. uint16_t could be typedef'ed to unsigned int.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Michael S on Sun Apr 6 03:31:05 2025
    Michael S <already5chosen@yahoo.com> writes:

    On Thu, 3 Apr 2025 15:05:59 +0200
    Opus <ifonly@youknew.org> wrote:

    For instance, if I'm not mistaken,
    designated initializers, which are very handy and have been available
    in C since C99 (25 years ago) have appeared only in C++20, about 20
    years later.

    AFAIK, even C++23 provides only a subset of C99 designated initializers.
    The biggest difference is that in C++ initializers have to be
    specified in the same order as declarations for respective fields.

    More importantly, C++ does not accept compound literals at all.
    (Disclaimer: to the best of my understanding. I have given up
    trying to follow what is happening in C++.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Keith Thompson on Mon Apr 7 04:09:54 2025
    On 2025-04-07, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
    [...]
    Not always practical. A good example is the type size_t. If a
    function takes an argument of type size_t, then the symbol size_t
    should be defined, no matter which header the function is being
    declared in.

    Why? One can use a type without a name for such type.

    Convenience and existing practice. Sure, an implementation of
    <string.h> could provide a declaration of memcpy() without making
    size_t visible, but what would be the point?

    There is a point to such a discipline; you get ultra squeaky clean
    modules whose header files define only their contribution to
    the program, and do not transitively reveal any of the identifiers
    from their dependencies.

    In large programs, this clean practice can help prevent
    clashes.

    Now memcpy is a bad example.

    But imagine some large API. Your program uses, say, 5% of
    the API. Somewhere in the API is a utility function you're
    not interested in. It does something involving the API,
    and some type you don't care about.

    Why should the type be revealed to your translation unit?

    Using memcpy as an example, it could be declared as

    void *memcpy(void * restrict d, const void * restrict s,
                 __size_t size);

    size_t is not revealed, but a private type __size_t.

    To get __size_t, some private header is included <sys/priv_types.h>
    or whatever.

    The <stddef.h> header just includes that one and typedefs __size_t
    size_t (if it were to work that way).

    A system vendor which provides many APIs and has the privilege of being
    able to use the __* space could do things like this.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Janis Papanagnou on Mon Apr 7 06:46:25 2025
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:

    On 03.04.2025 16:58, David Brown wrote:

    [...]

    I know people can use pre-processor conditional compilation based on
    __STDC_VERSION__ to complain if code is compiled with an unexpected or
    unsupported standard, but few people outside of library header authors
    actually do that. I'd really like :

    #pragma STDC VERSION C17

    to force the compiler to use the equivalent of "-std=c17
    -pedantic-errors" in gcc.

    (I understand the wish to have that #pragma supported.)

    It never will be, for reasons that are quite obvious.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Lawrence D'Oliveiro on Mon Apr 7 19:02:34 2025
    On 04/04/2025 04:01, Lawrence D'Oliveiro wrote:
    On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:

    Here, tell me at a glance the magnitude of
    this number:

    10000000000

    #define THOUSAND 1000
    #define MILLION (THOUSAND * THOUSAND)
    #define BILLION (THOUSAND * MILLION)

    uint64 num = 10 * BILLION;

    Much easier to figure out, don’t you think?

    Try 20 * BILLION; it will overflow if not careful.

    I'd normally write '20 billion' outside of C, since I use such numbers,
    with lots of zeros, constantly when writing test code.

    But when it isn't all zeros, or the base isn't 10, then numeric
    separators are better.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to bart on Mon Apr 7 21:12:16 2025
    On Mon, 7 Apr 2025 19:02:34 +0100
    bart <bc@freeuk.com> wrote:

    On 04/04/2025 04:01, Lawrence D'Oliveiro wrote:
    On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:

    Here, tell me at a glance the magnitude of
    this number:

    10000000000

    #define THOUSAND 1000
    #define MILLION (THOUSAND * THOUSAND)
    #define BILLION (THOUSAND * MILLION)

    uint64 num = 10 * BILLION;

    Much easier to figure out, don’t you think?

    Try 20 * BILLION; it will overflow if not careful.

    I'd normally write '20 billion' outside of C, since I use such
    numbers, with lots of zeros, constantly when writing test code.

    But when it isn't all zeros, or the base isn't 10, then numeric
    separators are better.


    Isn't it "20 milliards" in British English?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From candycanearter07@21:1/5 to Lawrence D'Oliveiro on Mon Apr 7 17:30:03 2025
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 03:01 this Friday (GMT):
    On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:

    Here, tell me at a glance the magnitude of
    this number:

    10000000000

    #define THOUSAND 1000
    #define MILLION (THOUSAND * THOUSAND)
    #define BILLION (THOUSAND * MILLION)

    uint64 num = 10 * BILLION;

    Much easier to figure out, don’t you think?


    I used to do a bit of code for a codebase that did that with SECONDS and MINUTES since (almost) every "time" variable was in milliseconds, and it
    was very nice. That is just my subjective opinion, though. :P

    it was more like
    #define SECONDS *10
    #define MINUTES SECONDS*60
    #define HOURS MINUTES*60

    , though. Probably would be more notably annoying to debug in weird
    cases if the whole language/codebase wasn't borked spaghetti :D
    --
    user <candycane> is generated from /dev/urandom

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to Keith Thompson on Mon Apr 7 18:31:03 2025
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
    [...]
    Not always practical. A good example is the type size_t. If a
    function takes an argument of type size_t, then the symbol size_t
    should be defined, no matter which header the function is being
    declared in.

    Why? One can use a type without a name for such type.

    Convenience and existing practice. Sure, an implementation of
    <string.h> could provide a declaration of memcpy() without making
    size_t visible, but what would be the point?

    Cleanliness of definitions? Consistency? Fragment that you
    replaced by [...] contained a proposal:

    Every identifier should be declared in exactly one home header,
    and no other header should provide that definition.

    That would be pretty clean and consistent rule: if you need some
    standard symbol, then you should include corresponding header.

    Tim claimed that this is not practical. Clearly the C standard changed
    previous practice about headers, so existing practice is _not_
    a practical problem with adopting such a proposal. With the current
    standard and practice one frequently needs symbols from several
    headers, so "convenience" is also not a practical problem with
    such a proposal. People not interested in a clean name space can
    define private "all.h" which includes all standard C headers
    and possibly other things that they need, so for them overhead
    is close to zero.

    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Waldek Hebisch on Mon Apr 7 14:35:52 2025
    On 4/3/25 18:00, Waldek Hebisch wrote:
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
    ...
    Not always practical. A good example is the type size_t. If a
    function takes an argument of type size_t, then the symbol size_t
    should be defined, no matter which header the function is being
    declared in.

    Why? One can use a type without a name for such type.

    How would you declare a pointer to a function type such that it is
    compatible with such a function's type?
    When a variable is needed to store a value that would be passed as the
    size_t argument to such a function, I would (in the absence of any
    specific reason to do otherwise) want to declare that object to have the
    type size_t.
    Why should I have to #include a different header just because I want to
    do these things?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Michael S on Mon Apr 7 19:18:48 2025
    On 07/04/2025 19:12, Michael S wrote:
    On Mon, 7 Apr 2025 19:02:34 +0100
    bart <bc@freeuk.com> wrote:

    On 04/04/2025 04:01, Lawrence D'Oliveiro wrote:
    On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:

    Here, tell me at a glance the magnitude of
    this number:

    10000000000

    #define THOUSAND 1000
    #define MILLION (THOUSAND * THOUSAND)
    #define BILLION (THOUSAND * MILLION)

    uint64 num = 10 * BILLION;

    Much easier to figure out, don’t you think?

    Try 20 * BILLION; it will overflow if not careful.

    (Actually both 10/20 billion will overflow u32; I was thinking of 20
    billion billion overflowing u64.)

    I'd normally write '20 billion' outside of C, since I use such
    numbers, with lots of zeros, constantly when writing test code.

    But when it isn't all zeros, or the base isn't 10, then numeric
    separators are better.


    Is not it "20 milliards" in British English?

    We (UK) now use 'billion' for 1E9; in the past it meant 1E12.

    'Milliardo' is Italian for 'billion'; perhaps in a few other languages too.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From G@21:1/5 to bart on Mon Apr 7 18:41:56 2025
    bart <bc@freeuk.com> wrote:
    On 07/04/2025 19:12, Michael S wrote:
    On Mon, 7 Apr 2025 19:02:34 +0100
    bart <bc@freeuk.com> wrote:

    On 04/04/2025 04:01, Lawrence D'Oliveiro wrote:
    On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:

    Here, tell me at a glance the magnitude of
    this number:

    10000000000

    #define THOUSAND 1000
    #define MILLION (THOUSAND * THOUSAND)
    #define BILLION (THOUSAND * MILLION)

    uint64 num = 10 * BILLION;

    Much easier to figure out, don’t you think?

    Try 20 * BILLION; it will overflow if not careful.

    (Actually both 10/20 billion will overflow u32; I was thinking of 20
    billion billion overflowing u64.)

    I'd normally write '20 billion' outside of C, since I use such
    numbers, with lots of zeros, constantly when writing test code.

    But when it isn't all zeros, or the base isn't 10, then numeric
    separators are better.


    Is not it "20 milliards" in British English?

    We (UK) now use 'billion' for 1E9; in the past it meant 1E12.

    'Milliardo' is Italian for 'billion'; perhaps in a few other languages too.

    It's "miliardo", like "milione" (1e6), but there is also "bilione" (1e12). All with only one "l".

    G

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Waldek Hebisch on Mon Apr 7 18:55:55 2025
    antispam@fricas.org (Waldek Hebisch) writes:
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
    [...]
    Not always practical. A good example is the type size_t. If a
    function takes an argument of type size_t, then the symbol size_t
    should be defined, no matter which header the function is being
    declared in.

    Why? One can use a type without a name for such type.

    Convenience and existing practice. Sure, an implementation of
    <string.h> could provide a declaration of memcpy() without making
    size_t visible, but what would be the point?

    Cleanliness of definitions? Consistency? Fragment that you
    replaced by [...] contained a proposal:

    Every identifier should be declared in exactly one home header,
    and no other header should provide that definition.

    That would be pretty clean and consistent rule: if you need some
    standard symbol, then you should include corresponding header.

    Unfortunately, there was a considerable amount of existing
    code that didn't include the corresponding header. Particularly
    around the time that size_t was introduced.

    Forcing code to include <stddef.h> in order to use the interfaces
    in <string.h> or <memory.h> would have broken existing applications
    and impaired portability of existing applications. Hence the
    standards organizations such as X/Open and POSIX (now joined)
    chose to allow implementations to implicitly include <stddef.h>
    in cases where it preserved backward compatibility. Which
    generally required the implementation to use macros to prevent
    compilation errors if <stddef.h> is subsequently included.

    From SVR4/mk stddef.h, for example:

    #ifndef _SIZE_T
    # define _SIZE_T
    typedef unsigned int size_t;
    #endif


    As for the C standard, I wasn't involved in that so I won't
    comment other than noting that C without either the POSIX
    standard or some other operating system interface isn't
    particularly useful or interesting.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to Michael S on Mon Apr 7 20:29:17 2025
    On 07/04/2025 19:12, Michael S wrote:
    On Mon, 7 Apr 2025 19:02:34 +0100
    bart <bc@freeuk.com> wrote:

    On 04/04/2025 04:01, Lawrence D'Oliveiro wrote:
    On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:

    Here, tell me at a glance the magnitude of
    this number:

    10000000000

    #define THOUSAND 1000
    #define MILLION (THOUSAND * THOUSAND)
    #define BILLION (THOUSAND * MILLION)

    uint64 num = 10 * BILLION;

    Much easier to figure out, don’t you think?

    Try 20 * BILLION; it will overflow if not careful.

    I'd normally write '20 billion' outside of C, since I use such
    numbers, with lots of zeros, constantly when writing test code.

    But when it isn't all zeros, or the base isn't 10, then numeric
    separators are better.


    Is not it "20 milliards" in British English?

    Yes. The British use

    1 - one
    10 - ten
    100 - hundred
    1 000 - thousand
    10 000 - myriad
    100 000 - pool
    1 000 000 - million
    1 000 000 000 - milliard
    1 000 000 000 000 - billion
    1 000 000 000 000 000 - billiard
    1 000 000 000 000 000 000 - trillion
    1 000 000 000 000 000 000 000 - trilliard
    1 000 000 000 000 000 000 000 000 - snooker
    except for journalists, politicians, stockbrokers, and anyone
    else who spends far too much time talking to Americans.

    The biggest number you're likely to need in the real world is 100 tredecimillion, which is approximately the number of atoms in the
    known universe.

    ObC: I am currently roughing out a proposal for the ISO folks to
    introduce the 288-bit long long long long long long long long
    long int, or universe_t for short, so that programs will be able
    to keep track of those 100 tredecimillion atoms. Each universe_t
    will be able to count atoms in almost five million observable
    universes, which should be enough to be going on with.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to All on Mon Apr 7 21:49:02 2025
    On 07.04.2025 19:30, candycanearter07 wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 03:01 this Friday (GMT):
    On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:

    Here, tell me at a glance the magnitude of
    this number:

    10000000000

    #define THOUSAND 1000
    #define MILLION (THOUSAND * THOUSAND)
    #define BILLION (THOUSAND * MILLION)

    uint64 num = 10 * BILLION;

    Much easier to figure out, don’t you think?

    Yes, where appropriate that's fine.

    But that pattern doesn't work for numbers like 299792458 [m/s]
    (i.e. in the general case, as opposed to primitive scalers).

    And it's also not good for international languages (different
    to US American and the like), where "billion" means something
    else (namely 10^12, and not 10^9), so that its semantics isn't
    unambiguously clear in the first place.

    And sometimes you have large numeric literals and don't want
    to add such CPP ballast just for readability; especially if
    there is (or would be) a standard number grouping for literals
    available.

    So it's generally a gain to have a grouping syntax available.



    I used to do a bit of code for a codebase that did that with SECONDS and MINUTES since (almost) every "time" variable was in milliseconds, and it
    was very nice. That is just my subjective opinion, though. :P

    That actually depends on what you do. Milliseconds was (for our
    applications) often either not good enough a resolution, or, on
    a larger scale, unnecessary or reducing the available range.

    Quasi "norming" an integral value to represent a milliseconds unit
    I consider especially bad, though not as bad as units of 0.01s
    (that I think I have met in JavaScript). I also seem to recall that
    MS DOS had such arbitrary sub-seconds units, but I'm not quite sure
    about that any more.

    A better unit is, IMO, a second resolution (which at least is a
    basic physical unit) and a separate integer for sub-seconds. (An
    older Unix I used supported the not uncommon nanoseconds attribute
    but where only milli- and micro-seconds were used, the rest was 0.)

    Or have an abstraction layer that hides all implementation details
    and don't have to care any more about implementation details of
    such "time types".


    it was more like
    #define SECONDS *10
    #define MINUTES SECONDS*60
    #define HOURS MINUTES*60

    , though. Probably would be more notably annoying to debug in weird
    cases if the whole language/codebase wasnt borked spagetti :D

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to bart on Mon Apr 7 22:14:20 2025
    On 07.04.2025 20:18, bart wrote:
    On 07/04/2025 19:12, Michael S wrote:
    On Mon, 7 Apr 2025 19:02:34 +0100
    bart <bc@freeuk.com> wrote:

    On 04/04/2025 04:01, Lawrence D'Oliveiro wrote:
    On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:

    Here, tell me at a glance the magnitude of
    this number:

    10000000000

    #define THOUSAND 1000
    #define MILLION (THOUSAND * THOUSAND)
    #define BILLION (THOUSAND * MILLION)

    uint64 num = 10 * BILLION;

    Much easier to figure out, don’t you think?

    Try 20 * BILLION; it will overflow if not careful.

    (Actually both 10/20 billion will overflow u32; I was thinking of 20
    billion billion overflowing u64.)

    I'd normally write '20 billion' outside of C, since I use such
    numbers, with lots of zeros, constantly when writing test code.

    But when it isn't all zeros, or the base isn't 10, then numeric
    separators are better.


    Is not it "20 milliards" in British English?

    We (UK) now use 'billion' for 1E9; in the past it meant 1E12.

    'Milliardo' is Italian for 'billion'; perhaps in a few other languages too.

    "In a few other languages"? - That was not my impression;
    and a quick look into Wikipedia seems to support that.

    The global map[*] is interesting!

    (Read the articles for the details, the historic base, and
    especially what's standard in countries, and why the common
    standard is in some cases like GB not used primarily today.)

    Janis

    https://upload.wikimedia.org/wikipedia/commons/7/7c/World_map_of_long_and_short_scales.svg

    Green - long scale
    Blue - short scale
    Turquoise - both, long and short
    Yellow - other scales

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Janis Papanagnou on Mon Apr 7 23:49:50 2025
    On Mon, 7 Apr 2025 22:14:20 +0200
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:

    On 07.04.2025 20:18, bart wrote:
    On 07/04/2025 19:12, Michael S wrote:
    On Mon, 7 Apr 2025 19:02:34 +0100
    bart <bc@freeuk.com> wrote:

    On 04/04/2025 04:01, Lawrence D'Oliveiro wrote:
    On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:

    Here, tell me at a glance the magnitude of
    this number:

    10000000000

    #define THOUSAND 1000
    #define MILLION (THOUSAND * THOUSAND)
    #define BILLION (THOUSAND * MILLION)

    uint64 num = 10 * BILLION;

    Much easier to figure out, don’t you think?

    Try 20 * BILLION; it will overflow if not careful.

    (Actually both 10/20 billion will overflow u32; I was thinking of 20 billion billion overflowing u64.)

    I'd normally write '20 billion' outside of C, since I use such
    numbers, with lots of zeros, constantly when writing test code.

    But when it isn't all zeros, or the base isn't 10, then numeric
    separators are better.


    Is not it "20 milliards" in British English?

    We (UK) now use 'billion' for 1E9; in the past it meant 1E12.

    'Milliardo' is Italian for 'billion'; perhaps in a few other
    languages too.

    "In a few other languages"? - That was not my impression;
    and a quick look into Wikipedia seems to support that.

    The global map[*] is interesting!

    (Read the articles for the details, the historic base, and
    especially what's standard in countries, and why the common
    standard is in some cases like GB not used primarily today.)

    Janis

    https://upload.wikimedia.org/wikipedia/commons/7/7c/World_map_of_long_and_short_scales.svg

    Green - long scale
    Blue - short scale
    Turquoise - both, long and short
    Yellow - other scales


    I think that this map misses one important detail: the majority of "blue" non-English-speaking countries spell 1e9 as milliard/miliard.
    I.e. for that specific scale they are aligned with "green" countries.
    If you don't believe me, try google translate.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Richard Heathfield on Mon Apr 7 22:30:15 2025
    On 07.04.2025 21:29, Richard Heathfield wrote:

    ObC: I am currently roughing out a proposal for the ISO folks to
    introduce the 288-bit long long long long long long long long long int,
    or universe_t for short, so that programs will be able to keep track of
    those 100 tredecimillion atoms. Each universe_t will be able to count
    atoms in almost five million observable universes, which should be
    enough to be going on with.

    Thus artificially restricting the foundational research not only of
    theoretical physics but also of pure mathematics and philosophy? ;-)
    Mind that "640kB is enough" experience! :-)

    More seriously: there are already tools and libraries that support
    "arbitrary" precision where necessary. Not in an 'int' type, though.

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to Janis Papanagnou on Mon Apr 7 22:26:46 2025
    On 07/04/2025 21:30, Janis Papanagnou wrote:
    On 07.04.2025 21:29, Richard Heathfield wrote:

    ObC: I am currently roughing out a proposal for the ISO folks to
    introduce the 288-bit long long long long long long long long long int,
    or universe_t for short, so that programs will be able to keep track of
    those 100 tredecimillion atoms. Each universe_t will be able to count
    atoms in almost five million observable universes, which should be
    enough to be going on with.

    Thus artificially restricting the foundational research not only of theoretical physics but also of pure mathematics and philosophy? ;-)
    Mind that "640kB is enough" experience! :-)

    A 640kB integer type would probably suffice for now, though.

    More seriously; there's already tools and libraries that support
    "arbitrary" precision where necessary. Not in an 'int' type, though.

    Yes, I know; about a thousand years ago I wrote one, so here's a
    40-bit prime just for you: 761072582689

    (I was trying for 256 bits, but it was taking forever; I didn't
    say it was a /good/ library...)

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Michael S on Mon Apr 7 23:18:53 2025
    On 07.04.2025 22:49, Michael S wrote:
    On Mon, 7 Apr 2025 22:14:20 +0200
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:

    On 07.04.2025 20:18, bart wrote:
    On 07/04/2025 19:12, Michael S wrote:
    On Mon, 7 Apr 2025 19:02:34 +0100
    bart <bc@freeuk.com> wrote:

    On 04/04/2025 04:01, Lawrence D'Oliveiro wrote:
    On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:

    Here, tell me at a glance the magnitude of
    this number:

    10000000000

    #define THOUSAND 1000
    #define MILLION (THOUSAND * THOUSAND)
    #define BILLION (THOUSAND * MILLION)

    uint64 num = 10 * BILLION;

    Much easier to figure out, don’t you think?

    Try 20 * BILLION; it will overflow if not careful.

    (Actually both 10/20 billion will overflow u32; I was thinking of 20
    billion billion overflowing u64.)

    I'd normally write '20 billion' outside of C, since I use such
    numbers, with lots of zeros, constantly when writing test code.

    But when it isn't all zeros, or the base isn't 10, then numeric
    separators are better.


    Is not it "20 milliards" in British English?

    We (UK) now use 'billion' for 1E9; in the past it meant 1E12.

    'Milliardo' is Italian for 'billion'; perhaps in a few other
    languages too.

    "In a few other languages"? - That was not my impression;
    and a quick look into Wikipedia seems to support that.

    The global map[*] is interesting!

    (Read the articles for the details, the historic base, and
    especially what's standard in countries, and why the common
    standard is in some cases like GB not used primarily today.)

    Janis

    https://upload.wikimedia.org/wikipedia/commons/7/7c/World_map_of_long_and_short_scales.svg

    Green - long scale
    Blue - short scale
    Turquoise - both, long and short
    Yellow - other scales


    I think that this map misses one important detail - the majority of "blue" non-English-speaking countries spell 1e9 as milliard/miliard.
    I.e. for that specific scale they are aligned with "green" countries.
    If you don't believe me, try google translate.

    I cannot tell whether google translate is sufficiently authoritative.

    But if it's as you say (and if I understand you correctly) that would
    just support my impression that it's not only "few other languages"
    that would use the Long Scale system.

    Also mind that there's standards and common practice in countries and
    both not always match.

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Janis Papanagnou on Mon Apr 7 22:46:49 2025
    On 07/04/2025 21:14, Janis Papanagnou wrote:
    On 07.04.2025 20:18, bart wrote:
    On 07/04/2025 19:12, Michael S wrote:
    On Mon, 7 Apr 2025 19:02:34 +0100
    bart <bc@freeuk.com> wrote:

    On 04/04/2025 04:01, Lawrence D'Oliveiro wrote:
    On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:

    Here, tell me at a glance the magnitude of
    this number:

    10000000000

    #define THOUSAND 1000
    #define MILLION (THOUSAND * THOUSAND)
    #define BILLION (THOUSAND * MILLION)

    uint64 num = 10 * BILLION;

    Much easier to figure out, don’t you think?

    Try 20 * BILLION; it will overflow if not careful.

    (Actually both 10/20 billion will overflow u32; I was thinking of 20
    billion billion overflowing u64.)

    I'd normally write '20 billion' outside of C, since I use such
    numbers, with lots of zeros, constantly when writing test code.

    But when it isn't all zeros, or the base isn't 10, then numeric
    separators are better.


    Is not it "20 milliards" in British English?

    We (UK) now use 'billion' for 1E9; in the past it meant 1E12.

    'Milliardo' is Italian for 'billion'; perhaps in a few other languages too.

    "In a few other languages"? - That was not my impression;
    and a quick look into Wikipedia seems to support that.

    The global map[*] is interesting!

    (Read the articles for the details, the historic base, and
    especially what's standard in countries, and why the common
    standard is in some cases like GB not used primarily today.)


    https://upload.wikimedia.org/wikipedia/commons/7/7c/World_map_of_long_and_short_scales.svg

    I'd never heard of short and long scales. The full article is here:

    https://en.wikipedia.org/wiki/Long_and_short_scales

    I only knew about the old and new meanings of 'billion' in the UK, its
    US meaning, and the use of 'milliard' (however it is spelt, since I'm
    only familiar with it in speech), in Italian.

    (In source code, it would also be useful to use 1e9 or 1e12,
    unfortunately those normally yield floating point values. I can't do
    much about that in C, but I will see what can be done with my own stuff.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to bart on Mon Apr 7 23:57:08 2025
    On 07/04/2025 22:46, bart wrote:

    On 04/04/2025 04:01, Lawrence D'Oliveiro wrote:
    On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:
    Here, tell me at a glance the magnitude of
    this number:

            10000000000

           #define THOUSAND 1000
           #define MILLION (THOUSAND * THOUSAND)
           #define BILLION (THOUSAND * MILLION)

           uint64 num = 10 * BILLION;

    Much easier to figure out, don’t you think?

    (In source code, it would also be useful to use 1e9 or 1e12,
    unfortunately those normally yield floating point values. I can't do
    much about that in C, but I will see what can be done with my own stuff.)

    Since numbers using exponents without also using decimal points are rare
    in my code base, I've decided to experiment with numbers like 1e6 being
    integer constants rather than floats. (This is in my language.)

    I've done the change in one compiler and will see how well it works. It
    will only be for base 10.

    I'd find it useful in C too, but in my compiler, I'd need to find a way
    of making it optional, so that it works that way in programs I write,
    and is conforming for anything else.

    (Or I maybe I'll just say to hell with it, and make that change anyway.
    I'm fed up with laboriously counting zeros.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Tue Apr 8 02:29:55 2025
    On Mon, 7 Apr 2025 21:12:16 +0300, Michael S wrote:

    Is not it "20 milliards" in British English?

    The one and only time I can recall that word being used in the last N
    decades, in English, was in the name “Milliard Gargantu-Brain” ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Waldek Hebisch on Tue Apr 8 02:36:33 2025
    On Fri, 4 Apr 2025 03:05:23 -0000 (UTC), Waldek Hebisch wrote:

    There is quite a lot of programming languages that have whitespace
    separated lists. Most of them have "Algol like" syntax.

    POP-2 or POP-11, from what I recall:

    [a b c]

    is a list literal, while

    [% "a", "b", "c" %]

    is a list expression.

    PostScript:

    [2 2 add dup 8 mul dup 2 div]

    is a long-winded way of writing

    [4 32 16.0]

    In Lisp, whether an S-expression is meant to be taken literally or not
    comes down to a single quote mark:

    '(+ 2 2)

    is the literal list “(+ 2 2)”, while

    (+ 2 2)

    is 4.

    Which of these would you consider to have “Algol like” syntax? Only the first one.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to James Kuyper on Tue Apr 8 02:39:42 2025
    On Fri, 4 Apr 2025 21:08:36 -0400, James Kuyper wrote:

    There exist many programs that can compile either C code or C++ code, depending either upon the extension of the file name or explicit command
    line options to determine which language's rules to apply.

    But note that the *nix tradition is for the “cc” command to invoke nothing more than a “driver” program, which processes each input file according to its extension by spawning additional processes running the actual file-specific processors. And these processors include the linker, for
    combining object files created by the various compilers into an actual executable (or perhaps a shared library).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Janis Papanagnou on Mon Apr 7 22:37:02 2025
    On 4/7/25 17:18, Janis Papanagnou wrote:
    On 07.04.2025 22:49, Michael S wrote:
    On Mon, 7 Apr 2025 22:14:20 +0200
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote:
    ...
    "In a few other languages"? - That was not my impression;
    and a quick look into Wikipedia seems to support that.

    The global map[*] is interesting!

    (Read the articles for the details, the historic base, and
    especially what's standard in countries, and why the common
    standard is in some cases like GB not used primarily today.)

    Janis

    https://upload.wikimedia.org/wikipedia/commons/7/7c/World_map_of_long_and_short_scales.svg

    Green - long scale
    Blue - short scale
    Turquoise - both, long and short
    Yellow - other scales


    I think that this map misses one important detail - majority of "blue"
    non-English-speaking countries spell 1e9 as milliard/miliard.
    I.e. for that specific scale they are aligned with "green" countries.
    If you don't believe me, try google translate.

    I cannot tell whether google translate is sufficiently authoritative.

    I would consider Wiktionary to be more reliable for that kind of thing.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Tue Apr 8 02:27:50 2025
    On Mon, 7 Apr 2025 17:30:03 -0000 (UTC), candycanearter07 wrote:

    I used to do a bit of code for a codebase that did that with SECONDS and MINUTES since (almost) every "time" variable was in milliseconds, and it
    was very nice. That is just my subjective opinion, though. :P

    That is the best way to handle any kind of unit conversions: define a
    single conversion factor for each unit, that is used for conversion to (by multiplication) or from (by division) a common canonical unit for that dimension.

    (“All irregularities will be handled by the forces controlling each dimension.”)

    This particularly applies to angles, where people endlessly argue over
    whether radians or degrees are better. Radians are more natural for
    expressing calculations, but degrees are more comprehensible to humans.
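    The single-factor scheme described here can be sketched in C as
    follows; the names (MS_PER_SECOND, RAD_PER_DEG, and so on) are
    illustrative, not taken from any particular codebase:

```c
#include <math.h>

/* One conversion factor per unit, each relative to a canonical unit.
   Time: the canonical unit here is the millisecond. */
enum { MS_PER_SECOND = 1000, MS_PER_MINUTE = 60 * MS_PER_SECOND };

/* Angles: the canonical unit here is the radian. */
static const double RAD_PER_DEG = 3.14159265358979323846 / 180.0;

/* Convert *to* the canonical unit by multiplying by the factor,
   and *from* it by dividing. */
static double deg_to_rad(double deg) { return deg * RAD_PER_DEG; }
static double rad_to_deg(double rad) { return rad / RAD_PER_DEG; }
```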

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Lawrence D'Oliveiro on Mon Apr 7 23:26:26 2025
    On 4/7/25 22:39, Lawrence D'Oliveiro wrote:
    On Fri, 4 Apr 2025 21:08:36 -0400, James Kuyper wrote:

    There exist many programs that can compile either C code or C++ code,
    depending either upon the extension of the file name or explicit command
    line options to determine which language's rules to apply.

    But note that the *nix tradition is for the “cc” command to invoke nothing
    more than a “driver” program, which processes each input file according to
    its extension by spawning additional processes running the actual file-specific processors. And these processors include the linker, for
    combining object files created by the various compilers into an actual executable (or perhaps a shared library).

    My point was that it doesn't matter if the same program can process C++
    code, and also accepts VMTs when processing C code. The question was
    whether it accepts VMTs when processing C++ code. Whether it executes
    some other program to actually process the code, or does the processing
    itself, is irrelevant to that issue.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Tue Apr 8 10:25:19 2025
    On 07/04/2025 23:46, bart wrote:
    On 07/04/2025 21:14, Janis Papanagnou wrote:
    On 07.04.2025 20:18, bart wrote:
    On 07/04/2025 19:12, Michael S wrote:
    On Mon, 7 Apr 2025 19:02:34 +0100
    bart <bc@freeuk.com> wrote:

    On 04/04/2025 04:01, Lawrence D'Oliveiro wrote:
    On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:
    Here, tell me at a glance the magnitude of
    this number:

            10000000000

           #define THOUSAND 1000
           #define MILLION (THOUSAND * THOUSAND)
           #define BILLION (THOUSAND * MILLION)

           uint64 num = 10 * BILLION;

    Much easier to figure out, don’t you think?

    Try 20 * BILLION; it will overflow if not careful.

    (Actually both 10/20 billion will overflow u32; I was thinking of 20
    billion billion overflowing u64.)

    I'd normally write '20 billion' outside of C, since I use such
    numbers, with lots of zeros, constantly when writing test code.

    But when it isn't all zeros, or the base isn't 10, then numeric
    separators are better.


    Is not it "20 milliards" in British English?

    We (UK) now use 'billion' for 1E9; in the past it meant 1E12.

    'Milliardo' is Italian for 'billion'; perhaps in a few other
    languages too.

    "In a few other languages"? - That was not my impression;
    and a quick look into Wikipedia seems to support that.

    The global map[*] is interesting!

    (Read the articles for the details, the historic base, and
    especially what's standard in countries, and why the common
    standard is in some cases like GB not used primarily today.)


    https://upload.wikimedia.org/wikipedia/commons/7/7c/World_map_of_long_and_short_scales.svg

    I'd never heard of short and long scales. The full article is here:

    https://en.wikipedia.org/wiki/Long_and_short_scales

    I only knew about the old and new meanings of 'billion' in the UK, its
    US meaning, and the use of 'milliard' (however it is spelt, since I'm
    only familiar with it in speech), in Italian.

    (In source code, it would also be useful to use 1e9 or 1e12,
    unfortunately those normally yield floating point values. I can't do
    much about that in C, but I will see what can be done with my own stuff.)


    In Norwegian, we use the long scale - "million, milliard, billion,
    billiard, trillion, trilliard". The spelling is the same as in English
    (I don't know about after "trilliard"), but the pronunciation is a
    little different.

    I think it is safest to say "thousand million", "million million", or
    use SI prefixes or scientific notation, which are often more appropriate
    in the context. (The exception is for things like national debts or the
    price of new fighter jets - and then the numbers are so meaninglessly
    big that being three orders of magnitude out does not change how you
    feel about them!)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Keith Thompson on Tue Apr 8 11:45:45 2025
    On Mon, 07 Apr 2025 16:01:06 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    bart <bc@freeuk.com> writes:
    [...]
    Since numbers using exponents without also using decimal points are
    rare in my code base, I've decided to experiment with numbers like
    1e6 being integer constants rather than floats. (This is in my
    language.)

    You might want to look at Ada for existing practice.

    In C, a constant with either a decimal point or an exponent is floating-point. In Ada, 1.0e6 is floating-point and 1e6 is an
    integer. Of course this isn't very helpful if you want to represent
    numbers with a lot of non-zero digits; for that, you need digit
    separators.

    [...]


    The same in VHDL.
    I don't know about everybody, but for me it is a constant source of
    syntax errors.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to James Kuyper on Tue Apr 8 10:39:37 2025
    On 07/04/2025 20:35, James Kuyper wrote:
    On 4/3/25 18:00, Waldek Hebisch wrote:
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
    ...
    Not always practical. A good example is the type size_t. If a
    function takes an argument of type size_t, then the symbol size_t
    should be defined, no matter which header the function is being
    declared in.

    Why? One can use a type without a name for such type.

    How would you declare a pointer to a function type such that it is
    compatible with such a function's type?

    The C23 "typeof" operator lets you work with the type of a value or
    expression. So you first have an object or value of type "size_t",
    that's all you need. Unfortunately, there are no convenient literal
    suffixes that could be used here.

    When a variable is needed to store a value that would be passed as the
    size_t argument to such a function, I would (in the absence of any
    specific reason to do otherwise) want to declare that object to have the
    type size_t.
    Why should I have to #include a different header just because I want to
    do these things?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Richard Heathfield on Tue Apr 8 10:29:13 2025
    On 07/04/2025 21:29, Richard Heathfield wrote:
    On 07/04/2025 19:12, Michael S wrote:
    On Mon, 7 Apr 2025 19:02:34 +0100
    bart <bc@freeuk.com> wrote:

    On 04/04/2025 04:01, Lawrence D'Oliveiro wrote:
    On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:
    Here, tell me at a glance the magnitude of
    this number:

           10000000000

          #define THOUSAND 1000
          #define MILLION (THOUSAND * THOUSAND)
          #define BILLION (THOUSAND * MILLION)

          uint64 num = 10 * BILLION;

    Much easier to figure out, don’t you think?

    Try 20 * BILLION; it will overflow if not careful.

    I'd normally write '20 billion' outside of C, since I use such
    numbers, with lots of zeros, constantly when writing test code.

    But when it isn't all zeros, or the base isn't 10, then numeric
    separators are better.


    Is not it "20 milliards" in British English?

    Yes. The British use

    1 - one
    10 - ten
    100 - hundred
    1 000 - thousand
    10 000 - myriad
    100 000 - pool
    1 000 000 - million
    1 000 000 000 - milliard
    1 000 000 000 000 - billion
    1 000 000 000 000 000 - billiard
    1 000 000 000 000 000 000 - trillion
    1 000 000 000 000 000 000 000 - trilliard
    1 000 000 000 000 000 000 000 000 - snooker
    except for journalists, politicians, stockbrokers, and anyone else who
    spends far too much time talking to Americans.

    The biggest number you're likely to need in the real world is 100 tredecimillion, which is approximately the number of atoms in the known universe.

    ObC: I am currently roughing out a proposal for the ISO folks to
    introduce the 288-bit long long long long long long long long long int,
    or universe_t for short, so that programs will be able to keep track of
    those 100 tredecimillion atoms. Each universe_t will be able to count
    atoms in almost five million observable universes, which should be
    enough to be going on with.


    I remember reading a proposal to generalise the C integer type names,
    allowing for things like "short long int" for 24-bit, "short short short
    int" for 4-bit, and so on. It was not accepted into the standards -
    perhaps because of the date (first of April).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Tue Apr 8 10:54:12 2025
    On Tue, 8 Apr 2025 10:29:13 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 07/04/2025 21:29, Richard Heathfield wrote:
    Is not it "20 milliards" in British English?

    Yes. The British use

    No we don't.


    1 - one
    10 - ten
    100 - hundred
    1 000 - thousand
    10 000 - myriad
    100 000 - pool
    1 000 000 - million
    1 000 000 000 - milliard

    Is this a late april fool?

    Absolutely no one in Britain says myriad for 10K, pool (wtf?) for 100K, or milliard, apart from maybe a history of science professor, and you'd probably be hard pressed to find many people who'd even heard of them in that context.
    The only reason I knew milliard is because I can speak (sort of) French, and that's the French billion.

    except for journalists, politicians, stockbrokers, and anyone else who
    spends far too much time talking to Americans.

    Pfft. The standard mathematical million-billion-trillion sequence has been
    used in the UK since at least when I was at school, almost 40 years ago.

    Where do you get your information from, The Disney Guide to the UK?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Keith Thompson on Tue Apr 8 11:37:31 2025
    On 08/04/2025 00:01, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    Since numbers using exponents without also using decimal points are
    rare in my code base, I've decided to experiment with numbers like 1e6
    being integer constants rather than floats. (This is in my language.)

    You might want to look at Ada for existing practice.

    In C, a constant with either a decimal point or an exponent is floating-point. In Ada, 1.0e6 is floating-point and 1e6 is an integer.
    Of course this isn't very helpful if you want to represent numbers with
    a lot of non-zero digits; for that, you need digit separators.

    The context here /is/ lots of zeros. In the real world, usage such as
    such as 'N million' or 'N billion' typically scales N by a million or
    billion.

    But it's good to know I'm copying existing practice from Ada.

    (However it turns out that trying to add it to my C compiler is
    pointless; '1e9' needs to be an integer billion across multiple compilers.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to David Brown on Tue Apr 8 13:00:04 2025
    On 08/04/2025 12:39, David Brown wrote:

    <snip>

    "myriad" means 10,000, coming directly from the Greek.  But the
    word is usually used to mean "a great many" or "more than you can
    count".  (It's like the use of "40" in the Bible - I guess the
    ancient Greeks were better at counting than the ancient Canaanites.)

    Yes, 'myriad' is widely known, and I know several people living
    in Britain who do use it in that sense when the occasion arises,
    but I wouldn't necessarily expect people in my killfile to be
    aware of the word, let alone know anyone who uses it.

    You are unlikely to find the word "myriad" meaning specifically
    10,000 outside of translated Classical Greek or Latin literature,
    or in old military history contexts.

    I have not heard of the word "pool" meaning 100,000.  But then, I
    am not as old as Richard :-)

    In India and other parts of Asia, 100,000 has a specific name
    such as "lakh" - written as 1,00,000 (it's not just the digit
    separator that varies between country, but also where the
    separators are placed).

    My first draft did indeed give 'lakh', but in the light of
    'billiard' the totally fabricated 'pool' and 'snooker' had a
    (very light) touch of potential for humour. For anyone who hasn't
    heard of humour, it was very big in the Sixties and to this day
    still makes occasional appearances for old times' sake.

    The UK officially (as a government standard) used the "long
    scale" (billion = 10 ^ 12) until 1974.  Unofficially, it was
    still sometimes used long afterwards - equally, the "short scale"
    (billion = 10 ^ 9) was often used long before that.  So the short
    scale is the norm in the UK now (except for politicians talking
    about national debt - "billions" doesn't sound as bad as
    "trillions"), but Richard may have learned the long scale at school.

    I don't recall ever being taught the long scale explicitly, and
    whether I glarked it at school or in my reading at home I know
    not, but had the short scale been in common use I would have
    soaked it up instead of the long scale.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Muttley@DastardlyHQ.org on Tue Apr 8 13:39:14 2025
    On 08/04/2025 12:54, Muttley@DastardlyHQ.org wrote:
    On Tue, 8 Apr 2025 10:29:13 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 07/04/2025 21:29, Richard Heathfield wrote:
    Is not it "20 milliards" in British English?

    Yes. The British use

    No we don't.


    1 - one
    10 - ten
    100 - hundred
    1 000 - thousand
    10 000 - myriad
    100 000 - pool
    1 000 000 - million
    1 000 000 000 - milliard

    Is this a late april fool?

    Absolutely no one in britain says myriad for 10K , pool (wtf?) for 100K or milliard apart from maybe history of science professor and you'd probably be hard pressed to find many people who'd even heard of them in that context. The only reason I knew milliard is because I can speak (sort of) french and thats the french billion.


    "myriad" means 10,000, coming directly from the Greek. But the word is
    usually used to mean "a great many" or "more than you can count". (It's
    like the use of "40" in the Bible - I guess the ancient Greeks were
    better at counting than the ancient Canaanites.)

    You are unlikely to find the word "myriad" meaning specifically 10,000
    outside of translated Classical Greek or Latin literature, or in old
    military history contexts.

    I have not heard of the word "pool" meaning 100,000. But then, I am not
    as old as Richard :-)

    In India and other parts of Asia, 100,000 has a specific name such as
    "lakh" - written as 1,00,000 (it's not just the digit separator that
    varies between country, but also where the separators are placed).


    except for journalists, politicians, stockbrokers, and anyone else who
    spends far too much time talking to Americans.

    Pfft. The standard mathematical million-billion-trillion sequence has been used in the UK since at least I was at school almost 40 years ago.


    The UK officially (as a government standard) used the "long scale"
    (billion = 10 ^ 12) until 1974. Unofficially, it was still sometimes
    used long afterwards - equally, the "short scale" (billion = 10 ^ 9) was
    often used long before that. So the short scale is the norm in the UK
    now (except for politicians talking about national debt - "billions"
    doesn't sound as bad as "trillions"), but Richard may have learned the
    long scale at school.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Muttley@DastardlyHQ.org on Tue Apr 8 14:20:10 2025
    On Tue, 8 Apr 2025 10:54:12 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:


    Where do you get your information from, The Disney Guide to the UK?



    Googling around suggests that in the UK the replacement of the word
    'milliard' (meaning 1e9) by the word 'billion', and the simultaneous
    withering of billion=1e12, started in the early 1950s and was
    completed by official verdicts in the mid-1970s.

    So John Couch Adams used milliard. Not sure about Alan Turing, but it
    seems probable.


    The question to which I found no answer by googling is when Americans themselves decided that billion means 1e9.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Philipp Klaus Krause on Tue Apr 8 14:32:59 2025
    On 05/04/2025 18:56, Philipp Klaus Krause wrote:
    On 02.04.25 at 11:57, bart wrote:
    * Where are the fixed-width types from stdint.h?

    Same as for size_t, etc: They don't exist. Those are not separate types,
    just typedefs to some other types. E.g. uint16_t could be typedef'ed to unsigned int.


    This is the point I made a few weeks back, but others insisted they were
    part of C:


    Me:
    stdint.h et al are just ungainly bolt-ons, not fully supported by the
    language.

    Keith Thompson:

    No, they're fully supported by the language. They've been in the ISO standard since 1999.


    This is an exchange posted on 20-Mar-2025 at 19:10 GMT (header shows
    'Thu, 20 Mar 2025 12:10:22 -0700')

    Clearly, they're not quite as fully supported as short, int etc; they
    are usually just aliases. But that needn't stop them being shown on such
    a chart.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Richard Heathfield on Tue Apr 8 16:55:47 2025
    On 08/04/2025 14:00, Richard Heathfield wrote:

    My first draft did indeed give 'lakh', but in the light of 'billiard'
    the totally fabricated 'pool' and 'snooker' had a (very light) touch of potential for humour. For anyone who hasn't heard of humour, it was very
    big in the Sixties and to this day still makes occasional appearances
    for old times' sake.


    One thing I miss from all these online etymological dictionaries is the reasoning behind some of the origins of words. For example, the game of
    "pool" is so-called because players bet on it by putting their money in
    a pile - a "pool". This form of managing a bet comes from an old French
    game "poule" (meaning "chicken") where players put their bet in a bowl.
    Then a chicken is released in the room, and players throw stones at it -
    the first to knock over the chicken, wins the pool!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Tue Apr 8 16:57:17 2025
    On 08/04/2025 15:32, bart wrote:
    On 05/04/2025 18:56, Philipp Klaus Krause wrote:
    On 02.04.25 at 11:57, bart wrote:
    * Where are the fixed-width types from stdint.h?

    Same as for size_t, etc: They don't exist. Those are not separate
    types, just typedefs to some other types. E.g. uint16_t could be
    typedef'ed to unsigned int.


    This is the point I made a few weeks back, but others insisted they were
    part of C:


    Me:
    stdint.h et al are just ungainly bolt-ons, not fully supported by the
    language.

    Keith Thompson:

    No, they're fully supported by the language.  They've been in the ISO standard since 1999.


    This is an exchange posted on 20-Mar-2025 at 19:10 GMT (header shows
    'Thu, 20 Mar 2025 12:10:22 -0700')

    Clearly, they're not quite as fully supported as short, int etc; they
    are usually just aliases. But that needn't stop them being shown on such
    a chart.

    Standard aliases are part of the language standard, and therefore
    standard and fully supported parts of the language.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to bart on Tue Apr 8 16:08:51 2025
    bart <bc@freeuk.com> writes:
    On 08/04/2025 15:57, David Brown wrote:
    On 08/04/2025 15:32, bart wrote:


    [1] Maybe _t names are reserved, but this:

    typedef struct {int x,y;} uint64_t;


    7.34.15 Integer types <stdint.h>

    Typedef names beginning with int or uint and ending with _t are potentially reserved identifiers and may be added to the types defined in the <stdint.h> header. Macro names beginning with INT or UINT and ending with _MAX, _MIN, _WIDTH, or _C are potentially reserved identifiers and may
    be added to the macros defined in the <stdint.h> header.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Tue Apr 8 16:47:07 2025
    On 08/04/2025 15:57, David Brown wrote:
    On 08/04/2025 15:32, bart wrote:
    On 05/04/2025 18:56, Philipp Klaus Krause wrote:
    Am 02.04.25 um 11:57 schrieb bart:
    * Where are the fixed-width types from stdint.h?

    Same as for size_t, etc: They don't exist. Those are not separate
    types, just typedefs to some other types. E.g. uint16_t could be
    typedef'ed to unsigned int.


    This is the point I made a few weeks back, but others insisted they
    were part of C:


    Me:
    stdint.h et al are just ungainly bolt-ons, not fully supported by the language.

    Keith Thompson:

    No, they're fully supported by the language. They've been in the ISO standard since 1999.

    This is an exchange posted on 20-Mar-2025 at 19:10 GMT (header shows
    'Thu, 20 Mar 2025 12:10:22 -0700')

    Clearly, they're not quite as fully supported as short, int etc; they
    are usually just aliases. But that needn't stop them being shown on
    such a chart.

    Standard aliases are part of the language standard, and therefore
    standard and fully supported parts of the language.


    So, should they have been on that chart?

    and fully supported parts of the language.

    Differences between 'unsigned long long int' and 'uint64_t' up to C23:

                                    uint64_t    unsigned long long int
    ------------------------------  --------    ----------------------
    Works without header            No          Yes
    Literal suffix                  No          Yes (ULL etc)
    Dedicated printf format         No          Yes (%llu)
    Dedicated scanf format          No          Yes (whatever that might be)
    sizeof() might not be 8         No          Maybe
    Reserved word[1]                No          Yes
    Outside lexical scope[2]        No          Yes
    Incompatible with
      unsigned long int             No          Yes


    [1] Maybe _t names are reserved, but this:

    typedef struct {int x,y;} uint64_t;

    compiles cleanly with:

    gcc -std=c23 -Wall -Wextra -pedantic

    This means that they could legally be used for any user-defined types.

    [2] This is possible with uint64_t:

    #include <stdint.h>

    int main() {
    typedef struct {int x,y;} uint64_t;
    }

    You can shadow the names from stdint.h.

    So I'd dispute they are as fully supported and 'special' as built-in types.

  • From Janis Papanagnou@21:1/5 to David Brown on Tue Apr 8 19:25:05 2025
    On 08.04.2025 13:39, David Brown wrote:

    "myriad" means 10,000, coming directly from the Greek. But the word is usually used to mean "a great many" or "more than you can count". [...]

    You are unlikely to find the word "myriad" meaning specifically 10,000 outside of translated Classical Greek or Latin literature, or in old
    military history contexts.

    I cannot tell how the _standalone_ word "myriad" is used there
    but in Greece it's still ubiquitously used as part of the Greek word
    for "million" (εκατομμύριο = "a hundred myriads") in the contemporary
    Greek language.

    I think the entity '10000' is still used in some countries in Asia,
    of course with their own (non-western) typography.

    Janis

  • From Michael S@21:1/5 to David Brown on Tue Apr 8 20:53:45 2025
    On Tue, 8 Apr 2025 13:39:14 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 08/04/2025 12:54, Muttley@DastardlyHQ.org wrote:
    On Tue, 8 Apr 2025 10:29:13 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 07/04/2025 21:29, Richard Heathfield wrote:
    Is not it "20 milliards" in British English?

    Yes. The British use

    No we don't.


    1 - one
    10 - ten
    100 - hundred
    1 000 - thousand
    10 000 - myriad
    100 000 - pool
    1 000 000 - million
    1 000 000 000 - milliard

    Is this a late April fool?

    Absolutely no one in Britain says myriad for 10K, pool (wtf?) for
    100K or milliard, apart from maybe a history of science professor, and
    you'd probably be hard pressed to find many people who'd even heard
    of them in that context. The only reason I knew milliard is because
    I can speak (sort of) French and that's the French billion.


    "myriad" means 10,000, coming directly from the Greek. But the word
    is usually used to mean "a great many" or "more than you can count".
    (It's like the use of "40" in the Bible - I guess the ancient Greeks
    were better at counting than the ancient Canaanites.)



    In the Bible?
    Or, maybe, in imprecise translations of the Bible that confuse the
    word רבבה that means 10000 with the remotely similar word ארבעים that means
    40?




    רבבה

  • From Tim Rentsch@21:1/5 to Scott Lurndal on Tue Apr 8 11:05:59 2025
    scott@slp53.sl.home (Scott Lurndal) writes:

    bart <bc@freeuk.com> writes:

    On 08/04/2025 15:57, David Brown wrote:

    On 08/04/2025 15:32, bart wrote:

    [1] Maybe _t names are reserved, but this:

    typedef struct {int x,y;} uint64_t;

    7.34.15 Integer types <stdint.h>

    Typedef names beginning with int or uint and ending with _t are potentially reserved identifiers and may be added to the types defined in the <stdint.h> header. Macro names beginning with INT or UINT and ending with _MAX, _MIN, _WIDTH, or _C are potentially reserved identifiers and may
    be added to the macros defined in the <stdint.h> header.

    It would be better if it had been explained that these statements
    are not currently among the requirements given by the C standard,
    but only advisories listed in the "Future Directions" section.
    Names like uint64_t are not reserved, even in the not-yet-ratified
    draft C standard.

  • From James Kuyper@21:1/5 to David Brown on Tue Apr 8 14:14:36 2025
    On 4/8/25 04:39, David Brown wrote:
    On 07/04/2025 20:35, James Kuyper wrote:
    On 4/3/25 18:00, Waldek Hebisch wrote:
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
    ...
    Not always practical. A good example is the type size_t. If a
    function takes an argument of type size_t, then the symbol size_t
    should be defined, no matter which header the function is being
    declared in.

    Why? One can use a type without a name for such type.

    How would you declare a pointer to a function type such that it is
    compatible with such a function's type?

    The C23 "typeof" operator lets you work with the type of a value or expression. So you first have an object or value of type "size_t",
    that's all you need. Unfortunately, there are no convenient literal
    suffixes that could be used here.

    I can see how that would work with the return type of a function, but
    how would it apply to an argument of a function?

  • From Michael S@21:1/5 to James Kuyper on Tue Apr 8 21:29:57 2025
    On Tue, 8 Apr 2025 14:25:52 -0400
    James Kuyper <jameskuyper@alumni.caltech.edu> wrote:

    On 4/8/25 07:20, Michael S wrote:
    On Tue, 8 Apr 2025 10:54:12 -0000 (UTC)
    ...
    The question to which I found no answer by googling is when
    Americans themselves decided that billion means 1e9.

    I generally find Wikipedia a more useful source than Google for this
    kind of information.

    The previously referenced Wikipedia article on long scales versus
    short scales asserts that "The short scale was never widespread
    before its general adoption in the United States. It has been taught
    in American schools since the early 1800s"

    It cites " Smith, David Eugene (1953) [first published 1925]. History
    of Mathematics. Vol. II. Courier Dover Publications. p. 81. ISBN 978-0-486-20430-7." as the source for that claim.

    It also says "The first American appearance of the short scale value
    of billion as 10^9 was published in the Greenwood Book of 1729, written anonymously by Prof. Isaac Greenwood of Harvard College.", citing the
    same reference. This does not contradict the first statement - it
    might have taken 70 years to become widespread from the first time it appeared.

    And finally, it says "In the United States, the short scale has been
    taught in school since the early 19th century. It is therefore used exclusively"

    Citing "Cambridge Dictionaries Online. Cambridge University Press.
    Retrieved 21 August 2011." as its source for that statement.

    Thank you. That's a lot earlier than I was imagining.

  • From James Kuyper@21:1/5 to Janis Papanagnou on Tue Apr 8 14:32:35 2025
    On 4/8/25 13:25, Janis Papanagnou wrote:
    ...
    I think the entity '10000' is still used in some countries in Asia,
    of course with their own (non-western) typography.

    Yes, the Chinese word for 10000 is 萬. That makes it a little tricky to translate expressions meaning numbers that large or larger between
    English and Chinese.

  • From James Kuyper@21:1/5 to David Brown on Tue Apr 8 14:34:44 2025
    On 4/8/25 07:39, David Brown wrote:
    On 08/04/2025 12:54, Muttley@DastardlyHQ.org wrote:
    ...
    Pfft. The standard mathematical million-billion-trillion sequence has been used in the UK since at least when I was at school almost 40 years ago.


    The UK officially (as a government standard) used the "long scale"
    (billion = 10 ^ 12) until 1974.
    That was before he started school, so as far as he's concerned, it
    doesn't count.

  • From James Kuyper@21:1/5 to Michael S on Tue Apr 8 14:25:52 2025
    On 4/8/25 07:20, Michael S wrote:
    On Tue, 8 Apr 2025 10:54:12 -0000 (UTC)
    ...
    The question to which I found no answer by googling is when Americans themselves decided that billion means 1e9.

    I generally find Wikipedia a more useful source than Google for this
    kind of information.

    The previously referenced Wikipedia article on long scales versus short
    scales asserts that "The short scale was never widespread before its
    general adoption in the United States. It has been taught in American
    schools since the early 1800s"

    It cites " Smith, David Eugene (1953) [first published 1925]. History of Mathematics. Vol. II. Courier Dover Publications. p. 81. ISBN 978-0-486-20430-7." as the source for that claim.

    It also says "The first American appearance of the short scale value of
    billion as 10^9 was published in the Greenwood Book of 1729, written
    anonymously by Prof. Isaac Greenwood of Harvard College.", citing the
    same reference. This does not contradict the first statement - it might
    have taken 70 years to become widespread from the first time it appeared.

    And finally, it says "In the United States, the short scale has been
    taught in school since the early 19th century. It is therefore used exclusively"

    Citing "Cambridge Dictionaries Online. Cambridge University Press.
    Retrieved 21 August 2011." as its source for that statement.

  • From candycanearter07@21:1/5 to Janis Papanagnou on Tue Apr 8 18:40:03 2025
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote at 19:49 this Monday (GMT):
    On 07.04.2025 19:30, candycanearter07 wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 03:01 this Friday (GMT):
    On Wed, 2 Apr 2025 16:33:46 +0100, bart wrote:

    Here, tell me at a glance the magnitude of
    this number:

    10000000000

    #define THOUSAND 1000
    #define MILLION (THOUSAND * THOUSAND)
    #define BILLION (THOUSAND * MILLION)

    uint64 num = 10 * BILLION;

    Much easier to figure out, don’t you think?

    Yes, where appropriate that's fine.

    But that pattern doesn't work for numbers like 299792458 [m/s]
    (i.e. in the general case, as opposed to primitive scalers).

    And it's also not good for international languages (different
    to US American and the like), where "billion" means something
    else (namely 10^12, and not 10^9), so that its semantics isn't
    unambiguously clear in the first place.

    I'd also say the difference between Megabytes (MB) and MiB is VERY easy
    to mess up, but it's close enough for displaying to the end user until
    you get into the really big files.

    And sometimes you have large numeric literals and don't want
    to add such CPP ballast just for readability; especially if
    there is (or would be) a standard number grouping for literals
    available.

    So it's generally a gain to have a grouping syntax available.



    I used to do a bit of code for a codebase that did that with SECONDS and
    MINUTES since (almost) every "time" variable was in milliseconds, and it
    was very nice. That is just my subjective opinion, though. :P

    That actually depends on what you do. Milliseconds was (for our
    applications) often either not good enough a resolution, or, on
    a larger scale, unnecessary or reducing the available range.

    Quasi-"norming" an integral value to represent a milliseconds unit
    I consider especially bad, though not as bad as units of 0.01s
    (which I think I have met in JavaScript). I also seem to recall that
    MS-DOS had such arbitrary sub-second units, but I'm not quite sure
    about that any more.

    Well, yeah. Using doubles for timescales is always going to be messy.

    A better unit is, IMO, a second resolution (which at least is a
    basic physical unit) and a separate integer for sub-seconds. (An
    older Unix I used supported the not uncommon nanoseconds attribute
    but only milli- and micro-seconds were used; the rest was 0.)

    Or have an abstraction layer that hides all implementation details,
    so you don't have to care any more about the internals of
    such "time types".

    I mean it was abstracted, everything timewise is measured as ticks
    against world.time (10 ticks per second), so ye

    it was more like
    #define SECONDS *10
    #define MINUTES SECONDS*60
    #define HOURS MINUTES*60

    , though. Probably would be more notably annoying to debug in weird
    cases if the whole language/codebase wasn't borked spaghetti :D

    Janis



    --
    user <candycane> is generated from /dev/urandom

  • From Michael S@21:1/5 to Michael S on Tue Apr 8 22:30:48 2025
    On Tue, 8 Apr 2025 20:53:45 +0300
    Michael S <already5chosen@yahoo.com> wrote:

    On Tue, 8 Apr 2025 13:39:14 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 08/04/2025 12:54, Muttley@DastardlyHQ.org wrote:
    On Tue, 8 Apr 2025 10:29:13 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 07/04/2025 21:29, Richard Heathfield wrote:
    Is not it "20 milliards" in British English?

    Yes. The British use

    No we don't.


    1 - one
    10 - ten
    100 - hundred
    1 000 - thousand
    10 000 - myriad
    100 000 - pool
    1 000 000 - million
    1 000 000 000 - milliard

    Is this a late April fool?

    Absolutely no one in Britain says myriad for 10K, pool (wtf?) for
    100K or milliard, apart from maybe a history of science professor, and
    you'd probably be hard pressed to find many people who'd even
    heard of them in that context. The only reason I knew milliard is
    because I can speak (sort of) French and that's the French billion.


    "myriad" means 10,000, coming directly from the Greek. But the word
    is usually used to mean "a great many" or "more than you can count".
    (It's like the use of "40" in the Bible - I guess the ancient Greeks
    were better at counting than the ancient Canaanites.)



    In the Bible?
    Or, maybe, in imprecise translations of the Bible that confuse the
    word רבבה that means 10000 with the remotely similar word ארבעים that means 40?


    I looked at translation of the first appearance of the word רבבה in KJB. https://www.kingjamesbibleonline.org/Genesis-24-60/
    It is translated as "million", i.e. incorrect, but in the other direction.







  • From bart@21:1/5 to Keith Thompson on Tue Apr 8 23:34:10 2025
    On 08/04/2025 22:46, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:

    Clearly, they're not quite as fully supported as short, int etc; they
    are usually just aliases. But that needn't stop them being shown on
    such a chart.

    Apparently the author of the chart chose to include types that are
    defined by the core language, not by the library.

    So here you're finally admitting they are a different rank.

    I think that was a
    perfectly valid choice. Adding all the types specified in the library
    would make the chart far too big and not much more informative.

    So there is a place for 'extended integer types', '_BitInt',
    '_Decimal128' and 'long double _Complex', which people could spend years
    coding in C and never use, but not for the equivalents of these everyday
    types:

    i8 i16 i32 i64
    u8 u16 u32 u64

    which are the core integer types of languages like C#, D, Go, Rust, Zig,
    Java (signed only), Nim and Odin.

    To me it is astounding that such fundamental machine types (and C is one
    of the languages closest to hardware) should be omitted from such a diagram.

    But clearly you have a different view, even after insisting they are a
    fully integrated part of the language.


    If you don't like it, make your own chart.


    Actually I can't quite see the purpose of this chart, why it has to be
    so complicated (even with bits missing) or who it is for.

  • From James Kuyper@21:1/5 to bart on Tue Apr 8 22:47:09 2025
    bart <bc@freeuk.com> writes:
    On 08/04/2025 22:46, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    Apparently the author of the chart chose to include types that are
    defined by the core language, not by the library.

    So here you're finally admitting they are a different rank.

    The core language and the library are equal in rank, both being
    different parts of any implementation of C.

    Actually I can't quite see the purpose of this chart, why it has to be
    so complicated (even with bits missing) or who it is for.

    Every category shown on that chart has rules that apply only to
    types in that category. The chart is for people who have not yet
    memorized the relationships shown, and who need to understand the rules
    that apply to each category. That clearly doesn't apply to you, since understanding the rules would make it more difficult for you to complain
    about them.

  • From Tim Rentsch@21:1/5 to James Kuyper on Tue Apr 8 21:36:27 2025
    James Kuyper <jameskuyper@alumni.caltech.edu> writes:

    bart <bc@freeuk.com> writes:

    On 08/04/2025 22:46, Keith Thompson wrote:

    bart <bc@freeuk.com> writes:

    Apparently the author of the chart chose to include types
    that are

    defined by the core language, not by the library.

    So here you're finally admitting they are a different rank.

    The core language and the library are equal in rank, both being
    different parts of any implementation of C.

    This statement isn't exactly right. Some parts of the standard
    library are available only in hosted implementations, and not in
    freestanding implementations.

  • From Michael S@21:1/5 to Keith Thompson on Wed Apr 9 10:55:49 2025
    On Tue, 08 Apr 2025 23:12:13 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    James Kuyper <jameskuyper@alumni.caltech.edu> writes:
    bart <bc@freeuk.com> writes:
    On 08/04/2025 22:46, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    Apparently the author of the chart chose to include types
    that are

    defined by the core language, not by the library.

    So here you're finally admitting they are a different rank.

    The core language and the library are equal in rank, both being
    different parts of any implementation of C.

    This statement isn't exactly right. Some parts of the standard
    library are available only in hosted implementations, and not in freestanding implementations.

    True. Also, freestanding implementations must support <stddef.h>
    and <stdint.h>, among several other headers.


    Maybe in some formal sense the headers and library routines that are
    mandatory for freestanding implementations belong to the same rank as
    the core language. But in practice there is an obvious difference: in
    the first case, name clashes are avoidable (sometimes with the toothless
    threat that they can happen in the future), and in the second case they
    are unavoidable.

  • From Michael S@21:1/5 to bart on Wed Apr 9 11:20:43 2025
    On Tue, 8 Apr 2025 16:47:07 +0100
    bart <bc@freeuk.com> wrote:

    On 08/04/2025 15:57, David Brown wrote:
    On 08/04/2025 15:32, bart wrote:
    On 05/04/2025 18:56, Philipp Klaus Krause wrote:
    Am 02.04.25 um 11:57 schrieb bart:
    * Where are the fixed-width types from stdint.h?

    Same as for size_t, etc: They don't exist. Those are not separate
    types, just typedefs to some other types. E.g. uint16_t could be
    typedef'ed to unsigned int.


    This is the point I made a few weeks back, but others insisted
    they were part of C:


    Me:
    stdint.h et al are just ungainly bolt-ons, not fully supported
    by the language.

    Keith Thompson:

    No, they're fully supported by the language. They've been in
    the ISO standard since 1999.


    This is an exchange posted on 20-Mar-2025 at 19:10 GMT (header
    shows 'Thu, 20 Mar 2025 12:10:22 -0700')

    Clearly, they're not quite as fully supported as short, int etc;
    they are usually just aliases. But that needn't stop them being
    shown on such a chart.

    Standard aliases are part of the language standard, and therefore
    standard and fully supported parts of the language.


    So, should they have been on that chart?


    Differences between 'unsigned long long int' and 'uint64_t' up to C23:

                                  uint64_t    unsigned long long int
    Works without header            No          Yes
    Literal suffix                  No          Yes (ULL etc)
    Dedicated printf format         No          Yes (%llu)
    Dedicated scanf format          No          Yes (whatever that might be)
    sizeof() might not be 8         No          Maybe

    I don't think that 'No' above is correct.
    Take, for example, ADI SHARC DSPs. Traditionally, sizeof(uint64_t) was
    2.
    I looked into the latest manual and see that now their compiler has
    an option -char-size-8 and with this option sizeof(uint64_t)=8. But
    this option is available only for those members of SHARC family that
    have HW support for byte addressing. Even for those, -char-size-8 is
    not a default.


    Reserved word[1]                No          Yes
    Outside lexical scope[2]        No          Yes
    Incompatible with
      unsigned long int             No          Yes


    I don't understand why you say 'No'. AFAIK, on all existing systems
    except 64-bit Unixen the answer is 'Yes'.


    [1] Maybe _t names are reserved, but this:

    typedef struct {int x,y;} uint64_t;

    compiles cleanly with:

    gcc -std=c23 -Wall -Wextra -pedantic

    This means that they could legally be used for any user-defined types.

    [2] This is possible with uint64_t:

    #include <stdint.h>

    int main() {
    typedef struct {int x,y;} uint64_t;
    }

    You can shadow the names from stdint.h.

    So I'd dispute they are as fully supported and 'special' as built-in
    types.



  • From Muttley@DastardlyHQ.org@21:1/5 to All on Wed Apr 9 09:00:27 2025
    On Tue, 8 Apr 2025 14:34:44 -0400
    James Kuyper <jameskuyper@alumni.caltech.edu> wibbled:
    On 4/8/25 07:39, David Brown wrote:
    On 08/04/2025 12:54, Muttley@DastardlyHQ.org wrote:
    ....
    Pfft. The standard mathematical million-billion-trillion sequence has been used in the UK since at least when I was at school almost 40 years ago.


    The UK officially (as a government standard) used the "long scale"
    (billion = 10 ^ 12) until 1974.
    That was before he started school, so as far as he's concerned, it
    doesn't count.

    It doesn't count if no one speaks like that today, otherwise you might as well claim we all still speak Anglo-Saxon.

  • From Muttley@DastardlyHQ.org@21:1/5 to All on Wed Apr 9 09:01:34 2025
    On Tue, 8 Apr 2025 20:53:45 +0300
    Michael S <already5chosen@yahoo.com> wibbled:
    "myriad" means 10,000, coming directly from the Greek. But the word
    is usually used to mean "a great many" or "more than you can count".
    (It's like the use of "40" in the Bible - I guess the ancient Greeks
    were better at counting than the ancient Canaanites.)
    =20


    In the Bible?
    Or, may be, in imprecise translations of the Bible that confuse the=20
    word =D7=A8=D7=91=D7=91=D7=94 that means 10000 with remotely similar word = >=D7=90=D7=A8=D7=91=D7=A2=D7=99=D7=9D that means
    40 ?




    =D7=A8=D7=91=D7=91=D7=94=20

    Any chance of using utf8 rather than whatever the hell encoding this is.

  • From Ike Naar@21:1/5 to bart on Wed Apr 9 08:53:59 2025
    On 2025-04-08, bart <bc@freeuk.com> wrote:
    Differences between 'unsigned long long int' and 'uint64_t' up to C23:

    uint64_t unsigned long long int

    [...]

    Literal suffix No Yes (ULL etc)

    UINT64_C

    Dedicated printf format No Yes (%llu)

    PRIu64

  • From Michael S@21:1/5 to Muttley@DastardlyHQ.org on Wed Apr 9 12:23:40 2025
    On Wed, 9 Apr 2025 09:01:34 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:

    On Tue, 8 Apr 2025 20:53:45 +0300
    Michael S <already5chosen@yahoo.com> wibbled:
    "myriad" means 10,000, coming directly from the Greek. But the
    word is usually used to mean "a great many" or "more than you can
    count". (It's like the use of "40" in the Bible - I guess the
    ancient Greeks were better at counting than the ancient
    Canaanites.)
    =20


    In the Bible?
    Or, may be, in imprecise translations of the Bible that confuse
    the=20 word =D7=A8=D7=91=D7=91=D7=94 that means 10000 with remotely
    similar word = =D7=90=D7=A8=D7=91=D7=A2=D7=99=D7=9D that means
    40 ?




    =D7=A8=D7=91=D7=91=D7=94=20

    Any chance of using utf8 rather than whatever the hell encoding this
    is.


    It seems, UTF8 is the only option available in the editor of my
    newsreader agent. Could it be that your user agent or your usenet
    provider is at fault?
    I see my message rendered correctly both in my reader and here:

    https://www.novabbs.com/devel/article-flat.php?id=43400&group=comp.lang.c#43400

    Note that in recent weeks novabbs has not been in good shape, so
    you may need to wait several seconds and perhaps refresh
    the page several times.

  • From Michael S@21:1/5 to Lawrence D'Oliveiro on Wed Apr 9 12:49:00 2025
    On Fri, 4 Apr 2025 02:57:10 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Thu, 3 Apr 2025 21:48:40 +0100, bart wrote:

    Commas are overwhelmingly used to separate list elements in
    programming languages.

    Not just separate, but terminate.

    I disagree. I am in favor of optional trailing commas rather than
    mandatory ones.

    All the reasonable languages allow
    trailing commas.

    Are you sure that the C Standard does not allow trailing commas?
    That is, they are obviously legal in initializer lists.
    All compilers that I tried reject a trailing comma in function calls.

    For example

    void bar(int);
    void foo(void) {
    bar(1,);
    }

    MSVC:
    comma.c(3): error C2059: syntax error: ')'

    clang:
    comma.c:3:9: error: expected expression
    3 | bar(1,);
    | ^

    gcc:
    comma.c: In function 'foo':
    comma.c:3:9: error: expected expression before ')' token
    3 | bar(1,);
    | ^
    comma.c:3:3: error: too many arguments to function 'bar'
    3 | bar(1,);
    | ^~~
    comma.c:1:6: note: declared here
    1 | void bar(int);
    | ^~~

    But is it (rejection) really required by the Standard? I don't know.

  • From Muttley@DastardlyHQ.org@21:1/5 to All on Wed Apr 9 10:08:50 2025
    On Wed, 9 Apr 2025 12:23:40 +0300
    Michael S <already5chosen@yahoo.com> wibbled:
    On Wed, 9 Apr 2025 09:01:34 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:

    On Tue, 8 Apr 2025 20:53:45 +0300
    Michael S <already5chosen@yahoo.com> wibbled:
    "myriad" means 10,000, coming directly from the Greek. But the
    word is usually used to mean "a great many" or "more than you can
    count". (It's like the use of "40" in the Bible - I guess the
    ancient Greeks were better at counting than the ancient
    Canaanites.)
    =20


    In the Bible?
    Or, may be, in imprecise translations of the Bible that confuse
    the=20 word =D7=A8=D7=91=D7=91=D7=94 that means 10000 with remotely
    similar word = =D7=90=D7=A8=D7=91=D7=A2=D7=99=D7=9D that means
    40 ?




    =D7=A8=D7=91=D7=91=D7=94=20

    Any chance of using utf8 rather than whatever the hell encoding this
    is.


    It seems, UTF8 is the only option available in the editor of my
    newsreader agent. Could it be that your user agent or your usenet
    provider is at fault?

    That's definitely not UTF-8. Seems something is converting it to
    quoted-printable encoding.

  • From Michael S@21:1/5 to Muttley@DastardlyHQ.org on Wed Apr 9 13:32:15 2025
    On Wed, 9 Apr 2025 10:08:50 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:

    On Wed, 9 Apr 2025 12:23:40 +0300
    Michael S <already5chosen@yahoo.com> wibbled:
    On Wed, 9 Apr 2025 09:01:34 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:

    On Tue, 8 Apr 2025 20:53:45 +0300
    Michael S <already5chosen@yahoo.com> wibbled:
    "myriad" means 10,000, coming directly from the Greek. But the
    word is usually used to mean "a great many" or "more than you
    can count". (It's like the use of "40" in the Bible - I guess
    the ancient Greeks were better at counting than the ancient
    Canaanites.)
    =20


    In the Bible?
    Or, may be, in imprecise translations of the Bible that confuse
    the=20 word =D7=A8=D7=91=D7=91=D7=94 that means 10000 with
    remotely similar word = =D7=90=D7=A8=D7=91=D7=A2=D7=99=D7=9D that
    means 40 ?




    =D7=A8=D7=91=D7=91=D7=94=20

    Any chance of using utf8 rather than whatever the hell encoding
    this is.


    It seems, UTF8 is the only option available in the editor of my
    newsreader agent. Could it be that your user agent or your usenet
    provider is at fault?

    That's definitely not UTF-8. Seems something is converting it to quoted-printable encoding.


    Message headers indicated that it is UTF-8 encoded as quoted printable.
    I don't know (and don't want to know) too much about usenet mechanics,
    but it seems to me that a decent newsreader should decode it back into
    UTF-8.

  • From bart@21:1/5 to James Kuyper on Wed Apr 9 11:21:11 2025
    On 09/04/2025 03:47, James Kuyper wrote:
    bart <bc@freeuk.com> writes:
    On 08/04/2025 22:46, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    Apparently the author of the chart chose to include types that are
    defined by the core language, not by the library.

    So here you're finally admitting they are a different rank.

    The core language and the library are equal in rank, both being
    different parts of any implementation of C.


    So do you agree or disagree with the following table I posted yesterday?

    If yes, do you still claim they are equal in rank? And if so, why are
    the stdint.h types not on that chart?

    -----------------------------
    Differences between 'unsigned long long int' and 'uint64_t' up to C23:

                                  uint64_t    unsigned long long int
    Works without header            No          Yes
    Literal suffix                  No          Yes (ULL etc)
    Dedicated printf format         No          Yes (%llu)
    Dedicated scanf format          No          Yes (whatever that might be)
    sizeof() might not be 8         No          Maybe
    Reserved word[1]                No          Yes
    Outside lexical scope[2]        No          Yes
    Incompatible with
      unsigned long int             No          Yes
    -----------------------------


    (Original, with notes, posted yesterday at 16:47 GMT)



    Actually I can't quite see the purpose of this chart, why it has to be
    so complicated (even with bits missing) or who it is for.

    Every category shown on that chart has rules that apply only to
    types in that category. The chart is for people who have not yet
    memorized the relationships shown, and who need to understand the rules
    that apply to each category. That clearly doesn't apply to you, since understanding the rules would make it more difficult for you to complain about them.

    You don't think it is odd for a language which is supposed to be
    famous for its simplicity and unsophisticated types to have such a
    complicated chart? Is there no way it could be simplified?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Muttley@DastardlyHQ.org on Wed Apr 9 12:35:00 2025
    On 09.04.2025 11:01, Muttley@DastardlyHQ.org wrote:
    On Tue, 8 Apr 2025 20:53:45 +0300
    Michael S <already5chosen@yahoo.com> wibbled:

    =D7=A8=D7=91=D7=91=D7=94=20

    Any chance of using utf8 rather than whatever the hell encoding this is.

    Could it be that your newsreader garbled that? Probably because
    it doesn't expect or know how to decode UTF-8 encoded Hebrew?

    My newsreader can display it and text that contains this line
    "רבבה that means 10000 with remotely similar word ארבעים"
    is also identified as UTF-8.

    If you can't read or display those two words: it contains just
    two Hebrew words with the meanings indicated by the posted context.

    HTH

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Michael S on Wed Apr 9 11:32:30 2025
    On 09/04/2025 09:20, Michael S wrote:
    On Tue, 8 Apr 2025 16:47:07 +0100
    bart <bc@freeuk.com> wrote:

    On 08/04/2025 15:57, David Brown wrote:
    On 08/04/2025 15:32, bart wrote:
    On 05/04/2025 18:56, Philipp Klaus Krause wrote:
    Am 02.04.25 um 11:57 schrieb bart:
    * Where are the fixed-width types from stdint.h?

    Same as for size_t, etc: They don't exist. Those are not separate
    types, just typedefs to some other types. E.g. uint16_t could be
    typedef'ed to unsigned int.


    This is the point I made a few weeks back, but others insisted
    they were part of C:


    Me:
    > stdint.h et al are just ungainly bolt-ons, not fully supported
    > by the language.

    Keith Thompson:

    > No, they're fully supported by the language.  They've been in
    > the ISO standard since 1999.

    This is an exchange posted on 20-Mar-2025 at 19:10 GMT (header
    shows 'Thu, 20 Mar 2025 12:10:22 -0700')

    Clearly, they're not quite as fully supported as short, int etc;
    they are usually just aliases. But that needn't stop them being
    shown on such a chart.

    Standard aliases are part of the language standard, and therefore
    standard and fully supported parts of the language.


    So, should they have been on that chart?

    > and fully supported parts of the language.

    Differences between 'unsigned long long int' and 'uint64_t' up to C23:

                                  uint64_t   unsigned long long int

    Works without header          No         Yes
    Literal suffix                No         Yes (ULL etc)
    Dedicated printf format       No         Yes (%llu)
    Dedicated scanf format        No         Yes (whatever that might be)
    sizeof() might not be 8       No         Maybe

    I don't think that 'No' above is correct.
    Take, for example, ADI SHARC DSPs. Traditionally, sizeof(uint64_t) was
    2.
    I looked into the latest manual and see that their compiler now has
    an option, -char-size-8, and with this option sizeof(uint64_t)=8. But
    this option is available only for those members of the SHARC family
    that have HW support for byte addressing. Even for those, -char-size-8
    is not the default.

    There was a stipulation for an 8-bit 'char' type, but it got lost. Maybe
    I deleted it and intended to have it as a note later on.

    But better here would be whether the width (sizeof*CHAR_BIT) could be
    over 64 bits.


    Reserved word[1]              No         Yes

    Outside lexical scope[2]      No         Yes

    Incompatible with
      unsigned long int           No         Yes


    I don't understand why you say 'No'. AFAIK, on all existing systems
    except 64-bit Unixen the answer is 'Yes'.

    That last one was an oversight which I only spotted after posting (in
    Reddit you can edit your posts!).

    The No should be Maybe.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Wed Apr 9 11:02:10 2025
    On Wed, 9 Apr 2025 12:35:00 +0200
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wibbled:
    On 09.04.2025 11:01, Muttley@DastardlyHQ.org wrote:
    On Tue, 8 Apr 2025 20:53:45 +0300
    Michael S <already5chosen@yahoo.com> wibbled:

    =D7=A8=D7=91=D7=91=D7=94=20

    Any chance of using utf8 rather than whatever the hell encoding this is.

    Could it be that your newsreader garbled that? Probably because
    it doesn't expect or know how to decode UTF-8 encoded Hebrew?

    It wasn't plain UTF-8, it was quoted-printable encoded UTF-8.

    My newsreader can display it and text that contains this line
    "רבבה that means 10000 with remotely similar word ארבעים"
    is also identified as UTF-8.

    That displays fine.

    Why anyone is quoting hebrew is another matter of course.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Muttley@DastardlyHQ.org on Wed Apr 9 13:04:57 2025
    On 09.04.2025 13:00, Muttley@DastardlyHQ.org wrote:
    On Wed, 9 Apr 2025 13:32:15 +0300
    Michael S <already5chosen@yahoo.com> wibbled:
    Message headers indicated that it is UTF-8 encoded as quoted printable.
    I don't know (and don't want to know) too much about usenet mechanics,
    but it seems to me that decent newsreader should decode it back into
    UTF-8.

    There's nothing in your header that says utf8:

    MIME-Version: 1.0
    Content-Type: text/plain; charset=US-ASCII
    Content-Transfer-Encoding: 7bit

    Just for comparison, here's what I see in that header...

    MIME-Version: 1.0
    Content-Type: text/plain; charset=UTF-8
    Content-Transfer-Encoding: quoted-printable


    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Wed Apr 9 11:00:25 2025
    On Wed, 9 Apr 2025 13:32:15 +0300
    Michael S <already5chosen@yahoo.com> wibbled:
    Message headers indicated that it is UTF-8 encoded as quoted printable.
    I don't know (and don't want to know) too much about usenet mechanics,
    but it seems to me that decent newsreader should decode it back into
    UTF-8.

    There's nothing in your header that says utf8:

    MIME-Version: 1.0
    Content-Type: text/plain; charset=US-ASCII
    Content-Transfer-Encoding: 7bit

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Muttley@DastardlyHQ.org on Wed Apr 9 14:33:53 2025
    On Wed, 9 Apr 2025 11:02:10 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:

    On Wed, 9 Apr 2025 12:35:00 +0200
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wibbled:
    On 09.04.2025 11:01, Muttley@DastardlyHQ.org wrote:
    On Tue, 8 Apr 2025 20:53:45 +0300
    Michael S <already5chosen@yahoo.com> wibbled:

    =D7=A8=D7=91=D7=91=D7=94=20

    Any chance of using utf8 rather than whatever the hell encoding
    this is.

    Could it be that your newsreader garbled that? Probably because
    it doesn't expect or know how to decode UTF-8 encoded Hebrew?

    It wasn't plain UTF-8, it was quoted-printable encoded UTF-8.

    My newsreader can display it and text that contains this line
    "רבבה that means 10000 with remotely similar word ארבעים"
    is also identified as UTF-8.

    That displays fine.


    That proves that your newsreader cannot decode UTF-8 encoded as
    quoted-printable back into UTF-8.
    I suppose that such capability is not mandatory according to various
    RFCs, but it makes sense.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael S on Wed Apr 9 15:01:24 2025
    On 09/04/2025 11:49, Michael S wrote:
    On Fri, 4 Apr 2025 02:57:10 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Thu, 3 Apr 2025 21:48:40 +0100, bart wrote:

    Commas are overwhelmingly used to separate list elements in
    programming languages.

    Not just separate, but terminate.

    I disagree. I am in favor of optional trailing commas rather than
    mandatory ones.

    I am certainly in favour of them for things like initialiser lists and
    enum declarations.


    All the reasonable languages allow
    trailing commas.

    Are you sure that the C Standard does not allow trailing commas?
    That is, they are obviously legal in initializer lists.
    All compilers that I tried reject trailing comma in function calls.

    ...
    But is it (rejection) really required by the Standard? I don't know.



    Yes. The syntax (in 6.5.2p1) is :

        postfix-expression:
            ...
            postfix-expression ( argument-expression-list opt )
            ...

        argument-expression-list :
            argument-expression
            argument-expression-list , argument-expression



    I don't think it is unreasonable to suggest that it might be nice to
    allow a trailing comma, at least in variadic function calls, but the
    syntax of C does not allow it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Janis Papanagnou on Wed Apr 9 15:24:53 2025
    On 09/04/2025 13:04, Janis Papanagnou wrote:
    On 09.04.2025 13:00, Muttley@DastardlyHQ.org wrote:
    On Wed, 9 Apr 2025 13:32:15 +0300
    Michael S <already5chosen@yahoo.com> wibbled:
    Message headers indicated that it is UTF-8 encoded as quoted printable.
    I don't know (and don't want to know) too much about usenet mechanics,
    but it seems to me that decent newsreader should decode it back into
    UTF-8.

    There's nothing in your header that says utf8:

    MIME-Version: 1.0
    Content-Type: text/plain; charset=US-ASCII
    Content-Transfer-Encoding: 7bit

    Just for comparison, here's what I see in that header...

    MIME-Version: 1.0
    Content-Type: text/plain; charset=UTF-8
    Content-Transfer-Encoding: quoted-printable



    Michael's Usenet client, like many as far as I can see, adapts the
    charset and Content-Transfer-Encoding according to the lowest that
    supports the characters used in the post.

    Thus when he posts without any non-ASCII characters, it uses US-ASCII
    and 7-bit encoding. That includes replying to Muttley after Muttley's
    client failed to understand the "quoted-printable" encoding and thus interpreted the quoted-printable characters naïvely.

    Michael's posts do use UTF-8 when he uses non-ASCII characters, and
    certainly Thunderbird has no problem with the quoted-printable encoding
    (though it uses 8-bit encoding for its own posts when 7-bit is not enough).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to James Kuyper on Wed Apr 9 15:29:37 2025
    On 08/04/2025 20:14, James Kuyper wrote:
    On 4/8/25 04:39, David Brown wrote:
    On 07/04/2025 20:35, James Kuyper wrote:
    On 4/3/25 18:00, Waldek Hebisch wrote:
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
    ...
    Not always practical. A good example is the type size_t. If a
    function takes an argument of type size_t, then the symbol size_t
    should be defined, no matter which header the function is being
    declared in.

    Why? One can use a type without a name for such type.

    How would you declare a pointer to a function type such that it is
    compatible with such a function's type?

    The C23 "typeof" operator lets you work with the type of a value or
    expression. So you first have an object or value of type "size_t",
    that's all you need. Unfortunately, there are no convenient literal
    suffixes that could be used here.

    I can see how that would work with the return type of a function, but
    how would it apply to an argument of a function?


    Something like :

    memcpy(p, q, (typeof(sizeof(int))) 100);

    I haven't tested that on a C23 compiler, because I really don't think it
    is the kind of thing I'd write in real code!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael S on Wed Apr 9 15:16:41 2025
    On 08/04/2025 19:53, Michael S wrote:
    On Tue, 8 Apr 2025 13:39:14 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 08/04/2025 12:54, Muttley@DastardlyHQ.org wrote:
    On Tue, 8 Apr 2025 10:29:13 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 07/04/2025 21:29, Richard Heathfield wrote:
    Is not it "20 milliards" in British English?

    Yes. The British use

    No we don't.


    1 - one
    10 - ten
    100 - hundred
    1 000 - thousand
    10 000 - myriad
    100 000 - pool
    1 000 000 - million
    1 000 000 000 - milliard

    Is this a late april fool?

    Absolutely no one in Britain says myriad for 10K, pool (wtf?) for
    100K or milliard, apart from maybe a history of science professor, and
    you'd probably be hard pressed to find many people who'd even heard
    of them in that context. The only reason I knew milliard is because
    I can speak (sort of) French and that's the French billion.


    "myriad" means 10,000, coming directly from the Greek. But the word
    is usually used to mean "a great many" or "more than you can count".
    (It's like the use of "40" in the Bible - I guess the ancient Greeks
    were better at counting than the ancient Canaanites.)



    In the Bible?
    Or, may be, in imprecise translations of the Bible that confuse the
    word רבבה that means 10000 with remotely similar word ארבעים that means
    40 ?


    No, I simply mean that the number 40 is used many times in the Bible to
    mean "a large number", rather than for a specific number.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to David Brown on Wed Apr 9 16:56:08 2025
    On Wed, 9 Apr 2025 15:16:41 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 08/04/2025 19:53, Michael S wrote:
    On Tue, 8 Apr 2025 13:39:14 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 08/04/2025 12:54, Muttley@DastardlyHQ.org wrote:
    On Tue, 8 Apr 2025 10:29:13 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 07/04/2025 21:29, Richard Heathfield wrote:
    Is not it "20 milliards" in British English?

    Yes. The British use

    No we don't.


    1 - one
    10 - ten
    100 - hundred
    1 000 - thousand
    10 000 - myriad
    100 000 - pool
    1 000 000 - million
    1 000 000 000 - milliard

    Is this a late april fool?

    Absolutely no one in britain says myriad for 10K , pool (wtf?) for
    100K or milliard apart from maybe history of science professor and
    you'd probably be hard pressed to find many people who'd even
    heard of them in that context. The only reason I knew milliard is
    because I can speak (sort of) french and thats the french billion.


    "myriad" means 10,000, coming directly from the Greek. But the
    word is usually used to mean "a great many" or "more than you can
    count". (It's like the use of "40" in the Bible - I guess the
    ancient Greeks were better at counting than the ancient
    Canaanites.)


    In the Bible?
    Or, may be, in imprecise translations of the Bible that confuse the
    word רבבה that means 10000 with remotely similar word ארבעים that means 40 ?


    No, I simply mean that the number 40 is used many times in the Bible
    to mean "a large number", rather than for a specific number.



    Can you give me a few examples of the use of the number 40 in the
    meaning "a large number"?

    The very first appearance of 40 as an individual number (rather than
    as part of 840) is in the duration of the rain that caused the flood
    (40 days and 40 nights). I think, in this case it was meant literally.
    In drier parts of Mesopotamia even 40 minutes of intense rain can
    cause a dangerous flood. The same in the Negev desert. After 40 hours
    of intense continuous rain a very serious flood in lower places is
    pretty much guaranteed. So, in the opinion of people who live in such
    areas, 40 days would be more than sufficient for the Flood of Noah.
    The author of the text probably thought that he was overestimating
    the duration of the rain.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael S on Wed Apr 9 18:19:44 2025
    On 09/04/2025 15:56, Michael S wrote:


    Can you give me few example of use of number 40 in the meaning "a
    large number"?


    I really don't want to go into a religious discussion here. It is the
    general opinion of academic Biblical scholars that the use of "40" in
    the Bible is not trying to give an exact value - merely meaning "lots".
    It presumably does not mean "vast numbers", nor "a few" - it is more
    akin to "dozens" in colloquial English. I did not intentionally imply
    it is used to mean "tens of thousands", if that is what you thought I
    was saying.

    If you want to discuss it more, feel free to email me - I am interested
    in religious history, but it would be even less suitable for comp.lang.c
    than etymology!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Harnden@21:1/5 to David Brown on Wed Apr 9 18:32:46 2025
    On 09/04/2025 17:19, David Brown wrote:
    On 09/04/2025 15:56, Michael S wrote:


    Can you give me few example of use of number 40 in the meaning  "a
    large number"?


    I really don't want to go into a religious discussion here.  It is the general opinion of academic Biblical scholars that the use of "40" in
    the Bible is not trying to give an exact value - merely meaning "lots".
    It presumably does not mean "vast numbers", nor "a few" - it is more
    akin to "dozens" in colloquial English.  I did not intentionally imply
    it is used to mean "tens of thousands", if that is what you thought I
    was saying.

    If you want to discuss it more, feel free to email me - I am interested
    in religious history, but it would be even less suitable for comp.lang.c
    than etymology!



    Were the authors of the bible French?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to BGB on Wed Apr 9 20:11:28 2025
    On 09/04/2025 18:26, BGB wrote:
    On 4/9/2025 8:01 AM, David Brown wrote:
    On 09/04/2025 11:49, Michael S wrote:
    On Fri, 4 Apr 2025 02:57:10 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Thu, 3 Apr 2025 21:48:40 +0100, bart wrote:

    Commas are overwhelmingly used to separate list elements in
    programming languages.

    Not just separate, but terminate.

    I disagree. I am in favor of optional trailing commas rather than
    mandatory ones.

    I am certainly in favour of them for things like initialiser lists and
    enum declarations.


    All the reasonable languages allow
    trailing commas.

    Are you sure that the C Standard does not allow trailing commas?
    That is, they are obviously legal in initializer lists.
    All compilers that I tried reject trailing comma in function calls.

    ...
    But is it (rejection) really required by the Standard? I don't know.



    Yes.  The syntax (in 6.5.2p1) is :

    postfix-expression:
         ...
         postfix-expression ( argument-expression-list opt )
         ...

    argument-expression-list :
         argument-expression
         argument-expression-list , argument-expression



    I don't think it is unreasonable to suggest that it might be nice to
    allow a trailing comma, at least in variadic function calls, but the
    syntax of C does not allow it.


    Yeah, pretty much.


    It might have also been interesting if C allowed optional named
    arguments:

    int foo(int x=3, int y=4)
    {
      return x+y;
    }

    foo() => 7
    foo(.y=2) => 5

    Likely would be following any fixed arguments (if present), and likely
    (for sake of implementation sanity) named arguments and varargs being mutually exclusive (alternative being that named arguments precede
    varargs if both are used).

    Well, at least ".y=val" as "y: val" likely wouldn't go over well even if
    it is what several other languages with this feature used (well or,
    "y=val", which is used in some others).

    In the most likely case, the named argument form would be transformed
    into the equivalent fixed argument form at compile time.
      So: "foo(.y=2)" would be functionally equivalent to "foo(3,2)".

    There are all sorts of problems in adding this to C. For example, this
    is legal:

    void F(int a, float b, char* c);
    void F(int c, float a, char* b);
    void F(int b, float c, char* a) {}

    The sets of parameter names are all different (and that's in the same
    file!); which is the official set?

    Another is to do with defining default values (essential if named
    arguments are to be fully used). First, similar thing to the above:

    void F(int a = x + y);
    void F(int a = DEFAULT);

    Which expression is to be used? Another is:

    void F(int a = x + y);
    ...
    void F(int a = x + y);

    The names 'x' and 'y' would normally refer to whatever x/y names are in
    scope at the declaration site. But if multiple declarations exist at
    different locations, different x/y could be visible; which will be used?

    Usually it is not meaningful to have x/y be the names in scope at the
    call-site; the author of the function will have no idea which x/y
    names will be in scope there, if there are any at all.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Michael S on Wed Apr 9 13:14:55 2025
    Michael S <already5chosen@yahoo.com> writes:

    On Fri, 4 Apr 2025 02:57:10 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Thu, 3 Apr 2025 21:48:40 +0100, bart wrote:

    Commas are overwhelmingly used to separate list elements in
    programming languages.

    Not just separate, but terminate.

    I disagree. I am in favor of optional trailing commas rather than
    mandatory ones.

    All the reasonable languages allow
    trailing commas.

    Are you sure that the C Standard does not allow trailing commas?
    That is, they are obviously legal in initializer lists.
    All compilers that I tried reject trailing comma in function calls.

    For example

    void bar(int);
    void foo(void) {
    bar(1,);
    }

    MSVC:
    comma.c(3): error C2059: syntax error: ')'

    clang:
    comma.c:3:9: error: expected expression
    3 | bar(1,);
    | ^

    gcc:
    comma.c: In function 'foo':
    comma.c:3:9: error: expected expression before ')' token
    3 | bar(1,);
    | ^
    comma.c:3:3: error: too many arguments to function 'bar'
    3 | bar(1,);
    | ^~~
    comma.c:1:6: note: declared here
    1 | void bar(int);
    | ^~~

    But is it (rejection) really required by the Standard? I don't know.

    It is required in the sense that it is a syntax error,
    and syntax errors require a diagnostic.

    Trailing commas in argument lists and/or parameter lists
    could be accepted as an extension, even without giving a
    diagnostic as I read the C standard, but implementations
    are certainly within their rights to reject them.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Michael S on Wed Apr 9 20:38:03 2025
    On 2025-04-09, Michael S <already5chosen@yahoo.com> wrote:
    void foo(void) {
    bar(1,);
    }

    MSVC:
    comma.c(3): error C2059: syntax error: ')'

    In a language without preprocessing, it might be acceptable.

    If bar happens to be #define bar(x, y) then you
    are passing an empty argument for y.

    Or, suppose you have bar(ABC, DEF), where ABC and
    DEF are macros.

    DEF happens to expand to nothing, so we get bar(whatever,),
    and since bar takes one argument, it works silently.

    (Consider C++ where we can have bar(arg) and bar(arg, arg)
    overloads.)

    Just little bits of chaos that point to "bad idea".

    But is it (rejection) really required by the Standard? I don't know.

    Yes?

    6.5.3 Postfix operators

    6.5.3.1 General

    Syntax

    postfix-expression:
    primary-expression
    postfix-expression [ expression ]
    postfix-expression ( argument-expression-list_opt )
    postfix-expression . identifier
    postfix-expression -> identifier
    postfix-expression ++
    postfix-expression --
    compound-literal

    argument-expression-list:
    assignment-expression
    argument-expression-list , assignment-expression

    ^^ ^^^^^^^^^^^^^^^^^^^^^

    You see that last bit? The grammar for argument expression lists
    is essentially an expression grammar in which the comma is the
    one and only left-associative infix operator.

    After the comma, the assignment-expression operand is not
    optional, the same way that in an infix expression E1 + E2,
    the E2 is not optional.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Michael S on Wed Apr 9 13:52:15 2025
    Michael S <already5chosen@yahoo.com> writes:

    On Tue, 08 Apr 2025 23:12:13 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:

    James Kuyper <jameskuyper@alumni.caltech.edu> writes:

    bart <bc@freeuk.com> writes:

    On 08/04/2025 22:46, Keith Thompson wrote:

    bart <bc@freeuk.com> writes:

    Apparently the author of the chart chose to include types
    that are

    defined by the core language, not by the library.

    So here you're finally admitting they are a different rank.

    The core language and the library are equal in rank, both being
    different parts of any implementation of C.

    This statement isn't exactly right. Some parts of the standard
    library are available only in hosted implementations, and not in
    freestanding implementations.

    True. Also, freestanding implementations must support <stddef.h>
    and <stdint.h>, among several other headers.

    May be in some formal sense headers and library routines that are
    mandatory for freestanding implementations belong to the same rank as
    core language. But in practice there exists an obvious difference. In
    the first case, name clashes are avoidable (sometimes with toothless
    threat that they can happen in the future) and in the second case they
    are unavoidable.

    It's hard for me to make sense of this comment. The only
    library routines that are required in standard C are those
    documented as part of a section for one of the standard headers.
    For freestanding implementations in particular, there are only
    two names (va_copy and va_end) that might correspond to library
    functions, and if they do then the names are reserved for that
    purpose. Do you mean to suggest that user code defining either
    va_copy or va_end as a symbol with external linkage is
    unavoidable? Any user code that does so could be summarily
    rejected by the implementation. It's hard to imagine anyone
    writing user code wanting to define either of those names as a
    symbol with external linkage.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Michael S on Wed Apr 9 13:17:17 2025
    Michael S <already5chosen@yahoo.com> writes:

    On Wed, 9 Apr 2025 15:16:41 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 08/04/2025 19:53, Michael S wrote:

    On Tue, 8 Apr 2025 13:39:14 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 08/04/2025 12:54, Muttley@DastardlyHQ.org wrote:

    On Tue, 8 Apr 2025 10:29:13 +0200
    David Brown <david.brown@hesbynett.no> wibbled:

    On 07/04/2025 21:29, Richard Heathfield wrote:

    Is not it "20 milliards" in British English?

    Yes. The British use

    No we don't.

    1 - one
    10 - ten
    100 - hundred
    1 000 - thousand
    10 000 - myriad
    100 000 - pool
    1 000 000 - million
    1 000 000 000 - milliard

    Is this a late april fool?

    Absolutely no one in britain says myriad for 10K , pool (wtf?) for
    100K or milliard apart from maybe history of science professor and
    you'd probably be hard pressed to find many people who'd even
    heard of them in that context. The only reason I knew milliard is
    because I can speak (sort of) french and thats the french billion.

    "myriad" means 10,000, coming directly from the Greek. But the
    word is usually used to mean "a great many" or "more than you can
    count". (It's like the use of "40" in the Bible - I guess the
    ancient Greeks were better at counting than the ancient
    Canaanites.)

    In the Bible?
    Or, may be, in imprecise translations of the Bible that confuse the
    word רבבה that means 10000 with remotely similar word ארבעים that
    means 40 ?

    No, I simply mean that the number 40 is used many times in the Bible
    to mean "a large number", rather than for a specific number.

    Can you give me a few examples of the use of the number 40 in the
    meaning "a large number"?

    The very first appearance of 40 as an individual number (rather than
    as part of 840) is in the duration of the rain that caused the flood
    (40 days and 40 nights). I think, in this case it was meant literally.
    In drier parts of Mesopotamia even 40 minutes of intense rain can
    cause a dangerous flood. The same in the Negev desert. After 40 hours
    of intense continuous rain a very serious flood in lower places is
    pretty much guaranteed. So, in the opinion of people who live in such
    areas, 40 days would be more than sufficient for the Flood of Noah.
    The author of the text probably thought that he was overestimating
    the duration of the rain.

    It is my fervent hope that everyone involved in this fascinating
    discussion can find a more appropriate place to carry on the
    conversation than comp.lang.c.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to James Kuyper on Wed Apr 9 21:32:32 2025
    On 2025-04-09, James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
    bart <bc@freeuk.com> writes:
    On 08/04/2025 22:46, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    Apparently the author of the chart chose to include types that are
    defined by the core language, not by the library.

    So here you're finally admitting they are a different rank.

    The core language and the library are equal in rank, both being
    different parts of any implementation of C.

    A feature that is visible/available in every translation unit, such
    as the keyword "unsigned", is integrated into the language to a
    greater extent than something that is only defined when a certain
    header is included, like "size_t".

    If by "rank" we mean "depth of integration", then that is right.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Thu Apr 10 09:53:40 2025
    On 09/04/2025 21:11, bart wrote:
    On 09/04/2025 18:26, BGB wrote:
    On 4/9/2025 8:01 AM, David Brown wrote:
    On 09/04/2025 11:49, Michael S wrote:
    On Fri, 4 Apr 2025 02:57:10 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Thu, 3 Apr 2025 21:48:40 +0100, bart wrote:

    Commas are overwhelmingly used to separate list elements in
    programming languages.

    Not just separate, but terminate.

    I disagree. I am in favor of optional trailing commas rather than
    mandatory ones.

    I am certainly in favour of them for things like initialiser lists
    and enum declarations.


    All the reasonable languages allow
    trailing commas.

    Are you sure that the C Standard does not allow trailing commas?
    That is, they are obviously legal in initializer lists.
    All compilers that I tried reject trailing comma in function calls.

    ...
    But is it (rejection) really required by the Standard? I don't know.



    Yes.  The syntax (in 6.5.2p1) is :

    postfix-expression:
         ...
         postfix-expression ( argument-expression-list opt )
         ...

    argument-expression-list :
         argument-expression
         argument-expression-list , argument-expression



    I don't think it is unreasonable to suggest that it might be nice to
    allow a trailing comma, at least in variadic function calls, but the
    syntax of C does not allow it.


    Yeah, pretty much.


    It might have also been interesting if C allowed optional named
    arguments:
    int foo(int x=3, int y=4)
    {
       return x+y;
    }

    foo() => 7
    foo(.y=2) => 5

    Likely would be following any fixed arguments (if present), and likely
    (for sake of implementation sanity) named arguments and varargs being
    mutually exclusive (alternative being that named arguments precede
    varargs if both are used).

    Well, at least ".y=val" as "y: val" likely wouldn't go over well even
    if it is what several other languages with this feature used (well or,
    "y=val", which is used in some others).

    In the most likely case, the named argument form would be transformed
    into the equivalent fixed argument form at compile time.
       So: "foo(.y=2)" would be functionally equivalent to "foo(3,2)".

    There are all sorts of problems in adding this to C. For example, this
    is legal:

      void F(int a, float b, char* c);
      void F(int c, float a, char* b);
      void F(int b, float c, char* a) {}

    The sets of parameter names are all different (and that's in the same
    file!); which is the official set?

    C has had flexibility here for all sorts of reasons. But if named
    parameters were to be added to the language without significant extra
    syntax, then this particular issue could be solved in at least two very
    simple ways. Either say that named parameter syntax can only be used if
    all of the function's declarations in the translation unit have
    consistent naming, or say that the last declaration in scope is the one
    used.  (My guess would be the latter, with compilers offering
    warnings about the former.)

    Of course that lets someone declare "void f(int a, int b);" in one file
    and "void f(int b, int a);" in a different one - but that does not
    noticeably change the kind of mixups already available to the
    undisciplined programmer, and it is completely eliminated by the
    standard practice of using shared headers for declarations.



    Another is to do with defining default values (essential if named
    arguments are to be fully used). First, similar thing to the above:

      void F(int a = x + y);
      void F(int a = DEFAULT);


    Default arguments are most certainly not essential to make named
    parameters useful. They /can/ be a nice thing to have, but they are
    merely icing on the cake. Still, there is an obvious and C-friendly way
    to handle this too - the default values must be constant expressions.

    A much clearer issue with a named parameter syntax like this is that
    something like "foo(b = 1, a = 2);" is already valid in C and means
    something significantly different. You'd need a different syntax.

    Fundamental matters such as this are best decided early in the design of
    a language, rather than bolted on afterwards. Named parameters is
    something I like in languages, but it's not easy to add to established languages.

    Still, the C++ crowd regularly try to figure out how named parameters
    could be added to C++. I think they will figure it out eventually. C++
    adds a number of extra complications here that C does not have, but once
    they have a decent solution, C could probably adopt it. Let C++ pave
    the way on new concepts, and C can copy the bits that suit once C++ has
    done the field testing - that's part of the C standard committee
    philosophy, and a good way to handle these things.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to David Brown on Thu Apr 10 11:37:30 2025
    On Thu, 10 Apr 2025 09:53:40 +0200
    David Brown <david.brown@hesbynett.no> wrote:


    Still, the C++ crowd regularly try to figure out how named parameters
    could be added to C++. I think they will figure it out eventually.
    C++ adds a number of extra complications here that C does not have,
    but once they have a decent solution, C could probably adopt it. Let
    C++ pave the way on new concepts, and C can copy the bits that suit
    once C++ has done the field testing - that's part of the C standard
    committee philosophy, and a good way to handle these things.


    I think that it's not mere "extra complications". Adding named
    parameters to C++ is massively more complicated than adding them to C.
    So, IMHO, if C waits for C++ then it will wait forever.
    Not that I care. Named parameters are pretty low on my wish list.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Tim Rentsch on Thu Apr 10 11:42:52 2025
    On Wed, 09 Apr 2025 13:14:55 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:

    On Fri, 4 Apr 2025 02:57:10 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Thu, 3 Apr 2025 21:48:40 +0100, bart wrote:

    Commas are overwhelmingly used to separate list elements in
    programming languages.

    Not just separate, but terminate.

    I disagree. I am in favor of optional trailing commas rather than mandatory ones.

    All the reasonable languages allow
    trailing commas.

    Are you sure that the C Standard does not allow trailing commas?
    That is, they are obviously legal in initializer lists.
    All compilers that I tried reject trailing comma in function calls.

    For example

    void bar(int);
    void foo(void) {
    bar(1,);
    }

    MSVC:
    comma.c(3): error C2059: syntax error: ')'

    clang:
    comma.c:3:9: error: expected expression
    3 | bar(1,);
    | ^

    gcc:
    comma.c: In function 'foo':
    comma.c:3:9: error: expected expression before ')' token
    3 | bar(1,);
    | ^
    comma.c:3:3: error: too many arguments to function 'bar'
    3 | bar(1,);
    | ^~~
    comma.c:1:6: note: declared here
    1 | void bar(int);
    | ^~~

    But is it (rejection) really required by the Standard? I don't
    know.

    It is required in the sense that it is a syntax error,
    and syntax errors require a diagnostic.

    Trailing commas in argument lists and/or parameter lists
    could be accepted as an extension, even without giving a
    diagnostic as I read the C standard, but implementations
    are certainly within their rights to reject them.

    I have no doubts that implementations have full rights to reject them.
    The question was about the possibility of accepting them, and especially
    of accepting them without a diagnostic.
    So, it seems, there is no consensus about it among the few posters who
    read the relevant part of the standard.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Tim Rentsch on Thu Apr 10 11:50:04 2025
    On Wed, 09 Apr 2025 13:52:15 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:

    On Tue, 08 Apr 2025 23:12:13 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:

    James Kuyper <jameskuyper@alumni.caltech.edu> writes:

    bart <bc@freeuk.com> writes:

    On 08/04/2025 22:46, Keith Thompson wrote:

    bart <bc@freeuk.com> writes:

    Apparently the author of the chart chose to include types
    that are

    defined by the core language, not by the library.

    So here you're finally admitting they are a different rank.

    The core language and the library are equal in rank, both being
    different parts of any implementation of C.

    This statement isn't exactly right. Some parts of the standard
    library are available only in hosted implementations, and not in
    freestanding implementations.

    True. Also, freestanding implementations must support <stddef.h>
    and <stdint.h>, among several other headers.

    May be in some formal sense headers and library routines that are
    mandatory for freestanding implementations belong to the same rank
    as core language. But in practice there exists an obvious
    difference. In the first case, name clashes are avoidable
    (sometimes with toothless threat that they can happen in the
    future) and in the second case they are unavoidable.

    It's hard for me to make sense of this comment. The only
    library routines that are required in standard C are those
    documented as part of a section for one of the standard headers.
    For freestanding implementations in particular, there are only
    two names (va_copy and va_end) that might correspond to library
    functions, and if they do then the names are reserved for that
    purpose. Do you mean to suggest that user code defining either
    va_copy or va_end as a symbol with external linkage is
    unavoidable? Any user code that does so could be summarily
    rejected by the implementation. It's hard to imagine anyone
    writing user code wanting to define either of those names as a
    symbol with external linkage.

    I merely wanted to say that it is pretty easy to write legal, if not
    necessarily sensible, code that uses a variable named 'memcpy' and a
    function named 'size_t'. OTOH, you can't name your variable 'break' or
    'continue'. Or even 'bool', if you happen to use a C23 compiler.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Thu Apr 10 10:07:38 2025
    On Thu, 10 Apr 2025 09:53:40 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 09/04/2025 21:11, bart wrote:
    The sets of parameter names are all different (and that's in the same
    file!); which is the official set?

    C has had flexibility here for all sorts of reasons. But if named
    parameters were to be added to the language without significant extra
    syntax, then this particular issue could be solved in at least two very
    simple ways. Either say that named parameter syntax can only be used if

    Anyone who really wants named parameters at function calling can already do this in C99:

    struct st
    {
    int a;
    int b;
    int c;
    };


    void func(struct st s)
    {
    }


    int main()
    {
    func((struct st){ .a = 1, .b = 2, .c = 3 });
    return 0;
    }

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Muttley@DastardlyHQ.org on Thu Apr 10 12:08:58 2025
    On 10/04/2025 11:07, Muttley@DastardlyHQ.org wrote:
    On Thu, 10 Apr 2025 09:53:40 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 09/04/2025 21:11, bart wrote:
    The sets of parameter names are all different (and that's in the same
    file!); which is the official set?

    C has had flexibility here for all sorts of reasons. But if named
    parameters were to be added to the language without significant extra
    syntax, then this particular issue could be solved in at least two very
    simple ways. Either say that named parameter syntax can only be used if

    Anyone who really wants named parameters at function calling can already do this in C99:

    struct st
    {
    int a;
    int b;
    int c;
    };


    void func(struct st s)
    {
    }


    int main()
    {
    func((struct st){ .a = 1, .b = 2, .c = 3 });
    return 0;
    }


    Ha, ha, ha!

    Those aren't named parameters. It would be a dreadful solution anyway:

    * Each function now needs an accompanying struct

    * The function header does not list the parameter names or types

    * Inside the function, each parameter name needs to be qualified (s.a etc)

    * All fields can be omitted (presumably to default to all-zeros) which
    may not be what is desired

    * When structs are passed by-value as is the case here, it can mean
    copying the struct, an extra overhead

    * It can also mean constructing an argument list in memory, rather than
    passing arguments efficiently in registers

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Thu Apr 10 12:42:05 2025
    On 10/04/2025 08:53, David Brown wrote:
    On 09/04/2025 21:11, bart wrote:
    On 09/04/2025 18:26, BGB wrote:

    It might have also been interesting if C allowed optional named
    arguments:
    int foo(int x=3, int y=4)
    {
       return x+y;
    }

    foo() => 7
    foo(.y=2) => 5

    Likely would be following any fixed arguments (if present), and
    likely (for sake of implementation sanity) named arguments and
    varargs being mutually exclusive (alternative being that named
    arguments precede varargs if both are used).

    Well, at least ".y=val" as "y: val" likely wouldn't go over well even
    if it is what several other languages with this feature used (well
    or, "y=val", which is used in some others).

    In the most likely case, the named argument form would be transformed
    into the equivalent fixed argument form at compile time.
       So: "foo(.y=2)" would be functionally equivalent to "foo(3,2)".

    There are all sorts of problems in adding this to C. For example, this
    is legal:

       void F(int a, float b, char* c);
       void F(int c, float a, char* b);
       void F(int b, float c, char* a) {}

    The sets of parameter names are all different (and that's in the same
    file!); which is the official set?

    C has had flexibility here for all sorts of reasons.  But if named parameters were to be added to the language without significant extra
    syntax, then this particular issue could be solved in at least two very simple ways.  Either say that named parameter syntax can only be used if
    all of the function's declarations in the translation unit have
    consistent naming, or say that the last declaration in scope is the one used.  (My guess would be the latter, with compilers offering
    warnings about the former.)

    Of course that lets someone declare "void f(int a, int b);" in one file
    and "void f(int b, int a);" in a different one - but that does not
    noticeably change the kind of mixups already available to the
    undisciplined programmer, and it is completely eliminated by the
    standard practice of using shared headers for declarations.



    Another is to do with defining default values (essential if named
    arguments are to be fully used). First, similar thing to the above:

       void F(int a = x + y);
       void F(int a = DEFAULT);


    Default arguments are most certainly not essential to make named
    parameters useful.

    Then the advantage is minimal. They are useful when there are lots of parameters, where only a few are essential, and the rest are various
    options.

    Here's a simple example where the function officially takes 4
    parameters, but I provide only what is relevant:

    MessageBoxA(message:"Hello")

    It produces this:

    https://github.com/sal55/langs/blob/master/hello.png

    One of the defaults is for the pop-up caption, as can be seen. Otherwise
    you can add 'caption:"Title"' for example.


    They /can/ be a nice thing to have, but they are
    merely icing on the cake.  Still, there is an obvious and C-friendly way
    to handle this too - the default values must be constant expressions.

    Well, the most common default value is 0. But do you mean actual
    literals, or can you use macro or enum names?

    Because it is those name resolutions that are the problem, not whether
    the result is a compile-time constant expression.


    A much clearer issue with a named parameter syntax like this is that something like "foo(b = 1, a = 2);" is already valid in C and means
    something significantly different.  You'd need a different syntax.

    Not really; the above is inside a formal parameter list, where '=' has
    no special meaning.

    It is in an actual function call where using '=' is troublesome.

    (And where I use ':', but I have also used '=': 'a = 10' is a named
    argument and I'm passing 10, but '(a = 10)' is a positional argument
    where I'm passing 1 or 0.)

    Anyway in C you'd probably use '.a = 10' to align it with struct
    initialisers, though that's a bit cluttered.

    Fundamental matters such as this are best decided early in the design of
    a language, rather than bolted on afterwards.

    The funny thing is that my MessageBox example is a C function exported
    by WinAPI, and I was able to superimpose keyword arguments on top. Since
    I have to write my own bindings to such functions anyway.

    The MS docs for WinAPI do tend to show function declarations with fully
    named parameters, which also seem to be retained in gcc's windows.h (but
    not in my cut-down one). But it would need defaults added to make it useful:

    HWND CreateWindowExA(
    [in] DWORD dwExStyle,
    [in, optional] LPCSTR lpClassName,
    [in, optional] LPCSTR lpWindowName,
    [in] DWORD dwStyle,
    [in] int X,
    [in] int Y,
    [in] int nWidth,
    [in] int nHeight,
    [in, optional] HWND hWndParent,
    [in, optional] HMENU hMenu,
    [in, optional] HINSTANCE hInstance,
    [in, optional] LPVOID lpParam
    );

    Here, some optional args are indicated, but default values could be
    applied to several more, enough that a minimal invocation might need no arguments at all:

    hwnd = CreateWindowExA();

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to I never on Thu Apr 10 12:48:40 2025
    On Thu, 10 Apr 2025 12:08:58 +0100
    bart <bc@freeuk.com> wibbled:
    On 10/04/2025 11:07, Muttley@DastardlyHQ.org wrote:
    On Thu, 10 Apr 2025 09:53:40 +0200
    David Brown <david.brown@hesbynett.no> wibbled:
    On 09/04/2025 21:11, bart wrote:
    The sets of parameter names are all different (and that's in the same
    file!); which is the official set?

    C has had flexibility here for all sorts of reasons. But if named
    parameters were to be added to the language without significant extra
    syntax, then this particular issue could be solved in at least two very
    simple ways. Either say that named parameter syntax can only be used if

    Anyone who really wants named parameters at function calling can already do this in C99:

    struct st
    {
    int a;
    int b;
    int c;
    };


    void func(struct st s)
    {
    }


    int main()
    {
    func((struct st){ .a = 1, .b = 2, .c = 3 });
    return 0;
    }


    Ha, ha, ha!

    Those aren't named parameters. It would be a dreadful solution anyway:

    I never said they were, but they're the best you can do in C right now.

    * Each function now needs an accompanying struct

    Thanks sherlock.

    It's obviously not the perfect solution, but at the point of call it lets
    you know which struct vars are being set to what.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Thu Apr 10 15:06:40 2025
    On 10/04/2025 13:42, bart wrote:
    On 10/04/2025 08:53, David Brown wrote:
    On 09/04/2025 21:11, bart wrote:
    On 09/04/2025 18:26, BGB wrote:

    It might have also been interesting if C allowed optional named
    arguments:
    int foo(int x=3, int y=4)
    {
       return x+y;
    }

    foo() => 7
    foo(.y=2) => 5

    Likely would be following any fixed arguments (if present), and
    likely (for sake of implementation sanity) named arguments and
    varargs being mutually exclusive (alternative being that named
    arguments precede varargs if both are used).

    Well, at least ".y=val" as "y: val" likely wouldn't go over well
    even if it is what several other languages with this feature used
    (well or, "y=val", which is used in some others).

    In the most likely case, the named argument form would be
    transformed into the equivalent fixed argument form at compile time.
       So: "foo(.y=2)" would be functionally equivalent to "foo(3,2)".

    There are all sorts of problems in adding this to C. For example,
    this is legal:

       void F(int a, float b, char* c);
       void F(int c, float a, char* b);
       void F(int b, float c, char* a) {}

    The sets of parameter names are all different (and that's in the same
    file!); which is the official set?

    C has had flexibility here for all sorts of reasons.  But if named
    parameters were to be added to the language without significant extra
    syntax, then this particular issue could be solved in at least two
    very simple ways.  Either say that named parameter syntax can only be
    used if all of the function's declarations in the translation unit
    have consistent naming, or say that the last declaration in scope is
    the one used.  (My guess would be the latter, with compilers
    offering warnings about the former.)

    Of course that lets someone declare "void f(int a, int b);" in one
    file and "void f(int b, int a);" in a different one - but that does
    not noticeably change the kind of mixups already available to the
    undisciplined programmer, and it is completely eliminated by the
    standard practice of using shared headers for declarations.



    Another is to do with defining default values (essential if named
    arguments are to be fully used). First, similar thing to the above:

       void F(int a = x + y);
       void F(int a = DEFAULT);


    Default arguments are most certainly not essential to make named
    parameters useful.

    Then the advantage is minimal. They are useful when there are lots of parameters, where only a few are essential, and the rest are various
    options.

    That is one use-case, yes. Generally, functions with large numbers of parameters are frowned upon anyway - there are typically better ways to
    handle such things.

    Where named parameters shine is when you have a few parameters that have
    the same type. "void control_leds(bool red, bool green, bool blue);".
    There are a variety of ways you can make a function like this in a
    clearer or safer way in C, but it requires a fair amount of extra
    boilerplate code (to define enum types for simple clarity, or struct
    types for greater safety). Named parameters would make such functions
    safer and clearer in a simple way.


    They /can/ be a nice thing to have, but they are merely icing on the
    cake.  Still, there is an obvious and C-friendly way to handle this
    too - the default values must be constant expressions.

    Well, the most common default value is 0. But do you mean actual
    literals, or can you use macro or enum names?

    I mean actual constant expressions, as C defines them. That includes
    constants (now called "literals" in C23), constant expressions (such as
    "2 * 10"), enumeration constants, and constexpr constants (in C23).
    Basically, things that you could use for initialisation of a variable at
    file scope.


    Because it is those name resolutions that are the problem, not whether
    the result is a compile-time constant expression.


    I don't see that at all.


    A much clearer issue with a named parameter syntax like this is that
    something like "foo(b = 1, a = 2);" is already valid in C and means
    something significantly different.  You'd need a different syntax.

    Not really; the above is inside a formal parameter list, where '=' has
    no special meaning.

    That is exactly the point - "=" has no special meaning inside a function
    call. It is an assignment operator:

    int a = 10;
    int b = 20;
    int c = foo(b = 1, a = 2);

    means the same (ignoring possible sequencing and ordering issues) as :

    int a = 10;
    int b = 20;
    b = 1;
    a = 2;
    int c = foo(b, a);



    It is in an actual function call where using '=' is troublesome.

    Yes - that's what we have been talking about. Named parameters are used
    at the call site, not the declaration site.


    Anyway in C you'd probably use '.a = 10' to align it with struct initialisers, though that's a bit cluttered.

    That does seem the most likely choice.


    Fundamental matters such as this are best decided early in the design
    of a language, rather than bolted on afterwards.

    The funny thing is that my MessageBox example is a C function exported
    by WinAPI, and I was able to superimpose keyword arguments on top. Since
    I have to write my own bindings to such functions anyway.

    The MS docs for WinAPI do tend to show function declarations with fully
    named parameters, which also seem to be retained in gcc's windows.h (but
    not in my cut-down one).

    gcc does not have a "windows.h". You are conflating gcc with some
    windows packaging of gcc with additional tools, libraries and headers.

    Personally, I think it is always good to give clear parameter names in
    function declarations. There are reasons for not doing so (such as the possibility of some user defining a macro "lpClassName" before including
    the header file), but generally it keeps things clearer. It is also particularly useful for library headers if they are used to generate
    interfaces for other languages.

    But it would need defaults added to make it
    useful:

    Strangely, many people have been able to write code using the MS API
    without named parameters or defaults.


    HWND CreateWindowExA(
      [in]           DWORD     dwExStyle,
      [in, optional] LPCSTR    lpClassName,
      [in, optional] LPCSTR    lpWindowName,
      [in]           DWORD     dwStyle,
      [in]           int       X,
      [in]           int       Y,
      [in]           int       nWidth,
      [in]           int       nHeight,
      [in, optional] HWND      hWndParent,
      [in, optional] HMENU     hMenu,
      [in, optional] HINSTANCE hInstance,
      [in, optional] LPVOID    lpParam
    );


    Let's not pretend that MS's API's are good examples of clear design!
    (And please don't bother picking other non-MS examples that are the same
    or worse.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Thu Apr 10 15:29:45 2025
    On 10/04/2025 14:06, David Brown wrote:
    On 10/04/2025 13:42, bart wrote:
    On 10/04/2025 08:53, David Brown wrote:
    On 09/04/2025 21:11, bart wrote:
    On 09/04/2025 18:26, BGB wrote:

    It might have also been interesting if C allowed optional named
    arguments:
    int foo(int x=3, int y=4)
    {
       return x+y;
    }

    foo() => 7
    foo(.y=2) => 5

    Likely would be following any fixed arguments (if present), and
    likely (for sake of implementation sanity) named arguments and
    varargs being mutually exclusive (alternative being that named
    arguments precede varargs if both are used).

    Well, at least ".y=val" as "y: val" likely wouldn't go over well
    even if it is what several other languages with this feature used
    (well or, "y=val", which is used in some others).

    In the most likely case, the named argument form would be
    transformed into the equivalent fixed argument form at compile time.
       So: "foo(.y=2)" would be functionally equivalent to "foo(3,2)".

    There are all sorts of problems in adding this to C. For example,
    this is legal:

       void F(int a, float b, char* c);
       void F(int c, float a, char* b);
       void F(int b, float c, char* a) {}

    The sets of parameter names are all different (and that's in the
    same file!); which is the official set?

    C has had flexibility here for all sorts of reasons.  But if named
    parameters were to be added to the language without significant extra
    syntax, then this particular issue could be solved in at least two
    very simple ways.  Either say that named parameter syntax can only be
    used if all of the function's declarations in the translation unit
    have consistent naming, or say that the last declaration in scope is
    the one used.  (My guess would be the latter, with compilers
    offering warnings about the former.)

    Of course that lets someone declare "void f(int a, int b);" in one
    file and "void f(int b, int a);" in a different one - but that does
    not noticeably change the kind of mixups already available to the
    undisciplined programmer, and it is completely eliminated by the
    standard practice of using shared headers for declarations.



    Another is to do with defining default values (essential if named
    arguments are to be fully used). First, similar thing to the above:

       void F(int a = x + y);
       void F(int a = DEFAULT);


    Default arguments are most certainly not essential to make named
    parameters useful.

    Then the advantage is minimal. They are useful when there are lots of
    parameters, where only a few are essential, and the rest are various
    options.

    That is one use-case, yes.  Generally, functions with large numbers of parameters are frowned upon anyway - there are typically better ways to handle such things.

    Where named parameters shine is when you have a few parameters that have
    the same type.  "void control_leds(bool red, bool green, bool blue);".
    There are a variety of ways you can make a function like this in a
    clearer or safer way in C, but it requires a fair amount of extra
    boilerplate code (to define enum types for simple clarity, or struct
    types for greater safety).  Named parameters would make such functions
    safer and clearer in a simple way.


    They /can/ be a nice thing to have, but they are merely icing on the
    cake.  Still, there is an obvious and C-friendly way to handle this
    too - the default values must be constant expressions.

    Well, the most common default value is 0. But do you mean actual
    literals, or can you use macro or enum names?

    I mean actual constant expressions, as C defines them.  That includes constants (now called "literals" in C23), constant expressions (such as
    "2 * 10"), enumeration constants, and constexpr constants (in C23). Basically, things that you could use for initialisation of a variable at
    file scope.


    Because it is those name resolutions that are the problem, not whether
    the result is a compile-time constant expression.


    I don't see that at all.

    It probably wouldn't be too much of a problem in C, since outside of a function, there is only one scope anyway. But it can be illustrated like
    this:

    enum {x=100};
    void F(int a = x);

    int main(void) {
    enum {x=200};
    void F(int a = x);

    F();
    }

    What default value would be used for this call, 100 or 200? Or could
    there actually be two possible defaults for the same function?

    Declaring functions inside another is uncommon. But you can do similar
    things at file scope with #define and #undef.

    Or maybe the default value uses names defined in a header, but a
    different translation unit could use a different header, or it might
    just have a different expression anyway.

    (I would disallow this:

    void F(int a, int b = a)

    where the default value for 'b' is the parameter 'a'. That would be
    ill-defined and awkward to implement, plus you could have parameters
    defaulting to each other.)


    Not really; the above is inside a formal parameter list, where '=' has
    no special meaning.

    That is exactly the point - "=" has no special meaning inside a function call.

    But that wasn't a function call! So you can use '=' in declaration, and
    perhaps '.' and '=' in a call:

    void F(a = 0);

    F(.a = 77);


    Fundamental matters such as this are best decided early in the design
    of a language, rather than bolted on afterwards.

    The funny thing is that my MessageBox example is a C function exported
    by WinAPI, and I was able to superimpose keyword arguments on top.
    Since I have to write my own bindings to such functions anyway.

    The MS docs for WinAPI do tend to show function declarations with
    fully named parameters, which also seem to be retained in gcc's
    windows.h (but not in my cut-down one).

    gcc does not have a "windows.h".  You are conflating gcc with some
    windows packaging of gcc with additional tools, libraries and headers.

    Huh? Do you really want to go down that path of analysing exactly what
    gcc is and isn't? 'gcc' must be the most famous C compiler on the planet!

    Yes we all know that 'gcc' /now/ stands for 'gnu compiler collection' or something, and that it is a driver program for a number of utilities.
    But this is a C group which has informally mentioned 'gcc' for decades
    across tens of thousands of posts, but you had to bring it up now?

    Any viable C compiler that targets Windows, gcc included, needs to
    provide windows.h.


    But it would need defaults added to make it useful:

    Strangely, many people have been able to write code using the MS API
    without named parameters or defaults.

    Yes, and we know what such code looks like, with long chains of
    mysterious arguments, many of which are zeros or NULLS:

    hwnd = CreateWindowEx(
    0,
    szAppName,
    "Hello, world!",
    WS_OVERLAPPEDWINDOW|WS_VISIBLE,
    300,
    100,
    400,
    400,
    NULL,
    NULL,
    0,
    NULL);

    Even without named arguments, just with default values and with only trailing arguments allowed to be omitted, those last 4 arguments could be dropped.

    (BTW I swapped those first two NULLs around; I guess you didn't notice!)


    Let's not pretend that MS's API's are good examples of clear design!
    (And please don't bother picking other non-MS examples that are the same
    or worse.)

    We all have to use libraries that other people have designed.

  • From David Brown@21:1/5 to bart on Thu Apr 10 16:55:05 2025
    On 10/04/2025 16:29, bart wrote:
    On 10/04/2025 14:06, David Brown wrote:
    On 10/04/2025 13:42, bart wrote:
    On 10/04/2025 08:53, David Brown wrote:
    On 09/04/2025 21:11, bart wrote:
    On 09/04/2025 18:26, BGB wrote:

    It might have also been interesting if C allowed optional named
    arguments:
    int foo(int x=3, int y=4)
    {
       return x+y;
    }

    foo() => 7
    foo(.y=2) => 5

    Likely would be following any fixed arguments (if present), and
    likely (for sake of implementation sanity) named arguments and
    varargs being mutually exclusive (alternative being that named
    arguments precede varargs if both are used).

    Well, at least ".y=val" as "y: val" likely wouldn't go over well
    even if it is what several other languages with this feature used
    (well or, "y=val", which is used in some others).

    In the most likely case, the named argument form would be
    transformed into the equivalent fixed argument form at compile time.
    So: "foo(.y=2)" would be functionally equivalent to "foo(3,2)".
    There are all sorts of problems in adding this to C. For example,
    this is legal:

       void F(int a, float b, char* c);
       void F(int c, float a, char* b);
       void F(int b, float c, char* a) {}

    The sets of parameter names are all different (and that's in the
    same file!); which is the official set?

    C has had flexibility here for all sorts of reasons.  But if named
    parameters were to be added to the language without significant
    extra syntax, then this particular issue could be solved in at least
    two very simple ways.  Either say that named parameter syntax can
    only be used if all of the function's declarations in the
    translation unit have consistent naming, or say that the last
    declaration in scope is the one used.  (My guess would be that the
    later, with compilers offering warnings about the former.)

    Of course that lets someone declare "void f(int a, int b);" in one
    file and "void f(int b, int a);" in a different one - but that does
    not noticeably change the kind of mixups already available to the
    undisciplined programmer, and it is completely eliminated by the
    standard practice of using shared headers for declarations.



    Another is to do with defining default values (essential if named
    arguments are to be fully used). First, similar thing to the above:

       void F(int a = x + y);
       void F(int a = DEFAULT);


    Default arguments are most certainly not essential to make named
    parameters useful.

    Then the advantage is minimal. They are useful when there are lots of
    parameters, where only a few are essential, and the rest are various
    options.

    That is one use-case, yes.  Generally, functions with large numbers of
    parameters are frowned upon anyway - there are typically better ways
    to handle such things.

    Where named parameters shine is when you have a few parameters that
    have the same type.  "void control_leds(bool red, bool green, bool
    blue);". There are a variety of ways you can make a function like this
    in a clearer or safer way in C, but it requires a fair amount of extra
    boilerplate code (to define enum types for simple clarity, or struct
    types for greater safety).  Named parameters would make such functions
    safer and clearer in a simple way.
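    In today's C, the struct-based workaround mentioned above can be sketched like this (control_leds and the leds struct are invented for illustration); designated initializers give call sites the clarity of named arguments, and omitted members default to zero:

    ```c
    #include <stdio.h>
    #include <stdbool.h>

    /* Hypothetical example: wrap the parameters in a struct so that call
       sites can name each argument with designated initializers. */
    struct leds { bool red, green, blue; };

    static void control_leds(struct leds l) {
        printf("red=%d green=%d blue=%d\n", l.red, l.green, l.blue);
    }

    int main(void) {
        /* Members not mentioned are zero-initialized, i.e. false. */
        control_leds((struct leds){ .red = true, .blue = true });
        return 0;
    }
    ```

    The cost is the boilerplate struct definition per function, which is exactly the trade-off described above.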


    They /can/ be a nice thing to have, but they are merely icing on the
    cake.  Still, there is an obvious and C-friendly way to handle this
    too - the default values must be constant expressions.

    Well, the most common default value is 0. But do you mean actual
    literals, or can you use macro or enum names?

    I mean actual constant expressions, as C defines them.  That includes
    constants (now called "literals" in C23), constant expressions (such
    as "2 * 10"), enumeration constants, and constexpr constants (in C23).
    Basically, things that you could use for initialisation of a variable
    at file scope.


    Because it is those name resolutions that are the problem, not
    whether the result is a compile-time constant expression.


    I don't see that at all.

    It probably wouldn't be too much of a problem in C, since outside of a function, there is only one scope anyway. But it can be illustrated like this:

      enum {x=100};
      void F(int a = x);

      int main(void) {
          enum {x=200};
          void F(int a = x);

          F();
      }

    What default value would be used for this call, 100 or 200? Or could
    there actually be two possible defaults for the same function?

    Defaults for function parameters would, I think, be local to the
    declaration. They are not part of the function definition, but let the
    call site fill in the blanks. So I'd say that the default used by F
    here would be 100 outside of main(), and 200 inside the scope of the
    overriding declaration. (I'd also say the same if there were changes to
    the parameter names.)


    Declaring functions inside another is uncommon.

    Sure. And declaration of any external symbols outside of shared headers
    is usually discouraged by coding standards.

    But you can do similar
    things at file scope with #define and #undef.


    No, that happens at an earlier stage in the processing.

    Or maybe the default value uses names defined in a header, but a
    different translation unit could use a different header, or it might
    just have a different expression anyway.

    (I would disallow this:

       void F(int a, int b = a)

    where the default value for 'b' is the parameter 'a'. That would be ill-defined and awkward to implement, plus you could have parameters defaulting to each other.)


    Yes. If defaults have to be constant expressions, this would not be
    allowed.

    Of course, there are always going to be people playing silly buggers:

    enum { a = 10 };
    void F(int a, int b = a);

    The compiler should resolve these according to the scope rules for
    function declarations.


    Not really; the above is inside a formal parameter list, where '='
    has no special meaning.

    That is exactly the point - "=" has no special meaning inside a
    function call.

    But that wasn't a function call! So you can use '=' in declaration, and perhaps '.' and '=' in  a call:

       void F(a = 0);

       F(.a = 77);


    Ah, you mean just using "=" inside a declaration to give a default
    parameter? That's not bad.


    Fundamental matters such as this are best decided early in the
    design of a language, rather than bolted on afterwards.

    The funny thing is that my MessageBox example is a C function
    exported by WinAPI, and I was able to superimpose keyword arguments
    on top. Since I have to write my own bindings to such functions anyway.

    The MS docs for WinAPI do tend to show function declarations with
    fully named parameters, which also seem to be retained in gcc's
    windows.h (but not in my cut-down one).

    gcc does not have a "windows.h".  You are conflating gcc with some
    windows packaging of gcc with additional tools, libraries and headers.

    Huh? Do you really want to go down that path of analysing exactly what
    gcc is and isn't? 'gcc' must be the most famous C compiler on the planet!


    Everyone else already knows gcc is a compiler. It certainly does not
    have a "windows.h" header. It is that simple.

    Yes we all know that 'gcc' /now/ stands for 'gnu compiler collection' or something, and that it is a driver program for a number of utilities.
    But this is a C group which has informally mentioned 'gcc' for decades
    across tens of thousands of posts, but you had to bring it up now?

    Any viable C compiler that targets Windows, gcc included, needs to
    provide windows.h.

    A C /implementation/ targeting Windows is likely to have some
    Windows-specific headers packaged with it. But it is not part of gcc,
    any more than "netfilter.h" is part of gcc despite being on any Linux
    system with gcc installed. C compilers generally only have a few
    standard headers - roughly, the ones matching the "freestanding"
    execution environment where you have type declarations but few or no
    function declarations.

    Go to the gcc website, download a release tarball, and search for
    "windows.h". You won't find it. It is not part of gcc.

    If, on the other hand, you get a package with gcc ported to Windows
    along with a library, some headers, other tools like an assembler and
    linker, then there is likely to be a "windows.h". But the version of
    that file and its contents will vary /hugely/ from package to package -
    msys, WSL, Cygwin, packaged with Code::Blocks, or whatever. Sometimes "windows.h" will be written specifically for the package or the library, sometimes it will be copied from MSVC, Borland C, or somewhere else.
    The same "windows.h" might be used with clang for Windows, tcc, or other compilers. Thus it makes no sense at all to talk about "gcc's windows.h".



    But it would need defaults added to make it useful:

    Strangely, many people have been able to write code using the MS API
    without named parameters or defaults.

    Yes, and we know what such code looks like, with long chains of
    mysterious arguments, many of which are zeros or NULLS:

        hwnd = CreateWindowEx(
            0,
            szAppName,
            "Hello, world!",
            WS_OVERLAPPEDWINDOW|WS_VISIBLE,
            300,
            100,
            400,
            400,
            NULL,
            NULL,
            0,
            NULL);

    Even without named arguments, just with default values and with only trailing arguments allowed to be omitted, those last 4 arguments could be dropped.

    It is up to the C programmer to write this in a clear and maintainable
    way. I agree that named parameters could make that task easier.
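    One way to approximate default argument values in standard C, sketched here with an invented make_window function rather than the real CreateWindowEx, is a variadic macro over a compound literal: a later designated initializer overrides an earlier one for the same member (C11 6.7.9p19), so the caller's initializers win over the defaults. (Calling the macro with no arguments relies on empty __VA_ARGS__, a GNU extension before C23.)

    ```c
    #include <stdio.h>

    /* Hypothetical window-creation API used only for illustration. */
    struct win_args { int x, y, w, h; };

    static void make_window_impl(struct win_args a) {
        printf("%d %d %d %d\n", a.x, a.y, a.w, a.h);
    }

    /* Defaults first; the caller's designated initializers override them. */
    #define make_window(...) \
        make_window_impl((struct win_args){ .x = 300, .y = 100, \
                                            .w = 400, .h = 400, __VA_ARGS__ })

    int main(void) {
        make_window();          /* all defaults */
        make_window(.w = 800);  /* override just the width */
        return 0;
    }
    ```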


    (BTW I swapped those first two NULLs around; I guess you didn't notice!)


    Let's not pretend that MS's API's are good examples of clear design!
    (And please don't bother picking other non-MS examples that are the
    same or worse.)

    We all have to use libraries that other people have designed.



  • From James Kuyper@21:1/5 to Michael S on Fri Apr 11 12:27:59 2025
    On 4/10/25 04:50, Michael S wrote:
    ...
    I merely wanted to say that it is pretty easy to write legal, if not necessarily sensible, code that uses a variable named 'memcpy' and a
    function named 'size_t'. OTOH, you can't name your variable 'break' or 'continue'. Or even 'bool', if you happen to use a C23 compiler.

    Yes, the rules for reserved identifiers are different for the keywords
    that are part of the language syntax, than for the identifiers that
    identify parts of the standard library. Lots of other things are
    different between them, too. However, they are still both parts of a
    conforming implementation of C, one covered by clause 6, and the other
    by clause 7.
    Also, note that all identifiers from the standard library that have
    external linkage are reserved for use as identifiers with external
    linkage. memcpy has external linkage, so you cannot define such a
    variable with external linkage. size_t is a typedef, which has no linkage.
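    A minimal sketch of that distinction; the block-scope declarations below have no linkage, so they are legal (if inadvisable), whereas a file-scope `int memcpy;` with external linkage would not be:

    ```c
    #include <string.h>  /* declares memcpy with external linkage */
    #include <stdio.h>

    int main(void) {
        /* Legal but unwise: these identifiers have no linkage, so the
           library's reservations on external-linkage names don't apply. */
        int memcpy = 42;        /* hides the library function here */
        double size_t = 3.5;    /* hides the typedef in this scope */
        printf("%d %.1f\n", memcpy, size_t);
        return 0;
    }
    ```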

  • From Tim Rentsch@21:1/5 to Alexis on Fri Apr 11 09:34:58 2025
    Alexis <flexibeast@gmail.com> writes:

    Thought people here might be interested in this image on Jens Gustedt's
    blog, which translates section 6.2.5, "Types", of the C23 standard
    into a graph of inclusions:

    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/

    That diagram gets an F for presentation.

    I can't help feeling that the information could have been
    presented in a form that is simpler and easier to take in.

    Disclaimer: the outline below is meant to convey all the
    information that is present in the diagram (and perhaps a tiny
    bit more), but I'm not sure I got everything, and I probably
    didn't. YGWYPF.

    Disclaimer 2: the notation used is meant to be self-explanatory.
    Don't blame me if it isn't. :/

    Scalar
      Pointer
        [regular]
        nullptr_t
      Arithmetic (basic)
        Complex (floating)
          _Complex float
          _Complex double
          _Complex long double
        Real
          Real floating (floating)
            decimal floating
              _Decimal32
              _Decimal64
              _Decimal128
            [plain]
              float
              double
              long double
          Integer
            Enumeration (! basic)
            Standard
              char (char) (promotes)
              Standard Signed Integer (signed integer)
                signed char (char) (promotes)
                signed short (promotes)
                signed int
                signed long
                signed long long
                [unadorned int]
              Standard Unsigned Integer (unsigned integer)
                _Bool
                unsigned char (char)
                unsigned short (promotes)
                unsigned int
                unsigned long
                unsigned long long
            Extended
              Extended Signed (signed integer)
              Extended Unsigned (unsigned integer)
            Bit-precise integers
              Bit-precise signed integer [* widths]
              Bit-precise unsigned integer [* widths]
                unsigned _BitInt[1]

    Incidentally, [unadorned int] is meant to reflect the difference
    between using 'int' and 'signed int' for the type of a bitfield.
    I don't know if that distinction still exists in C23.

  • From Tim Rentsch@21:1/5 to Alexis on Fri Apr 11 09:48:11 2025
    Alexis <flexibeast@gmail.com> writes:

    Thought people here might be interested in this image on Jens Gustedt's
    blog, which translates section 6.2.5, "Types", of the C23 standard
    into a graph of inclusions:

    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/

    By the way, regarding the question of why types like size_t are not
    in the diagram, there is a simple explanation. All the types shown
    in the diagram are guaranteed to be distinct.[*] Types like size_t,
    ptrdiff_t, and so forth, are not new types, but simply different
    names for a type already represented in the diagram.

    [*] This statement assumes that a bit-precise type whose width
    matches one of the standard integer types is still a distinct type.
    I don't know if C23 actually follows that rule.

    Editorial comment: my understanding is that there is an asymmetry
    regarding the bit-precise types, in that there is an unsigned
    bit-precise type of width 1, but not a signed bit-precise type of
    width 1. Assuming that is so, IMO it is a galactically stupid
    omission: a signed bit-precise integer of width 1 would very
    naturally hold the two values 0 and -1, which is a useful type to
    have in some circumstances, and symmetry would be preserved.
    Someone didn't have their Wheaties that morning when that decision
    was made.
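    The two-value type described here already exists in miniature as a 1-bit signed bit-field, which on two's complement (now mandated by C23) holds exactly 0 and -1:

    ```c
    #include <stdio.h>

    /* A 1-bit signed bit-field can represent only 0 and -1 on two's
       complement; 'signed int' is spelled out because the signedness of
       a plain 'int' bit-field is implementation-defined. */
    struct two_valued { signed int s : 1; unsigned int u : 1; };

    int main(void) {
        struct two_valued t = { .s = -1, .u = 1 };
        printf("s=%d u=%u\n", t.s, t.u);
        return 0;
    }
    ```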

  • From Lawrence D'Oliveiro@21:1/5 to Keith Thompson on Sat Apr 12 05:42:00 2025
    On Wed, 09 Apr 2025 14:56:47 -0700, Keith Thompson wrote:

    A trailing comma in an argument list in a function call ...

    I wasn’t talking about argument lists, but yes, they can be useful there
    as well.

  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Sat Apr 12 05:43:22 2025
    On Thu, 10 Apr 2025 11:37:30 +0300, Michael S wrote:

    So, IMHO, if C waits for C++ then it will wait forever.

    Seems like C is already committed to avoiding incompatibilities with C++,
    if the decision on thousands separators in numbers is anything to go by.

  • From Lawrence D'Oliveiro@21:1/5 to bart on Sat Apr 12 05:44:52 2025
    On Thu, 10 Apr 2025 12:08:58 +0100, bart wrote:

    * Each function now needs an accompanying struct

    * The function header does not list the parameter names or types

    Both are housekeeping aspects which could probably be handled with macros (waves hands airily).

    * When structs are passed by-value as is the case here, it can mean
    copying the struct, an extra overhead

    No more so than copying the argument list in the first place, surely.

    * It can also mean constructing an argument list in memory, rather than passing arguments efficiently in registers

    That’s an implementation issue.

  • From James Kuyper@21:1/5 to Lawrence D'Oliveiro on Sat Apr 12 10:10:56 2025
    On 4/12/25 01:43, Lawrence D'Oliveiro wrote:
    On Thu, 10 Apr 2025 11:37:30 +0300, Michael S wrote:

    So, IMHO, if C waits for C++ then it will wait forever.

    Seems like C is already committed to avoiding incompatibilities with C++,
    if the decision on thousands separators in numbers is anything to go by.

    As a matter of official policy, the C and C++ committees are both
    committed to avoiding gratuitous incompatibilities between the two languages. That simply means that if you are proposing a change to
    either language that would create an incompatibility, you need to have a sufficiently good justification, and your proposal is more likely to be approved if you can redesign it to avoid the incompatibility. It also
    means that changes to make them more compatible need less justification
    than ones that don't.

  • From David Brown@21:1/5 to Lawrence D'Oliveiro on Sat Apr 12 17:21:45 2025
    On 12/04/2025 07:43, Lawrence D'Oliveiro wrote:
    On Thu, 10 Apr 2025 11:37:30 +0300, Michael S wrote:

    So, IMHO, if C waits for C++ then it will wait forever.

    Seems like C is already committed to avoiding incompatibilities with C++,
    if the decision on thousands separators in numbers is anything to go by.

    It has been committed to avoiding /unnecessary/ incompatibilities with
    C++ since perhaps C99. The C and C++ committees will not introduce
    similar but incompatible features to each other unless there is very
    good reason to do so. The fact that some people think 1_000 is nicer
    than 1'000 is not nearly enough to pick a different type of digit
    separator for C.

  • From Lawrence D'Oliveiro@21:1/5 to Janis Papanagnou on Mon Apr 14 04:33:32 2025
    On Mon, 7 Apr 2025 21:49:02 +0200, Janis Papanagnou wrote:

    A better unit is, IMO, a second resolution (which at least is a basic physical unit) and a separate integer for sub-seconds.

    I worked out that an integer of a little over 200 bits is sufficient to represent the age of the known Universe in units of the Planck interval (5.39e-44 seconds). Therefore, rounding to something more even, 256 bits
    should be more than enough to measure any physically conceivable time down
    to that resolution.
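    The arithmetic checks out; a quick sketch of the computation (using rough figures for the universe's age):

    ```c
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Rough figures: ~13.8e9 years at ~3.156e7 seconds per year,
           divided by the Planck interval of 5.39e-44 s. */
        double age_s    = 13.8e9 * 3.156e7;
        double planck_s = 5.39e-44;
        double bits = ceil(log2(age_s / planck_s));
        printf("%.0f bits\n", bits);  /* a little over 200, as claimed */
        return 0;
    }
    ```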

  • From Tim Rentsch@21:1/5 to Michael S on Mon Apr 14 01:24:49 2025
    Michael S <already5chosen@yahoo.com> writes:

    On Wed, 09 Apr 2025 13:52:15 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:

    [...distinction between hosted implementions and
    freestanding implementations...]

    May be in some formal sense headers and library routines that are
    mandatory for freestanding implementations belong to the same rank
    as core language. But in practice there exists an obvious
    difference. In the first case, name clashes are avoidable
    (sometimes with toothless threat that they can happen in the
    future) and in the second case they are unavoidable.

    It's hard for me to make sense of this comment. The only
    library routines that are required in standard C are those
    documented as part of a section for one of the standard headers.
    For freestanding implementations in particular, there are only
    two names (va_copy and va_end) that might correspond to library
    functions, and if they do then the names are reserved for that
    purpose. Do you mean to suggest that user code defining either
    va_copy or va_end as a symbol with external linkage is
    unavoidable? Any user code that does so could be summarily
    rejected by the implementation. It's hard to imagine anyone
    writing user code wanting to define either of those names as a
    symbol with external linkage.

    I merely wanted to say that it is pretty easy to write legal, if
    not necessarily sensible, code that uses a variable named 'memcpy'
    and a function named 'size_t'. OTOH, you can't name your variable
    'break' or 'continue'. Or even 'bool', if you happen to use a C23
    compiler.

    I sort of agree with you (even if in practice it isn't hard to
    avoid using identifiers like memcpy or size_t). I was confused
    because this problem doesn't have much to do with whether an
    implementation is freestanding or not. There are different kinds
    of identifiers, and the different kinds have different properties
    about where they may or may not be used. Do you really have a
    problem avoiding identifiers defined in this or that library
    header, either for all headers or just those headers required for
    freestanding implementations?

  • From Tim Rentsch@21:1/5 to All on Mon Apr 14 01:46:54 2025
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    [considering ways of importing all headers defined as part
    of the standard library]

    My point is that, as far as I'm aware, nobody has implemented
    "implicitly include all the standard headers", either as a compiler
    option or as a wrapper script. I'm sure somebody has (I could do
    it in a few minutes), but it's just not something that programmers
    appear to want.

    Of course part of the motivation for *not* wanting this is that
    it results in non-portable code, and if it were standardized that
    wouldn't be an issue.

    And if it were standardized, <assert.h> would raise some issues,
    since NDEBUG needs to be defined or not defined before including it.

    Not really a problem, since if a different choice for NDEBUG were
    desired then it could be #define'd or #undef'ed, as appropriate,
    followed by another #include <assert.h>.
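    That re-inclusion trick is explicitly sanctioned: <assert.h> is the one standard header whose effect depends on the state of NDEBUG each time it is included. A small sketch:

    ```c
    #include <stdio.h>
    #include <assert.h>   /* NDEBUG undefined: assert() is active */

    #define NDEBUG
    #include <assert.h>   /* assert() now expands to ((void)0) */

    static int runs_with_asserts_off(void) {
        assert(0);        /* disabled here, so this does nothing */
        return 1;
    }

    #undef NDEBUG
    #include <assert.h>   /* assert() is active again */

    int main(void) {
        assert(1 + 1 == 2);  /* active and passing */
        printf("%d\n", runs_with_asserts_off());
        return 0;
    }
    ```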

    That said, it's hard to imagine many people wanting such a thing.
    It's a very rare translation unit that needs or even wants access
    to symbols defined in every header in the standard library. And
    it flies in the face of the common practice of #include'ing only
    those headers that are actually needed in each translation unit.

  • From Tim Rentsch@21:1/5 to Michael S on Mon Apr 14 01:59:24 2025
    Michael S <already5chosen@yahoo.com> writes:

    On Wed, 09 Apr 2025 13:14:55 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    [may trailing commas in argument lists be accepted, or
    must they be rejected?]

    It is required in the sense that it is a syntax error,
    and syntax errors require a diagnostic.

    Trailing commas in argument lists and/or parameter lists
    could be accepted as an extension, even without giving a
    diagnostic as I read the C standard, but implementations
    are certainly within their rights to reject them.

    I have no doubts that implementations have full rights to reject
    them. The question was about possibility to accept them and
    especially about possibility to accept without diagnostics.
    So, it seems, there is no consensus about it among few posters
    that read the relevant part of the standard.

    I don't think anyone should care about that. If there were any
    significant demand for allowing such trailing commas then someone
    would implement it, and people would use it even if in some
    technical sense it meant that an implementation supporting it
    would be nonconforming. Besides, the opinions of people posting
    in comp.lang.c carry zero weight; the only opinions that matter
    are those of people on the ISO C committee, and the major compiler
    writers, and none of those people bother posting here.
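    For contrast, standard C already permits trailing commas in two nearby contexts, initializer lists and (since C99) enum declarations, just not in argument or parameter lists:

    ```c
    #include <stdio.h>

    enum color { RED, GREEN, BLUE, };  /* trailing comma: legal since C99 */

    int main(void) {
        int a[] = { 1, 2, 3, };        /* trailing comma: always legal */
        /* printf("%d\n", a[0],);      // would be a syntax error */
        printf("%d %d\n", a[2], (int)BLUE);
        return 0;
    }
    ```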

  • From Tim Rentsch@21:1/5 to Keith Thompson on Mon Apr 14 02:10:16 2025
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    [...]

    Trailing commas in argument lists and/or parameter lists
    could be accepted as an extension, even without giving a
    diagnostic as I read the C standard, but implementations
    are certainly within their rights to reject them.

    I believe a diagnotic is required.

    C17 5.1.1.3:

    A conforming implementation shall produce at least one
    diagnostic message (identified in an implementation-defined
    manner) if a preprocessing translation unit or translation
    unit contains a violation of any syntax rule or constraint,
    even if the behavior is also explicitly specified as undefined
    or implementation-defined.

    A trailing comma on an argument or parameter list is a violation
    of a syntax rule.

    I believe a diagnostic is not required, because the C standard
    explicitly allows extensions. If such diagnostics were required
    even for constructions that are part of extensions, then there is no
    reason to allow extensions, because whatever behavior is desired
    could be done anyway, under the freedom granted by undefined
    behavior. It would be stupid to explicitly grant permission to do
    something if it could be done anyway without the permission. And
    the people who wrote the C standard are not stupid.

  • From Michael S@21:1/5 to Tim Rentsch on Mon Apr 14 12:55:29 2025
    On Mon, 14 Apr 2025 01:24:49 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    about where they may or may not be used. Do you really have a
    problem avoiding identifiers defined in this or that library
    header, either for all headers or just those headers required for freestanding implementations?

    I don't know. In order to know I'd have to include all standard headers
    into all of my C files.
    But I would guess that for headers required for freestanding
    implementations I would have no problems.
    But that's me. People with less experience, or with less of a tendency
    to recollect unimportant things (most likely at the cost of reduced
    reliability of the memory store for more important things), can have a
    different experience.

  • From Michael S@21:1/5 to Tim Rentsch on Mon Apr 14 12:44:26 2025
    On Mon, 14 Apr 2025 01:59:24 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:

    On Wed, 09 Apr 2025 13:14:55 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    [may trailing commas in argument lists be accepted, or
    must they be rejected?]

    It is required in the sense that it is a syntax error,
    and syntax errors require a diagnostic.

    Trailing commas in argument lists and/or parameter lists
    could be accepted as an extension, even without giving a
    diagnostic as I read the C standard, but implementations
    are certainly within their rights to reject them.

    I have no doubts that implementations have full rights to reject
    them. The question was about possibility to accept them and
    especially about possibility to accept without diagnostics.
    So, it seems, there is no consensus about it among few posters
    that read the relevant part of the standard.

    I don't think anyone should care about that. If there were any
    significant demand for allowing such trailing commas then someone
    would implement it, and people would use it even if in some
    technical sense it meant that an implementation supporting it
    would be nonconforming.

    Personally, I'd use this feature if it were standard. I find the way
    I write printf calls with long argument lists (and I do it regularly),
    with a comma as the first non-space character on each line, less than
    ideal aesthetically.
    But if it were a non-standard feature supported by both gcc and
    clang, I would hesitate.

    Besides, the opinions of people posting
    in comp.lang.c carry zero weight; the only opinions that matter
    are those of people on the ISO C committee, and the major compiler
    writers, and none of those people bother posting here.

    My impression was that Philipp Klaus Krause that posts here, if
    infrequently, is a member of WG14.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Janis Papanagnou on Mon Apr 14 05:46:21 2025
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:

    On 03.04.2025 06:06, Tim Rentsch wrote:

    Kaz Kylheku <643-408-1753@kylheku.com> writes:

    [some symbols are defined in more than one header]

    (In my opinion, things would be better if headers were not allowed
    to behave as if they include other headers, or provide identifiers
    also given in other headers. Not in ISO C, and not in POSIX.
    Every identifier should be declared in exactly one home header,
    and no other header should provide that definition. [...])

    Not always practical. A good example is the type size_t. If a
    function takes an argument of type size_t, then the symbol size_t
    should be defined, no matter which header the function is being
    declared in. Similarly for NULL for any function that has defined
    behavior on some cases of arguments that include NULL. No doubt
    there are other compelling examples.

    I think that all that's said above (by Kaz and you) is basically
    correct.

    An interesting statement, considering that Kaz's comment and my
    comment are mutually contradictory.

    Obviously [to me], 'size_t' and 'NULL' are such fundamental
    entities (a standard type and a standard pointer constant literal)
    that they should have been an inherent part of the "C" language,
    and not #include'd.

    You haven't really given any reason why you think so. You're just
    substituting one subjective property ("fundamental") for another
    (should be part of the language proper). In either case you're
    doing nothing more than saying "I think it should be this way".

    I am guided by Tony Hoare's advice about programming languages: a
    programming language should include only those elements that will be
    used by every (nontrivial) program written in the language. Neither
    size_t nor NULL is needed to pass this test. Furthermore, in those
    cases where they are needed, they come along with no effort by
    virtue of being part of the header(s) used by the program. Since
    they are not always necessary, and since no effort is needed to make
    use of them in those situations where they are useful, there is no
    reason to have them be part of the core language.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to candycanearter07@candycanearter07.n on Mon Apr 14 17:46:08 2025
    On 2025-04-14, candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 04:33 this Monday (GMT):
    I worked out that an integer of a little over 200 bits is sufficient to
    represent the age of the known Universe in units of the Planck interval
    (5.39e-44 seconds). Therefore, rounding to something more even, 256 bits
    should be more than enough to measure any physically conceivable time down
    to that resolution.

    The problem then becomes storing that size.

    In a twist of verbal irony, his time here is measured by *Plonck* Intervals.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From candycanearter07@21:1/5 to Lawrence D'Oliveiro on Mon Apr 14 17:40:04 2025
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 04:33 this Monday (GMT):
    On Mon, 7 Apr 2025 21:49:02 +0200, Janis Papanagnou wrote:

    A better unit is, IMO, a second resolution (which at least is a basic
    physical unit) and a separate integer for sub-seconds.

    I worked out that an integer of a little over 200 bits is sufficient to represent the age of the known Universe in units of the Planck interval (5.39e-44 seconds). Therefore, rounding to something more even, 256 bits should be more than enough to measure any physically conceivable time down
    to that resolution.


    The problem then becomes storing that size.
    --
    user <candycane> is generated from /dev/urandom

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to BGB on Mon Apr 14 22:33:13 2025
    On Mon, 14 Apr 2025 13:36:07 -0500, BGB wrote:

    On 4/14/2025 12:40 PM, candycanearter07 wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 04:33 this Monday (GMT):

    I worked out that an integer of a little over 200 bits is sufficient
    to represent the age of the known Universe in units of the Planck
    interval (5.39e-44 seconds). Therefore, rounding to something more
    even, 256 bits should be more than enough to measure any physically
    conceivable time down to that resolution.

    The problem then becomes storing that size.

    More practical is storing the time in microseconds.

    Relative to what epoch?

    I figured that it would be hard to find an epoch less arbitrary than the
    Big Bang ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Keith Thompson on Mon Apr 14 23:41:16 2025
    On Mon, 14 Apr 2025 15:56:56 -0700, Keith Thompson wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Mon, 14 Apr 2025 13:36:07 -0500, BGB wrote:

    On 4/14/2025 12:40 PM, candycanearter07 wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 04:33 this Monday
    (GMT):
    I worked out that an integer of a little over 200 bits is sufficient
    to represent the age of the known Universe in units of the Planck
    interval (5.39e-44 seconds). Therefore, rounding to something more
    even, 256 bits should be more than enough to measure any physically
    conceivable time down to that resolution.

    The problem then becomes storing that size.

    More practical is storing the time in microseconds.

    Relative to what epoch?

    I figured that it would be hard to find an epoch less arbitrary than
    the Big Bang ...

    Why??

    That would not be practical or useful. The timing of the Big Bang is
    not known with great precision ...

    Neither is that of some fictional religious entity.

    So we pick some value close to where we think it is. And then discover in
    the future that it was some few million years before or after that point.
    No biggie.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to James Kuyper on Tue Apr 15 04:11:04 2025
    On Mon, 14 Apr 2025 23:25:26 -0400, James Kuyper wrote:

    On 4/14/25 19:41, Lawrence D'Oliveiro wrote:

    On Mon, 14 Apr 2025 15:56:56 -0700, Keith Thompson wrote:
    ...
    That would not be practical or useful. The timing of the Big Bang is
    not known with great precision ...

    Neither is that of some fictional religious entity.

    Not true. While his divinity is fictional, there might have been a
    person who was the inspiration for those stories. Whether or not he was
    real, the stories of his life are only consistent with a very specific
    time period ...

    Unfortunately, whoever threw in references to historical details to try to
    make the stories seem more plausible didn’t try very hard to keep them consistent.

    Remember that there was no “Year 1”. It was a few centuries before
    somebody decided something like “let’s call this year 615 A.D., and number backwards and forwards from there”.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Tue Apr 15 04:14:11 2025
    On Mon, 14 Apr 2025 18:46:22 -0700, Chris M. Thomasson wrote:

    Humm... Is the "Big Bang' nothing more than a hyper large and rather
    local explosion?

    “Local” does seem to be right. The usual assumption is that the entire Universe arose from a single “bang”, but there are troublesome little bits of evidence that can only be explained by resorting to multiple “bangs”.

    But given that our “big bang” was the single most important event to
    happen in our particular neck of the multiverse woods, it makes sense to reference our timekeeping from that ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Lawrence D'Oliveiro on Mon Apr 14 23:25:26 2025
    On 4/14/25 19:41, Lawrence D'Oliveiro wrote:
    On Mon, 14 Apr 2025 15:56:56 -0700, Keith Thompson wrote:
    ...
    That would not be practical or useful. The timing of the Big Bang is
    not known with great precision ...

    Neither is that of some fictional religious entity.

    Not true. While his divinity is fictional, there might have been a
    person who was the inspiration for those stories. Whether or not he was
    real, the stories of his life are only consistent with a very specific
    time period, which narrows the time period of his (possibly fictional)
    birth to within just a few years. The uncertainty in the timing of the
    Big Bang is currently about 59 million years.

    On 1977-01-01, international timekeepers started correcting for the
    fact that different atomic clocks measured time at different speeds
    because they were at different altitudes. As a result, that date is the
    epoch used in Barycentric Coordinate Time (TCB), Geocentric Coordinate
    Time (TCG), and Terrestrial Time (TT). I would therefore favor that
    epoch over any other that I can think of.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to BGB on Tue Apr 15 04:15:50 2025
    On Mon, 14 Apr 2025 19:43:04 -0500, BGB wrote:

    On 4/14/2025 5:33 PM, Lawrence D'Oliveiro wrote:

    I figured that it would be hard to find an epoch less arbitrary than
    the Big Bang ...

    But, we don't really need it.

    If so, could probably extend to 128 bits, maybe go to nanoseconds or picoseconds.

    The reason why I chose the Planck interval as the time unit is that
    quantum physics says that’s the smallest possible time interval that makes any physical sense. So there shouldn’t be any need to measure time more accurately than that.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Kaz Kylheku on Tue Apr 15 09:41:48 2025
    On 14.04.2025 19:46, Kaz Kylheku wrote:
    On 2025-04-14, candycanearter07 <candycanearter07@candycanearter07.nomail.afraid> wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 04:33 this Monday (GMT):
    I worked out that an integer of a little over 200 bits is sufficient to
    represent the age of the known Universe in units of the Planck interval
    (5.39e-44 seconds). Therefore, rounding to something more even, 256 bits
    should be more than enough to measure any physically conceivable time down
    to that resolution.

    The problem then becomes storing that size.

    In a twist of verbal irony, his time here is measured by *Plonck* Intervals.

    LOL! - YMMD.

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Lawrence D'Oliveiro on Tue Apr 15 10:06:55 2025
    On 4/15/25 00:11, Lawrence D'Oliveiro wrote:
    On Mon, 14 Apr 2025 23:25:26 -0400, James Kuyper wrote:

    On 4/14/25 19:41, Lawrence D'Oliveiro wrote:

    On Mon, 14 Apr 2025 15:56:56 -0700, Keith Thompson wrote:
    ...
    That would not be practical or useful. The timing of the Big Bang is
    not known with great precision ...

    Neither is that of some fictional religious entity.

    Not true. While his divinity is fictional, there might have been a
    person who was the inspiration for those stories. Whether or not he was
    real, the stories of his life are only consistent with a very specific
    time period ...

    Unfortunately, whoever threw in references to historical details to try to make the stories seem more plausible didn’t try very hard to keep them consistent.

    That's why there's a range of possible dates, rather than one specific
    date. Note that such inconsistencies can be expected, even if he's real.
    Most historical figures of his era who were not of high rank had poorly recorded births.

    Remember that there was no “Year 1”. It was a few centuries before somebody decided something like “let’s call this year 615 A.D., and number
    backwards and forwards from there”.

    No, Dionysius Exiguus didn't just randomly decide which year it was; he
    did his best to determine how many years it had been since the birth of
    Christ. The method he used to reach that conclusion is unknown, and is
    inconsistent with the range of dates currently considered reasonable by
    experts. If Jesus was a real person, the current best guess as to the
    date of his birth is somewhere between 6 and 4 BCE. See
    <https://en.wikipedia.org/wiki/Date_of_the_birth_of_Jesus> for more detail.

    The point is, the uncertainty in the date of his birth, whether
    fictional or real, is far less than the 59 million year uncertainty in
    the date of the Big Bang. In order for it to be comparably uncertain, we
    would have to be unsure whether he was incarnated in the Mesozoic or
    Cenozoic eras.

    Are you uncertain as to whether or not King Herod ruled during the
    Cretaceous?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to BGB on Tue Apr 15 14:08:07 2025
    BGB <cr88192@gmail.com> writes:
    On 4/14/2025 10:25 PM, James Kuyper wrote:
    On 4/14/25 19:41, Lawrence D'Oliveiro wrote:
    On Mon, 14 Apr 2025 15:56:56 -0700, Keith Thompson wrote:
    ...
    That would not be practical or useful. The timing of the Big Bang is
    not known with great precision ...

    Neither is that of some fictional religious entity.

    Not true. While his divinity is fictional, there might have been a
    person who was the inspiration for those stories. Whether or not he was
    real, the stories of his life are only consistent with a very specific
    time period, which narrows the time period of his (possibly fictional)
    birth to within just a few years. The uncertainty in the timing of the
    Big Bang is currently about 59 million years.


    He was a real person,

    There is no contemporaneous evidence; it was all written 70-100 years
    later.

    And we know from hard experience just how much a story can morph in
    a decade, much less a century.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to BGB on Tue Apr 15 14:10:42 2025
    BGB <cr88192@gmail.com> writes:
    On 4/14/2025 5:33 PM, Lawrence D'Oliveiro wrote:
    On Mon, 14 Apr 2025 13:36:07 -0500, BGB wrote:

    On 4/14/2025 12:40 PM, candycanearter07 wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> wrote at 04:33 this Monday (GMT):
    I worked out that an integer of a little over 200 bits is sufficient
    to represent the age of the known Universe in units of the Planck
    interval (5.39e-44 seconds). Therefore, rounding to something more
    even, 256 bits should be more than enough to measure any physically
    conceivable time down to that resolution.

    The problem then becomes storing that size.

    More practical is storing the time in microseconds.

    Relative to what epoch?


    Probably still Jan 1 1970...

    Technically, it depends on the timezone:

    $ date --date="@0"
    Wed Dec 31 16:00:00 PST 1969

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Chris M. Thomasson on Tue Apr 15 10:19:02 2025
    On Mon, 14 Apr 2025 18:46:22 -0700, Chris M. Thomasson wrote:

    Humm... Is the "Big Bang' nothing more than a hyper large and rather
    local explosion?

    No, as cosmology is currently understood, it is meaningless to talk
    about space or time before the Big Bang. The Big Bang is the event that
    starts both time and space. That makes it very different from any normal explosion. At the moment of the Big Bang, the entire universe was
    infinitely small, so literally everything was "local".

    An analogy I like is to think of the surface of a sphere, with space corresponding to longitude, and time corresponding to latitude. Asking
    about what happened before the Big Bang is as meaningful as asking what's
    south of 90S latitude.
    This analogy implies a universe that ends at the equivalent of latitude
    90N, but you could just as well use a hyperboloid of revolution as an
    analogy to a universe that starts but never ends. I think spheres are
    easier for most people to think about.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to James Kuyper on Tue Apr 15 14:28:27 2025
    On 2025-04-15, James Kuyper <jameskuyper@alumni.caltech.edu> wrote:
    On Mon, 14 Apr 2025 18:46:22 -0700, Chris M. Thomasson wrote:

    Humm... Is the "Big Bang' nothing more than a hyper large and rather
    local explosion?

    No, as cosmology is currently understood, it is meaningless to talk
    about space or time before the Big Bang.

    But you're doing it now, and I perceive meaning in the sentence.

    The Big Bang is the event that
    starts both time and space. That makes it very different from any normal explosion. At the moment of the Big Bang, the entire universe was
    infinitely small, so literally everything was "local".

    Then they refactored it with globals, and here we are. World wars,
    famines, disasters, ...

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to BGB on Tue Apr 15 19:22:47 2025
    On 15/04/2025 07:40, BGB wrote:
    On 4/14/2025 11:15 PM, Lawrence D'Oliveiro wrote:
    On Mon, 14 Apr 2025 19:43:04 -0500, BGB wrote:

    On 4/14/2025 5:33 PM, Lawrence D'Oliveiro wrote:

    I figured that it would be hard to find an epoch less arbitrary than
    the Big Bang ...

    But, we don't really need it.

    If so, could probably extend to 128 bits, maybe go to nanoseconds or
    picoseconds.

    The reason why I chose the Planck interval as the time unit is that
    quantum physics says that’s the smallest possible time interval that makes
    any physical sense. So there shouldn’t be any need to measure time more
    accurately than that.

    Quantum mechanics, the current theory, is not complete. Physicists are
    aware of many limitations. So while Planck time is the smallest
    meaningful time interval as far as we currently know, and we know of no
    reason to suspect that smaller times would be meaningful, it would be
    presumptuous to assume that we will never know of smaller time intervals.


    Practically, picoseconds are likely the smallest unit of time that
    people could practically measure or hope to make much use of.

    The fastest laser pulses so far are timed to 12-attosecond accuracy,
    about 100,000 times finer than a picosecond. Some subatomic particle
    lifetimes are measured in rontoseconds (10^-27 seconds). Picoseconds
    are certainly fast enough for most people, but certainly not remotely
    fast enough for high-speed or high-energy physics.


    While femtoseconds exist, light can only travel a very short distance
    in that unit of time, and likely no practical clock could be built (for
    similar reasons), so they are not worth bothering with (*).

    Physicists have measured times a thousand millionth of a femtosecond.
    It is not easy, of course, but not impossible.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to BGB on Tue Apr 15 19:10:01 2025
    BGB <cr88192@gmail.com> writes:
    On 4/15/2025 12:22 PM, David Brown wrote:

    I am not saying that the smaller times don't exist, but that there is no
    point in wasting bits encoding times more accurate than can be used by a
    computer running at a few GHz, with clock speeds that will likely never
    exceed a few GHz.

    This sets the practical limit mostly in nanosecond territory.

    If you try to close timing on any reasonable-speed processor design,
    you're talking tens of picoseconds for a 3 GHz design target.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to BGB on Tue Apr 15 22:56:51 2025
    On Tue, 15 Apr 2025 00:40:48 -0500, BGB wrote:

    Practically, picoseconds are likely the smallest unit of time that
    people could practically measure or hope to make much use of.

    “10⁻¹² seconds ought to be enough for anybody.”

    The lessons of software backward-compatibility baggage teach us that we
    need to think a bit beyond present-day technological limitations.

    Planck units are so small as to be essentially useless for any
    practical measurement.

    And as far as we know, that will always be true.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to BGB on Tue Apr 15 18:57:02 2025
    On 4/15/25 13:29, BGB wrote:
    On 4/15/2025 9:08 AM, Scott Lurndal wrote:
    BGB <cr88192@gmail.com> writes:
    He was a real person,
    ...

    This is not the appropriate forum for a discussion of the historicity
    of Jesus. The only vaguely relevant issue is that the date of his birth,
    whether he's divine, human, or fictional, is not known with any great
    accuracy, but is known with far greater precision than the date of the
    Big Bang. Please take the other aspects of this discussion somewhere
    else, where you'll find people more interested in the discussion and
    more competent to participate in it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to James Kuyper on Tue Apr 15 22:58:19 2025
    On Tue, 15 Apr 2025 10:06:55 -0400, James Kuyper wrote:

    On 4/15/25 00:11, Lawrence D'Oliveiro wrote:

    Unfortunately, whoever threw in references to historical details to try
    to make the stories seem more plausible didn’t try very hard to keep
    them consistent.

    That's why there's a range of possible dates ...

    No, there is no date that fits the claimed historical references.

    Remember that there was no “Year 1”. It was a few centuries before
    somebody decided something like “let’s call this year 615 A.D., and
    number backwards and forwards from there”.

    No, Dionysius Exiguus didn't just randomly decide which year it was; he
    did his best to determine how many years it had been since the birth of
    Christ. The method he used to reach that conclusion is unknown ...

    So how do you know he “did his best”?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to BGB on Tue Apr 15 23:00:09 2025
    On Tue, 15 Apr 2025 00:00:45 -0500, BGB wrote:

    He was a real person ...

    Who was?

    The prophecy said “he shall be called Emmanuel”. Nobody by that name appeared.

    Then again, it is very well possible he could reappear again in the not
    too distant future, and if so, better not to be on his bad side.

    Each religion says their sky fairy is the only true sky fairy, all other
    sky fairies are false.

    They can’t all be right on the first point, they can all be right on the second.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to James Kuyper on Tue Apr 15 23:10:52 2025
    On Tue, 15 Apr 2025 10:19:02 -0400, James Kuyper wrote:

    No, as cosmology is currently understood, it is meaningless to talk
    about space or time before the Big Bang.

    That’s one theory. But there are bits of evidence that don’t quite fit.

    Exhibit Number One: the smoothness of the cosmic microwave background radiation. Basically, if you look at the size of the Universe at any given point versus its age at that point, there is never enough time for random irregularities in the energy density to smooth themselves out over that distance.

    The “inflation” field was postulated to try to get around this. But that requires two assumptions: one, the inflation field turned on very early in
    the formation of our Universe, to suddenly expand it, much faster than
    light, to something much larger than a single atomic radius (like how
    blowing up a balloon smooths out any wrinkles in its skin). Two, the field
    then turned off at some point soon afterwards, we don’t know why or how.

    Because, if the field didn’t turn off, then it would keep on acting, and
    keep on creating new baby Universes, each with their own Big Bang,
    spawning off the parent one (and each baby in turn spawning off its own
    babies, and so on), right through to the present day.

    Another hypothesis to try to explain the smoothness of the CMB is that
    the Universe is actually older than the Big Bang, so what we are seeing
    is the accumulated effect from the current Bang and at least one other
    Bang before that. Possibly a whole endless series of Bangs.

    So you see, whichever way you try to explain away the available evidence,
    it seems to lead towards the idea of multiple Big Bangs.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to BGB on Tue Apr 15 23:01:00 2025
    On Tue, 15 Apr 2025 12:29:04 -0500, BGB wrote:

    From what I had read, both the Romans and Jewish rabbis had secondary
    written accounts about him, although in a less positive light, and
    lacking the more supernatural elements (and from different vantage
    points).

    No point in them writing about someone that didn't exist.

    Lots of people existed at that time and place. Doesn’t mean they were
    talking about the same person.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Keith Thompson on Tue Apr 15 20:53:52 2025
    On 4/15/25 18:56, Keith Thompson wrote:
    ...
    The uncertainty in the timing of January 1, 1970, where 1970 is a
    year number in the current almost universally accepted Gregorian
    calendar, is essentially zero.

    Modern cesium clocks are accurate to about 1 ns/day. That's an effect
    large enough that we can measure it, but cannot correct for it. We know
    that the clocks disagree with each other, but the closest we can come to
    correcting for that instability is to average over 450 different clocks;
    the average is 10 times more stable than the individual clocks.

    Note: the precision of cesium clocks has improved log-linearly since the
    1950s. They're 6 orders of magnitude better in 2008 than they were in
    1950. Who knows how much longer that will continue to be true?

    ... Same for any other less commonly
    used chosen epoch. The fact that the number 1970 is arbitrary
    is not a problem for software. In fact it's an advantage, since
    there's no uncertainty in the presence of any new information.

    I agree, which is why I identified that epoch as the one I preferred
    over both of those.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Lawrence D'Oliveiro on Tue Apr 15 21:02:34 2025
    On 4/15/25 18:58, Lawrence D'Oliveiro wrote:
    On Tue, 15 Apr 2025 10:06:55 -0400, James Kuyper wrote:

    On 4/15/25 00:11, Lawrence D'Oliveiro wrote:

    Unfortunately, whoever threw in references to historical details to try
    to make the stories seem more plausible didn’t try very hard to keep
    them consistent.

    That's why there's a range of possible dates ...

    No, there is no date that fits the claimed historical references.

    That's the norm, not the exception, for obscure events that happened
    that long ago. A historian who insisted that every date be consistent
    with every possible source before he could talk about an event that
    supposedly happened on that date would have very little to
    talk about prior to about 1800.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Keith Thompson on Wed Apr 16 02:11:05 2025
    On 2025-04-16, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Tue, 15 Apr 2025 12:29:04 -0500, BGB wrote:
    From what I had read, both the Romans and Jewish Rabbi's had secondary
    written accounts about him, although in a less positive light, and
    lacking in terms of the more supernatural elements (and from different
    vantage points).

    No point in them writing about someone that didn't exist.

    Lots of people existed at that time and place. Doesn’t mean they were
    talking about the same person.

    THIS IS NOT THE PLACE FOR A RELIGIOUS DEBATE.

    Please stop.

    Indeed, please stop, or else invite Rick Hodgkin for a balanced view.

    /duck

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to BGB on Wed Apr 16 07:41:17 2025
    On Tue, 15 Apr 2025 23:48:26 -0500, BGB wrote:

    On 4/15/2025 5:56 PM, Lawrence D'Oliveiro wrote:

    On Tue, 15 Apr 2025 00:40:48 -0500, BGB wrote:

    Practically, picoseconds are likely the smallest unit of time that
    people could practically measure or hope to make much use of.

    “10⁻¹² seconds ought to be enough for anybody.”

    The lessons of software backward-compatibility baggage teach us that we
    need to think a bit beyond present-day technological limitations.

    In all likelihood, computers will not get much faster (in terms of clock speeds) than they are already.

    That’s not the issue, though.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to James Kuyper on Wed Apr 16 07:42:53 2025
    On Tue, 15 Apr 2025 21:02:34 -0400, James Kuyper wrote:

    On 4/15/25 18:58, Lawrence D'Oliveiro wrote:

    No, there is no date that fits the claimed historical references.

    That's the norm, not the exception, for obscure events that
    happened that long ago.

    On the contrary, that is an unusual characteristic that is a reason for
    casting suspicion on these particular texts.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From candycanearter07@21:1/5 to Keith Thompson on Wed Apr 16 14:00:03 2025
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote at 23:42 this Tuesday (GMT):
    BGB <cr88192@gmail.com> writes:
    On 4/15/2025 9:10 AM, Scott Lurndal wrote:
    BGB <cr88192@gmail.com> writes:
    On 4/14/2025 5:33 PM, Lawrence D'Oliveiro wrote:
    On Mon, 14 Apr 2025 13:36:07 -0500, BGB wrote:

    On 4/14/2025 12:40 PM, candycanearter07 wrote:
    [snip]
    Relative to what epoch?


    Probably still Jan 1 1970...
    Technically, it depends on the timezone:

    Technically, it does not.

    POSIX defines the epoch as follows:

    Historically, the origin of UNIX system time was referred to as
    "00:00:00 GMT, January 1, 1970". Greenwich Mean Time is actually not
    a term acknowledged by the international standards community;
    therefore, this term, "Epoch", is used to abbreviate the reference
    to the actual standard, Coordinated Universal Time.

    The epoch is a specified moment in time. That moment can be
    expressed as midnight UTC Jan 1 1970, as 4PM PST Dec 31 1969,
    or (time_t)0. GMT/UTC is just a convenient way to specify it.

    You could also be GoLang and use MST January 2 2006 at 3:04:05 PM.
    (1/2 03:04:05 PM 2006 GMT-7)

    $ date --date="@0"
    Wed Dec 31 16:00:00 PST 1969

    Yes, the date command uses the local time zone by default.
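    That equivalence is easy to confirm directly in C; a minimal sketch,
    assuming a POSIX-style time_t that counts seconds since the epoch:

    ```c
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t t = (time_t)0;            /* the epoch itself */
        struct tm *utc = gmtime(&t);     /* break it down in UTC, not local time */
        char buf[64];

        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", utc);
        puts(buf);                       /* 1970-01-01 00:00:00 UTC */
        return 0;
    }
    ```

    Using localtime() instead of gmtime() gives the same moment expressed in
    the local zone, e.g. 4 PM PST on Dec 31 1969.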

    Well, and however much error there is from decades worth of leap
    seconds, etc...

    Yes, leap seconds are an issue (and would be for any of the proposed alternatives).

    But, yeah, better if one had a notion of time that merely measured
    absolute seconds since the epoch without any particular ties to the
    Earth's rotation or orbit around the sun. Whether or not its "date"
    matches exactly with the official Calendar date being secondary.

    That's called TAI; it ignores leap seconds. See clock_gettime()
    (defined by POSIX, not by ISO C). (Not all systems accurately record
    the number of leap seconds, currently 37.)

    Most systems don't use TAI for the system clock, because matching civil
    time is generally considered more important than counting leap seconds.
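    For the curious, a sketch of reading both clocks side by side. Note that
    CLOCK_TAI is Linux-specific (neither ISO C nor POSIX), and the reported
    offset will be 0 on systems where the kernel's TAI offset was never
    configured, rather than the correct 37 seconds:

    ```c
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec rt;
        clock_gettime(CLOCK_REALTIME, &rt);   /* UTC-based seconds since the epoch */
        printf("CLOCK_REALTIME: %lld s\n", (long long)rt.tv_sec);

    #ifdef CLOCK_TAI
        struct timespec tai;                  /* leap-second-free count (Linux only) */
        if (clock_gettime(CLOCK_TAI, &tai) == 0)
            printf("CLOCK_TAI:      %lld s (offset %lld s)\n",
                   (long long)tai.tv_sec,
                   (long long)(tai.tv_sec - rt.tv_sec));
    #endif
        return 0;
    }
    ```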

    [...]


    Datetime is a nightmare, this is why we use a simple seconds-since-X
    system.
    --
    user <candycane> is generated from /dev/urandom

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Keith Thompson on Wed Apr 16 20:04:02 2025
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
    candycanearter07 <candycanearter07@candycanearter07.nomail.afraid>
    writes:
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote at 23:42 this Tuesday (GMT):
    [...]
    The epoch is a specified moment in time. That moment can be
    expressed as midnight UTC Jan 1 1970, as 4PM PST Dec 31 1969,
    or (time_t)0. GMT/UTC is just a convenient way to specify it.

    You could also be GoLang and use MST January 2 2006 at 3:04:05 PM.
    (1/2 03:04:05 PM 2006 GMT-7)

    That's not an epoch. It's a reference time used in documentation,
    chosen because all the fields have unique values. It means that results
    of converting a time to the dozen or so supported layouts can be easily
    read.

    [...]

    Datetime is a nightmare, this is why we use a simple seconds-since-X
    system.

    Indeed. That makes it a slightly less unpleasant nightmare.

    Back in the mainframe days, it was common to use julian dates
    as they were both concise (5 BCD digits/20 bits) and sortable.

    YYDDD

    If time was needed, it was seconds since midnight in a reference
    timezone.
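    A minimal sketch of producing that YYDDD form in C (my own illustration,
    not actual mainframe code; yyddd() is a hypothetical helper name):

    ```c
    #include <stdio.h>

    /* Pack a Gregorian date as the 5-digit "Julian" YYDDD ordinal:
     * two-digit year followed by day-of-year (001..366). */
    static int yyddd(int year, int month, int day)
    {
        /* cumulative days before each month in a non-leap year */
        static const int cum[12] =
            { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334 };
        int doy  = cum[month - 1] + day;
        int leap = (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
        if (leap && month > 2)
            doy++;
        return (year % 100) * 1000 + doy;
    }

    int main(void)
    {
        printf("%05d\n", yyddd(2025, 4, 16));  /* 16 Apr 2025 -> 25106 */
        return 0;
    }
    ```

    Being a plain integer, it is both compact and sorts correctly within a
    century, which is exactly why it was popular.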

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to BGB on Wed Apr 16 23:13:58 2025
    On 16/04/2025 22:10, BGB wrote:

    <snip>

    One shorthand is to assume a year is 365.25 days (31557600
    seconds), and then base everything else off this (initially
    ignoring things like leap-years, etc, just assume that the number
    of days per year is fractional).

    Then, say, 2629800 seconds per month, ...

    For some other calculations, one can assume an integer number of
    days (365), just that each day is 0.07% longer.

    For date/time calculations, one could then "guess" the date, and
    jitter it back/forth as needed until it was consistent with the
    calendar math.

    Estimate and subtract the year, estimate and subtract the month,
    then the day. Then if we have landed on the wrong day, adjust
    until it fits.

    Not really sure if there was a more standard way to do this.

    Half a lifetime ago I found an algorithm on the Web and turned it
    into these two functions:

    long tojul(int yp, int mp, int dp)
    {
        long a = (14 - mp) / 12;
        long y = yp + 4800 - a;
        long m = mp + 12 * a - 3;
        return dp + (153 * m + 2) / 5 + 365 * y + y / 4 - y / 100 + y / 400 - 32045;
    }

    void fromjul(int *yp, int *mp, int *dp, long jdn)
    {
        long y = 4716, j = 1401, m = 2, n = 12, r = 4, p = 1461,
             v = 3, u = 5, s = 153, w = 2, b = 274277, c = -38;
        long f = jdn + j + (((4 * jdn + b) / 146097) * 3) / 4 + c;
        long e = r * f + v;
        long g = (e % p) / r;
        long h = u * g + w;
        *dp = (h % s) / u + 1;
        *mp = ((h / s + m) % n) + 1;
        *yp = e / p - y + (n + m - *mp) / n;
    }

    long jd = tojul(2025, 4, 16); /* gives 2460782 */

    fromjul(&y, &m, &d, 2460782); /* gives 2025, 4, 16 */

    Day of week: take the Julian date % 7, then 0 is Monday, 1 is
    Tuesday and so on:

    $ expr `./juldate 16/4/2025` % 7
    2
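    One way to gain confidence in a pair like this is a round-trip check.
    The sketch below repeats compact copies of the two functions so the file
    stands alone, then asserts that every day number over roughly eight
    centuries survives fromjul() followed by tojul():

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Gregorian date -> Julian day number (same algorithm as above) */
    static long tojul(int yp, int mp, int dp)
    {
        long a = (14 - mp) / 12, y = yp + 4800 - a, m = mp + 12 * a - 3;
        return dp + (153 * m + 2) / 5 + 365 * y + y / 4 - y / 100 + y / 400 - 32045;
    }

    /* Julian day number -> Gregorian date (constants folded in) */
    static void fromjul(int *yp, int *mp, int *dp, long jdn)
    {
        long f = jdn + 1401 + (((4 * jdn + 274277) / 146097) * 3) / 4 - 38;
        long e = 4 * f + 3, g = (e % 1461) / 4, h = 5 * g + 2;
        *dp = (h % 153) / 5 + 1;
        *mp = ((h / 153 + 2) % 12) + 1;
        *yp = e / 1461 - 4716 + (12 + 2 - *mp) / 12;
    }

    int main(void)
    {
        int y, m, d;

        assert(tojul(2025, 4, 16) == 2460782);
        fromjul(&y, &m, &d, 2460782);
        assert(y == 2025 && m == 4 && d == 16);

        /* round-trip every day number from ~1175 AD to ~2400 AD */
        for (long jdn = 2300000; jdn < 2600000; jdn++) {
            fromjul(&y, &m, &d, jdn);
            assert(tojul(y, m, d) == jdn);
        }
        puts("ok");
        return 0;
    }
    ```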

    It's not perfect, but its flaws have never affected me, so I've
    never had to find the time to fix them. Here are some problems:

    1) No account is taken of the 11-day shift in September 1752. If
    you care, for 2/9/1752 and prior you should add 11 days to the
    Julian date before using it in calculations.

    2) In the real world there was no year 0. Theoretically JD1 is 1
    Jan 4713BC, but it's out by a year for year 0, and another 11
    days for Sept 1752.

    If you draw a line after the inception of the Calendar (New
    Style), it's fine. If you don't, it should be easy enough to wrap.

    I have used these functions for over 25 years now and have always
    found them to be very reliable.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to Keith Thompson on Thu Apr 17 01:05:07 2025
    On 17/04/2025 00:31, Keith Thompson wrote:

    <lots of good stuff snipped>

    Like the Julian day number, it's
    useful for computing the number of days between dates.

    Indeed. <time.h> can do it, of course, but I find it a trifle
    clumsy for the purpose.

    Date-heavy loops are the other win:

    for (day = today(); day < enddate; day++)
    {
        fromjul(&y, &m, &d, day);
        if (day % 7 < SATURDAY) /* skip weekends */
        {
            fprintf(fp, "%02d/%02d/%04d,%d,%s\n", d, m, y, foo, bar);
        }
    }

    and so on.

    I suppose <time.h> can do that too, but I can't help thinking
    that it would be far less elegant... not that I've ever bothered
    to find out.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Richard Heathfield on Thu Apr 17 01:26:39 2025
    On 2025-04-17, Richard Heathfield <rjh@cpax.org.uk> wrote:
    On 17/04/2025 00:31, Keith Thompson wrote:

    <lots of good stuff snipped>

    Like the Julian day number, it's
    useful for computing the number of days between dates.

    Indeed. <time.h> can do it, of course, but I find it a trifle
    clumsy for the purpose.

    <sys/time.h> won't give me time(),
    But difftime() makes lovers feel,
    Like they've got something real.

    "time_t -- clock() of the heart", by cc.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Thu Apr 17 03:03:45 2025
    On Tue, 15 Apr 2025 12:17:53 -0700, Chris M. Thomasson wrote:

    Well. Humm... Actually, sometimes I ponder on _if_ the big bang was the result of a star going hyper-nova in our "parent" universe.

    No single star could be big enough to contain all the matter in our
    Universe.

    If the idea is true then our universe has children of its own? It
    creates a sort of infinite fractal cosmic tree in a sense. I don't know
    if it's true, but fun to think about... Fair enough?

    An infinite Universe could not have a nonzero mass density. But it could
    have a fractal mass distribution, with a Hausdorff dimension less than 3,
    so in effect a zero mass density overall. That could work.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to James Kuyper on Thu Apr 17 17:56:29 2025
    On 16/04/2025 02:53, James Kuyper wrote:
    On 4/15/25 18:56, Keith Thompson wrote:
    ...
    The uncertainty in the timing of January 1, 1970, where 1970 is a
    year number in the current almost universally accepted Gregorian
    calendar, is essentially zero.

    Modern Cesium clocks are accurate to about 1 ns/day. That's an effect
    large enough that we can measure it, but cannot correct for it. We know
    that the clocks disagree with each other, but the closest we can do to
    correcting for that instability is to average over 450 different clocks;
    the average is 10 times more stable than the individual clocks.

    Note: the precision of cesium clocks has improved log-linearly since the 1950s. They're 6 orders of magnitude better in 2008 than they were in
    1950. Who knows how much longer that will continue to be true?


    I don't think cesium is still the current standard for the highest
    precision atomic clocks. But anyway, the newest breakthrough is thorium nuclear clocks, which IIRC are 5 orders of magnitude more stable than
    cesium clocks. (And probably 5 orders of magnitude more expensive...)

    ... Same for any other less commonly
    used chosen epoch. The fact that the number 1970 is arbitrary
    is not a problem for software. In fact it's an advantage, since
    there's no uncertainty in the presence of any new information.

    I agree, which is why I identified that epoch as the one I preferred
    over both of those.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Richard Heathfield on Thu Apr 17 23:18:54 2025
    On Wed, 16 Apr 2025 23:13:58 +0100, Richard Heathfield wrote:

    1) No account is taken of the 11-day shift in September 1752.

    root@debian10:~ # ncal -s IT 10 1582
    October 1582
    Su 17 24 31
    Mo 1 18 25
    Tu 2 19 26
    We 3 20 27
    Th 4 21 28
    Fr 15 22 29
    Sa 16 23 30

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to BGB on Thu Apr 17 23:16:06 2025
    On Wed, 16 Apr 2025 16:10:41 -0500, BGB wrote:

    One shorthand is to assume a year is 365.25 days (31557600 seconds), and
    then base everything else off this ...

    That’s how the Julian calendar works. It’s why Orthodox Christians are now celebrating Christmas in January.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to bart on Fri Apr 18 02:39:19 2025
    On Mon, 7 Apr 2025 22:46:49 +0100, bart wrote:

    (In source code, it would also be useful to use 1e9 or 1e12,
    unfortunately those normally yield floating point values.

    Tried Python:

    >>> type(1e9)
    <class 'float'>
    >>> round(1e9)
    1000000000
    >>> round(1e12)
    1000000000000

    However:

    >>> round(1e24)
    999999999999999983222784

    So I tried:

    >>> import decimal
    >>> decimal.Decimal("1e24")
    Decimal('1E+24')
    >>> int(decimal.Decimal("1e24"))
    1000000000000000000000000

    which is more like it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Lawrence D'Oliveiro on Fri Apr 18 12:49:56 2025
    On 18/04/2025 03:39, Lawrence D'Oliveiro wrote:
    On Mon, 7 Apr 2025 22:46:49 +0100, bart wrote:

    (In source code, it would also be useful to use 1e9 or 1e12,
    unfortunately those normally yield floating point values.

    Tried Python:

    >>> type(1e9)
    <class 'float'>
    >>> round(1e9)
    1000000000
    >>> round(1e12)
    1000000000000

    However:

    >>> round(1e24)
    999999999999999983222784

    So I tried:

    >>> import decimal
    >>> decimal.Decimal("1e24")
    Decimal('1E+24')
    >>> int(decimal.Decimal("1e24"))
    1000000000000000000000000

    which is more like it.

    The idea behind writing 1e12 for example was for something that was
    compact, quick to type, and easy to grasp. This:

    int(decimal.Decimal("1e24"))

    seems to lack all of those. Besides:

    import decimal

    def fn(n):
        for i in range(100_000_000):
            a = int(decimal.Decimal(n))
        print(a)

    def fn2(n):
        for i in range(100_000_000):
            a = round(n)
        print(a)

    def fn3(n):
        for i in range(100_000_000):
            a = n
        print(a)

    fn("1e24") # 50 seconds (loop overheads excluded)
    fn2(1e15) # 13 seconds
    fn3(1e15) # 0.6 seconds

    You don't want it to be much slower either: it should not affect
    performance. (Timings for CPython.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to BGB on Fri Apr 18 20:03:48 2025
    On 15/04/2025 19:54, BGB wrote:
    On 4/15/2025 12:22 PM, David Brown wrote:
    On 15/04/2025 07:40, BGB wrote:
    On 4/14/2025 11:15 PM, Lawrence D'Oliveiro wrote:
    On Mon, 14 Apr 2025 19:43:04 -0500, BGB wrote:

    On 4/14/2025 5:33 PM, Lawrence D'Oliveiro wrote:

    I figured that it would be hard to find an epoch less arbitrary than
    the Big Bang ...

    But, we don't really need it.

    If so, could probably extend to 128 bits, maybe go to nanoseconds or
    picoseconds.

    The reason why I chose the Planck interval as the time unit is that
    quantum physics says that’s the smallest possible time interval that
    makes any physical sense. So there shouldn’t be any need to measure
    time more accurately than that.

    Quantum mechanics, the current theory, is not complete.  Physicists
    are aware of many limitations.  So while Planck time is the smallest
    meaningful time interval as far as we currently know, and we know of
    no reason to suspect that smaller times would be meaningful, it would
    be presumptuous to assume that we will never know of smaller time
    intervals.


    Practically, picoseconds are likely the smallest unit of time that
    people could practically measure or hope to make much use of.

    The fastest laser pulses so far are timed at 12 attosecond accuracies
    - 100,000 times as accurate as a picosecond.  Some subatomic particle
    lifetimes are measured in rontoseconds - 10 ^ -27 seconds.
    Picoseconds are certainly fast enough for most people, but certainly
    not remotely fast enough for high-speed or high-energy physics.


    While femtoseconds exist, given in that unit of time light can only
    travel a very short distance, and likely no practical clock could be
    built (for similar reasons), not worth bothering with (*).

    Physicists have measured times a thousand millionth of a femtosecond.
    It is not easy, of course, but not impossible.


    I am not saying that the smaller times don't exist, but that there is no point in wasting bits encoding times more accurate than can be used by a computer running at a few GHz, with clock speeds that will likely never exceed a few GHz.

    This sets the practical limit mostly in nanosecond territory.


    The datasheets of some of the components we use measure timings in
    picoseconds - things like inter-channel skew on memory devices,
    rise/fall times, or delays on FPGA pins and internal parts can be given
    as a small number of picoseconds.


    But, for many uses, even nanosecond is overkill. Like, even if a clock-
    cycle is less than 1ns, random things like L1 cache misses, etc, will
    throw in enough noise to make the lower end of the nanosecond range effectively unusable.

    And, things like context switches are more in the area of around a microsecond or so. So, the only way one is going to have controlled
    delays smaller than this is using delay-loops or NOP slides.

    But, also not much point in having clock times much smaller than what
    the CPU could effectively act on. And, program logic decisions are
    unlikely to be able to be much more accurate than around 100ns or so
    (say, several hundred clock cycles).


    I have parts of microcontroller designs reacting a lot faster than that.
    You use hardware, or at least hardware assistance, rather than pure
    software.

    ...

    You could express time as a 64-bit value in nanoseconds, and, it would
    roll over in a few centuries.
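    The "few centuries" figure checks out; a quick sketch of the arithmetic
    for a signed 64-bit nanosecond counter:

    ```c
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* how long before a signed 64-bit nanosecond count overflows */
        double span_s     = (double)INT64_MAX / 1e9;         /* seconds */
        double span_years = span_s / (365.25 * 86400.0);     /* Julian years */
        printf("%.1f years\n", span_years);
        return 0;
    }
    ```

    So a 64-bit signed nanosecond timestamp covers roughly +/- 292 years
    around its epoch.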


    Meanwhile, a microsecond is big enough for computers to effectively
    operate based on them, small enough to be accurate for most real-world
    tasks.


    Basically, all you are saying is that different timing resolution and
    ranges are needed for different things, and 64-bit would not cover it
    all. 128-bit could cover pretty much everything outside specialist
    physics use, and 64-bit with appropriate scale is fine for most purposes.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to BGB on Fri Apr 18 20:10:12 2025
    On 16/04/2025 06:48, BGB wrote:
    On 4/15/2025 5:56 PM, Lawrence D'Oliveiro wrote:
    On Tue, 15 Apr 2025 00:40:48 -0500, BGB wrote:

    Practically, picoseconds are likely the smallest unit of time that
    people could practically measure or hope to make much use of.

    “10⁻¹² seconds ought to be enough for anybody.”

    The lessons of software backward-compatibility baggage teach us that we
    need to think a bit beyond present-day technological limitations.


    In all likelihood, computers will not get much faster (in terms of clock speeds) than they are already.


    If things were able to get much faster (without melting) then more fundamental rethinking would be needed about how things work, as clock
    pulses could no longer be used for global synchronization, and (going further) an inability to pass signals through metal wires.


    Lots of communication goes several orders of magnitude faster than 3
    GHz. Fast processors, and especially switch chips, have clocks that are
    far too fast for global synchronisation. At 3 GHz, a clock signal can
    travel about 6 cm per clock pulse - you can't have a 3 GHz global clock
    on a big chip. This issue has been solved long ago, using transmission
    lines and appropriate buffered and registered communication between
    parts of chips.
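    The "about 6 cm" figure follows from on-chip signal propagation being
    well below c; the 0.6 factor below is an assumed, representative value
    for typical interconnect, not a measured one:

    ```c
    #include <stdio.h>

    int main(void)
    {
        double c = 2.998e8;   /* m/s, speed of light in vacuum */
        double v = 0.6 * c;   /* assumed on-chip propagation speed */
        double f = 3e9;       /* 3 GHz clock */

        /* distance a signal edge covers in one clock period, in cm */
        printf("%.1f cm per clock period\n", v / f * 100.0);
        return 0;
    }
    ```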

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to bart on Sat Apr 19 00:16:55 2025
    On Fri, 18 Apr 2025 12:49:56 +0100, bart wrote:

    The idea behind writing 1e12 for example was for something that was
    compact, quick to type, and easy to grasp. This:

    int(decimal.Decimal("1e24"))

    seems to lack all of those.

    I would agree with that. But remember, it can be abbreviated†:

    D = decimal.Decimal

    then you can just change the above expression to

    int(D("1e24"))

    I suppose this is why C++ has introduced user-defined literals. No doubt
    the Python folks will find some way to provide something similar at some
    point ...

    †Why is “abbreviated” such a long word?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to David Brown on Sat Apr 19 09:46:59 2025
    On 17.04.2025 17:56, David Brown wrote:
    On 16/04/2025 02:53, James Kuyper wrote:
    On 4/15/25 18:56, Keith Thompson wrote:
    ...
    The uncertainty in the timing of January 1, 1970, where 1970 is a
    year number in the current almost universally accepted Gregorian
    calendar, is essentially zero.

    Modern Cesium clocks are accurate to about 1 ns/day. That's an effect
    large enough that we can measure it, but cannot correct for it. We know
    that the clocks disagree with each other, but the closest we can do to
    correcting for that instability is to average over 450 different clocks;
    the average is 10 times more stable than the individual clocks.

    Note: the precision of cesium clocks has improved log-linearly since the
    1950s. They're 6 orders of magnitude better in 2008 than they were in
    1950. Who knows how much longer that will continue to be true?


    I don't think cesium is still the current standard for the highest
    precision atomic clocks.

    Well, the "Cesium _fountain_" atomic clocks are still amongst
    the most precise and they are in use in the world wide net of
    atomic clocks that are interconnected to measure TAI.[*] And
    the standard second is _defined_ on Caesium based transitions.

    But anyway, the newest breakthrough is thorium
    nuclear clocks, which IIRC are 5 orders of magnitude more stable than
    cesium clocks. (And probably 5 orders of magnitude more expensive...)

    I've not heard of Thorium based clocks. But I've heard of
    "optical clocks" that are developed to get more precise and
    more stable versions of atomic clock times.

    Janis

    [*] https://www.ptb.de/cms/index.php?eID=tx_cms_showpic&file=277826&md5=7fb5fb394664810269e3e2d5204bb50950e98b4c&parameters%5B0%5D=eyJ3aWR0aCI6IjkwMG0iLCJoZWlnaHQiOiI3MDBtIiwiYm9keVRhZyI6Ijxib2R5&parameters%5B1%5D=IHN0eWxlPVwibWFyZ2luOjA7IGJhY2tncm91bmQ6I2ZmZjtcIj4iLCJ3cmFwIjoi&parameters%5B2%5D=PGEgaHJlZj1cImphdmFzY3JpcHQ6Y2xvc2UoKTtcIj4gfCA8XC9hPiJ9

    Shorter link in German: https://www.ptb.de/cms/fileadmin/_processed_/csm_2022-08_TAI._deutsch_7a19aa286d.jpg

    [...]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Janis Papanagnou on Sat Apr 19 17:15:42 2025
    On 19/04/2025 09:46, Janis Papanagnou wrote:
    On 17.04.2025 17:56, David Brown wrote:
    On 16/04/2025 02:53, James Kuyper wrote:
    On 4/15/25 18:56, Keith Thompson wrote:
    ...
    The uncertainty in the timing of January 1, 1970, where 1970 is a
    year number in the current almost universally accepted Gregorian
    calendar, is essentially zero.

    Modern Cesium clocks are accurate to about 1 ns/day. That's an effect
    large enough that we can measure it, but cannot correct for it. We know
    that the clocks disagree with each other, but the closest we can do to
    correcting for that instability is to average over 450 different clocks;
    the average is 10 times more stable than the individual clocks.

    Note: the precision of cesium clocks has improved log-linearly since the
    1950s. They're 6 orders of magnitude better in 2008 than they were in
    1950. Who knows how much longer that will continue to be true?


    I don't think cesium is still the current standard for the highest
    precision atomic clocks.

    Well, the "Cesium _fountain_" atomic clocks are still amongst
    the most precise and they are in use in the world wide net of
    atomic clocks that are interconnected to measure TAI.[*] And
    the standard second is _defined_ on Caesium based transitions.


    Caesium fountain clocks are old school, but still used. Rubidium is
    popular because it is cheaper, and very high stability atomic clocks use aluminium or strontium. Caesium is still the basis for the current
    definition of the second, but that will change in the next decade or so
    as accuracy of timekeeping has moved well beyond the original caesium
    standard.

    But anyway, the newest breakthrough is thorium
    nuclear clocks, which IIRC are 5 orders of magnitude more stable than
    cesium clocks. (And probably 5 orders of magnitude more expensive...)

    I've not heard of Thorium based clocks. But I've heard of
    "optical clocks" that are developed to get more precise and
    more stable versions of atomic clock times.


    It was only last year that a good measurement of the resonant
    frequencies of the Thorium 229 nucleus was achieved - the science bit is
    done, now the engineering bit needs to be finished to get a practical
    nuclear clock.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to David Brown on Sat Apr 19 23:15:46 2025
    On Sat, 19 Apr 2025 17:15:42 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 19/04/2025 09:46, Janis Papanagnou wrote:
    On 17.04.2025 17:56, David Brown wrote:
    On 16/04/2025 02:53, James Kuyper wrote:
    On 4/15/25 18:56, Keith Thompson wrote:
    ...
    The uncertainty in the timing of January 1, 1970, where 1970 is a
    year number in the current almost universally accepted Gregorian
    calendar, is essentially zero.

    Modern Cesium clocks are accurate to about 1 ns/day. That's an
    effect large enough that we can measure it, but cannot correct
    for it. We know that the clocks disagree with each other, but the
    closest we can do to correcting for that instability is to
    average over 450 different clocks; the average is 10 times more
    stable than the individual clocks.

    Note: the precision of cesium clocks has improved log-linearly
    since the 1950s. They're 6 orders of magnitude better in 2008
    than they were in 1950. Who knows how much longer that will
    continue to be true?

    I don't think cesium is still the current standard for the highest
    precision atomic clocks.

    Well, the "Cesium _fountain_" atomic clocks are still amongst
    the most precise and they are in use in the world wide net of
    atomic clocks that are interconnected to measure TAI.[*] And
    the standard second is _defined_ on Caesium based transitions.


    Caesium fountain clocks are old school, but still used. Rubidium is
    popular because it is cheaper, and very high stability atomic clocks
    use aluminium or strontium. Caesium is still the basis for the
    current definition of the second, but that will change in the next
    decade or so as accuracy of timekeeping has moved well beyond the
    original caesium standard.

    But anyway, the newest breakthrough is thorium
    nuclear clocks, which IIRC are 5 orders of magnitude more stable
    than cesium clocks. (And probably 5 orders of magnitude more
    expensive...)

    I've not heard of Thorium based clocks. But I've heard of
    "optical clocks" that are developed to get more precise and
    more stable versions of atomic clock times.


    It was only last year that a good measurement of the resonant
    frequencies of the Thorium 229 nucleus was achieved - the science bit
    is done, now the engineering bit needs to be finished to get a
    practical nuclear clock.




    Record my prediction: it's not going to happen.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael S on Mon Apr 21 20:34:20 2025
    On 19/04/2025 22:15, Michael S wrote:
    On Sat, 19 Apr 2025 17:15:42 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 19/04/2025 09:46, Janis Papanagnou wrote:
    On 17.04.2025 17:56, David Brown wrote:

    But anyway, the newest breakthrough is thorium
    nuclear clocks, which IIRC are 5 orders of magnitude more stable
    than cesium clocks. (And probably 5 orders of magnitude more
    expensive...)

    I've not heard of Thorium based clocks. But I've heard of
    "optical clocks" that are developed to get more precise and
    more stable versions of atomic clock times.


    It was only last year that a good measurement of the resonant
    frequencies of the Thorium 229 nucleus was achieved - the science bit
    is done, now the engineering bit needs to be finished to get a
    practical nuclear clock.




    Record my prediction: it's not going to happen.



    I don't know enough about Thorium 229 nuclear resonances to be able to
    predict one way or the other. Do you have a good reason or reference
    for your thoughts here?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Keith Thompson on Tue Apr 22 01:07:27 2025
    On Mon, 21 Apr 2025 14:28:30 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    David Brown <david.brown@hesbynett.no> writes:
    [...]
    I don't know enough about Thorium 229 nuclear resonances to be able
    to predict one way or the other. Do you have a good reason or
    reference for your thoughts here?

    Can you PLEASE take this somewhere else? (Or drop it, I don't care.)

    Don't read anything into the fact that I replied to one particular participant in the thread.


    There are two types of usenet groups:
    - groups that suffer from a significant amount of OT discussions
    - dead

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to David Brown on Tue Apr 22 00:28:41 2025
    On Mon, 21 Apr 2025 20:34:20 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 19/04/2025 22:15, Michael S wrote:
    On Sat, 19 Apr 2025 17:15:42 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 19/04/2025 09:46, Janis Papanagnou wrote:
    On 17.04.2025 17:56, David Brown wrote:

    But anyway, the newest breakthrough is thorium
    nuclear clocks, which IIRC are 5 orders of magnitude more stable
    than cesium clocks. (And probably 5 orders of magnitude more
    expensive...)

    I've not heard of Thorium based clocks. But I've heard of
    "optical clocks" that are developed to get more precise and
    more stable versions of atomic clock times.


    It was only last year that a good measurement of the resonant
    frequencies of the Thorium 229 nucleus was achieved - the science
    bit is done, now the engineering bit needs to be finished to get a
    practical nuclear clock.




    Record my prediction: it's not going to happen.



    I don't know enough about Thorium 229 nuclear resonances to be able
    to predict one way or the other. Do you have a good reason or
    reference for your thoughts here?


    Michael's principle.
    If you don't know what it means, search comp.arch archives.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From candycanearter07@21:1/5 to Michael S on Tue Apr 22 19:30:03 2025
    Michael S <already5chosen@yahoo.com> wrote at 22:07 this Monday (GMT):
    On Mon, 21 Apr 2025 14:28:30 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    David Brown <david.brown@hesbynett.no> writes:
    [...]
    I don't know enough about Thorium 229 nuclear resonances to be able
    to predict one way or the other. Do you have a good reason or
    reference for your thoughts here?

    Can you PLEASE take this somewhere else? (Or drop it, I don't care.)

    Don't read anything into the fact that I replied to one particular
    participant in the thread.


    There are two types of usenet groups:
    - groups that suffer from a significant amount of OT discussions
    - dead


    From what I've seen, comp.ibm.pc.action and rec.arts.comics.creative
    have plenty of on topic threads. Not that I mind OT threads much.
    --
    user <candycane> is generated from /dev/urandom

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Michael S on Sun Apr 27 12:05:16 2025
    Michael S <already5chosen@yahoo.com> writes:

    On Wed, 02 Apr 2025 16:59:59 +1100
    Alexis <flexibeast@gmail.com> wrote:

    Thought people here might be interested in this image on Jens
    Gustedt's blog, which translates section 6.2.5, "Types", of the C23
    standard into a graph of inclusions:

    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/

    That's a little disappointing.
    IMHO, C23 should have added optional types _Binary32, _Binary64,
    _Binary128 and _Binary256 that designate their IEEE-754 namesakes.
    Plus, a mandatory requirement that if a compiler supports any of the IEEE-754 binary types then they have to be accessible by the above-mentioned names.

    I see where you're coming from, but I disagree with the suggested
    addition; it simultaneously does too much and not enough. If
    someone wants some capability along these lines, the first step
    should be to understand what the underlying need is, and then to
    figure out how to accommodate that need. The addition described
    above creates more problems than it solves.

  • From Richard Heathfield@21:1/5 to Lawrence D'Oliveiro on Mon Apr 28 07:52:10 2025
    On 18/04/2025 00:18, Lawrence D'Oliveiro wrote:
    On Wed, 16 Apr 2025 23:13:58 +0100, Richard Heathfield wrote:

    1) No account is taken of the 11-day shift in September 1752.

    root@debian10:~ # ncal -s IT 10 1582
        October 1582
    Su    17 24 31
    Mo  1 18 25
    Tu  2 19 26
    We  3 20 27
    Th  4 21 28
    Fr 15 22 29
    Sa 16 23 30


    $ ncal 9 1752
       September 1752
    Mo    18 25
    Tu  1 19 26
    We  2 20 27
    Th 14 21 28
    Fr 15 22 29
    Sa 16 23 30
    Su    17 24

    So what?

    The fact that ncal knows about the 11-day shift doesn't mean that
    the code I posted knows about it. In the code I posted, no
    account is taken of the 11-day shift in September 1752.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

  • From Michael S@21:1/5 to Tim Rentsch on Mon Apr 28 16:27:38 2025
    On Sun, 27 Apr 2025 12:05:16 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:

    On Wed, 02 Apr 2025 16:59:59 +1100
    Alexis <flexibeast@gmail.com> wrote:

    Thought people here might be interested in this image on Jens
    Gustedt's blog, which translates section 6.2.5, "Types", of the C23
    standard into a graph of inclusions:

    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/

    That's a little disappointing.
    IMHO, C23 should have added optional types _Binary32, _Binary64,
    _Binary128 and _Binary256 that designate their IEEE-754 namesakes.
    Plus, a mandatory requirement that if a compiler supports any of
    the IEEE-754 binary types then they have to be accessible by
    the above-mentioned names.

    I see where you're coming from,

    I suppose you know it because you followed my failed attempt to improve
    the speed and cross-platform consistency of gcc's IEEE binary128 arithmetic.
    Granted, in this case absence of common name for the type was much
    smaller obstacle than general indifference of gcc maintainers.
    So, yes, on the "producer" side the problem of absence of common name
    was annoying but could be regarded as minor.

    Apart from being a "producer", quite often I am on the other side,
    wearing a hat of consumer of extended precision types. When in this
    role, I feel that the relative weight of inconsistent type names is
    rather significant. I'd guess that it is even more significant for
    people whose work, unlike mine, is routinely multi-platform. I would
    not be surprised if for many of them it ends up as main reason to
    refrain completely from using IEEE binary128 in their software; even when
    it causes complications to their work and when the type is
    readily available, under different names, on all platforms they care
    about.

    but I disagree with the suggested
    addition; it simultaneously does too much and not enough. If
    someone wants some capability along these lines, the first step
    should be to understand what the underlying need is, and then to
    figure out how to accommodate that need. The addition described
    above creates more problems than it solves.

    IMHO, a need for a common name for IEEE binary128 has existed for quite
    some time. For IEEE binary256 the real need hasn't emerged yet, but it
    will, hopefully in the near future.

  • From Janis Papanagnou@21:1/5 to Scott Lurndal on Tue Apr 29 02:10:11 2025
    [ Just noticed this post while catching up in my backlog, so I'm not
    sure my questions/comments have already been addressed elsewhere. ]

    On 16.04.2025 22:04, Scott Lurndal wrote:
    [...]

    Back in the mainframe days, it was common to use julian dates
    as they were both concise (5 BCD digits/20 bits) and sortable.

    YYDDD

    If time was needed, it was seconds since midnight in a reference
    timezone.

    I don't quite understand the rationale behind all that said above.

    "YYDDD" was used without century information? How is that useful?
    (I assume it's just the popular laziness that later led to all the
    Y2k chaos activities.)

    And "seconds since midnight" were taken despite the Julian Dates
    have a day start at high noon (12:00)? [*]

    Janis

    [*] I recall that e.g. SunOS also had that wrong and assumed start at
    midnight. Folks generally don't seem to be aware of that difference.

  • From James Kuyper@21:1/5 to Janis Papanagnou on Mon Apr 28 23:34:53 2025
    On 4/28/25 20:10, Janis Papanagnou wrote:
    [ Just noticed this post while catching up in my backlog, so I'm not
    sure my questions/comments have already been addressed elsewhere. ]

    On 16.04.2025 22:04, Scott Lurndal wrote:
    [...]

    Back in the mainframe days, it was common to use julian dates
    as they were both concise (5 BCD digits/20 bits) and sortable.

    YYDDD

    If time was needed, it was seconds since midnight in a reference
    timezone.

    I don't quite understand the rationale behind all that said above.

    "YYDDD" was used without century information? How is that useful?
    (I assume it's just the popular laziness that later led to all the
    Y2k chaos activities.)

    And "seconds since midnight" were taken despite the Julian Dates
    have a day start at high noon (12:00)? [*]

    Strictly speaking, "Julian Day" is the number of days since Jan 01 4713
    BCE at Noon (a date that was chosen because it simplifies conversion
    between several different ancient calendar systems). It starts at Noon
    because it was devised for use by astronomers, who are generally awake
    at midnight and asleep at Noon (especially in ancient times).

    Informally speaking, "Julian Day" is commonly used to refer to any
    system for designating dates that include a "day of year" component, as
    does the above example. Most of these start at midnight, not Noon.

    There's no use being a purist about this (that would be my preference
    too) - the informal meaning is quite common, probably more common than
    the "correct" one.

  • From candycanearter07@21:1/5 to Janis Papanagnou on Tue Apr 29 05:10:05 2025
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> wrote at 00:10 this Tuesday (GMT):
    [ Just noticed this post while catching up in my backlog, so I'm not
    sure my questions/comments have already been addressed elsewhere. ]

    On 16.04.2025 22:04, Scott Lurndal wrote:
    [...]

    Back in the mainframe days, it was common to use julian dates
    as they were both concise (5 BCD digits/20 bits) and sortable.

    YYDDD

    If time was needed, it was seconds since midnight in a reference
    timezone.

    I don't quite understand the rationale behind all that said above.

    "YYDDD" was used without century information? How is that useful?
    (I assume it's just the popular laziness that later led to all the
    Y2k chaos activities.)
    [snip]


    I believe the current rule for software is to consider "39" the cutoff,
    ie 39 is considered 2039, and 40 is considered 1940. I agree though,
    removing the century is a bad idea for anything that is supposed to be
    kept for a length of time.
    --
    user <candycane> is generated from /dev/urandom

  • From James Kuyper@21:1/5 to All on Tue Apr 29 01:24:20 2025
    On 4/29/25 01:10, candycanearter07 wrote:
    ...
    I believe the current rule for software is to consider "39" the cutoff,
    ie 39 is considered 2039, and 40 is considered 1940. I agree though,
    removing the century is a bad idea for anything that is supposed to be
    kept for a length of time.

    I sincerely doubt that there is any unique current rule for interpreting
    two-digit year numbers - just a wide variety of different rules used by
    different people for different purposes. That's part of the reason why
    it's a bad idea to rely upon such rules.

  • From Janis Papanagnou@21:1/5 to Keith Thompson on Tue Apr 29 08:37:31 2025
    On 29.04.2025 04:20, Keith Thompson wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    [...]

    [*] I recall that e.g. SunOS also had that wrong and assumed start at
    midnight. Folks generally don't seem to be aware of that difference.

    I don't recall SunOS using any kind of Julian days/dates for anything
    at the system level, though some programs might. [...]

    I only faintly recall that it was either some screen lock displaying (also/optionally?) Julian Dates or a screen clock with that property.

    Janis

  • From Janis Papanagnou@21:1/5 to James Kuyper on Tue Apr 29 08:44:39 2025
    On 29.04.2025 05:34, James Kuyper wrote:
    On 4/28/25 20:10, Janis Papanagnou wrote:
    [...]

    And "seconds since midnight" were taken despite the Julian Dates
    have a day start at high noon (12:00)? [*]

    Strictly speaking, "Julian Day" is the number of days since Jan 01 4713
    BCE at Noon (a date that was chosen because it simplifies conversion
    between several different ancient calendar systems). It starts at Noon
    because it was devised for use by astronomers, who are generally awake
    at midnight and asleep at Noon (especially in ancient times).

    Informally speaking, "Julian Day" is commonly used to refer to any
    system for designating dates that include a "day of year" component, as
    does the above example. Most of these start at midnight, not Noon.

    There's no use being a purist about this (that would be my preference
    too) - the informal meaning is quite common, probably more common than
    the "correct" one.

    In my language domain we have terms like "Modified Julian Date"
    (MJD) that relate to the Julian Date according to the formula
    MJD = JD - 2'400'000.5
    But as Keith mentioned the terms seem not to be used consistently.

    Janis

  • From Scott Lurndal@21:1/5 to Keith Thompson on Tue Apr 29 13:14:55 2025
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    [ Just noticed this post while catching up in my backlog, so I'm not
    sure my questions/comments have already been addressed elsewhere. ]

    On 16.04.2025 22:04, Scott Lurndal wrote:
    [...]

    Back in the mainframe days, it was common to use julian dates
    as they were both concise (5 BCD digits/20 bits) and sortable.

    YYDDD

    If time was neeeded, it was seconds since midnight in a reference
    timezone.

    I don't quite understand the rationale behind all that said above.

    "YYDDD" was used without century information? How is that useful?
    (I assume it's just the popular laziness that later lead to all the
    Y2k chaos activities.)

    Yes, it was felt that saving storage (perhaps in the form of columns
    on punch cards) was more important than supporting dates after 1999.

    Especially important in the 60s and 70s.

    By 1985, we were already planning for Y2K at Burroughs. The
    solution settled upon was that two-digit years less than 60 referred
    to the 21st century, and the rest referred to the 20th.


    One relic of this is the tm_year member of struct tm in <time.h>,
    which holds number of years since 1900. It was (I'm fairly sure)
    originally just a 2-digit year number.

    And "seconds since midnight" were taken despite the Julian Dates
    have a day start at high noon (12:00)? [*]

    The Julian day number used by astronomers does start at noon,
    specifically at noon, Universal Time, Monday, January 1, 4713 BC
    in the proleptic Julian calendar. As I write this, the current
    Julian date is 2460794.591939746.

    Outside of astronomy, the word Julian is (mis)used for just about
    anything that counts days rather than months and days. A date
    expressed in the form YYDDD (or YYYYDDD) almost certainly
    refers to a calendar day, starting and ending at midnight in some
    time zone. See also the tm_yday member of struct tm, which counts
    days since January 1 of the specified year.

    Yes, that was the case for Burroughs. The MCP had a 'midnight'
    rollover function that updated the current date and reset the
    time to 0 (all stored in BCD).

    The unix style 'seconds since epoch' was much less fraught.

  • From Richard Heathfield@21:1/5 to BGB on Tue Apr 29 19:25:00 2025
    On 29/04/2025 19:02, BGB wrote:
    On 4/29/2025 12:24 AM, James Kuyper wrote:
    On 4/29/25 01:10, candycanearter07 wrote:
    ...
    I believe the current rule for software is to consider "39" the cutoff,
    ie 39 is considered 2039, and 40 is considered 1940. I agree though,
    removing the century is a bad idea for anything that is supposed to be
    kept for a length of time.

    I sincerely doubt that there is any unique current rule for
    interpreting two-digit year numbers - just a wide variety of different
    rules used by different people for different purposes. That's part of
    the reason why it's a bad idea to rely upon such rules.

    Could always argue for a compromise, say, 1 signed byte year.
      Say: 1872 to 2127, if origin is 2000.
    Could also be 2 digits if expressed in hexadecimal.

    Or, maybe 1612 BC to 5612 AD if the year were 2 digits in Base 85.
      Or, 48 BC to 4048 AD with Base 64.

    Or we could argue for any of a thousand other ideas for rules...
    and more besides. As a very wise man recently said: "That's part
    of the reason why it's a bad idea to rely upon such rules."


  • From Scott Lurndal@21:1/5 to BGB on Tue Apr 29 19:00:53 2025
    BGB <cr88192@gmail.com> writes:
    On 4/29/2025 12:24 AM, James Kuyper wrote:
    On 4/29/25 01:10, candycanearter07 wrote:
    ...
    I believe the current rule for software is to consider "39" the cutoff,
    ie 39 is considered 2039, and 40 is considered 1940. I agree though,
    removing the century is a bad idea for anything that is supposed to be
    kept for a length of time.

    I sincerely doubt that there is any unique current rule for interpreting
    two-digit year numbers - just a wide variety of different rules used by
    different people for different purposes. That's part of the reason why
    it's a bad idea to rely upon such rules.

    Could always argue for a compromise, say, 1 signed byte year.
    Say: 1872 to 2127, if origin is 2000.
    Could also be 2 digits if expressed in hexadecimal.

    All of which would be non-starters. Undigits (a-f)
    in a BCD value would cause traps in many cases, breaking
    existing applications.

    Not to mention that with BCD, hardware could create a
    printable value by simple prefixing each digit with the
    correct zone digit (0xF for EBCDIC, 0x3 for ASCII). That
    breaks when the BCD value has undigits.

    The Burroughs medium systems "MVN" and "MVA" instructions
    would automatically add/remove the zone digit based on
    each operands type (and fill or truncate when source and
    destination lengths differ).

    Then there were tricks like:

    01 MASTER-TBL. 104 CARD 1 17856
    05 MAROW OCCURS 126 TIMES. 105 CARD 1 17856
    10 MACOL PIC X OCCURS 126 TIMES. 106 CARD 1 17856


    1200-INITIALIZE-GALAXY. 460 CARD 1 61272
    MOVE SPACES TO MASTER-TBL. 461 CARD 1 61272
    01 061272 MVA 10B304 404040 217856 CODE
    01 061290 MVW 127937 217856 217860 CODE
                  ^      ^      ^
                  Insn   Operand1 Operand2

    Which moves a three-byte EBCDIC (404040) constant in the first operand
    field to the first word of the MASTER-TBL at address 17856, padding
    with a trailing blank. The next instruction (MVW - move 16-bit words)
    smears those four blank characters across the entire MASTER-TBL,
    basically initializing the entire table with blanks using an
    overlapping move (the length of the move is 7937 words, filling the
    entire table).

  • From Tim Rentsch@21:1/5 to Kaz Kylheku on Wed Apr 30 08:12:40 2025
    Kaz Kylheku <643-408-1753@kylheku.com> writes:

    On 2025-04-07, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    antispam@fricas.org (Waldek Hebisch) writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    [...]

    Not always practical. A good example is the type size_t. If a
    function takes an argument of type size_t, then the symbol size_t
    should be defined, no matter which header the function is being
    declared in.

    Why? One can use a type without a name for such type.

    Convenience and existing practice. Sure, an implementation of
    <string.h> could provide a declaration of memcpy() without making
    size_t visible, but what would be the point?

    There is a point to such a discipline; you get ultra squeaky clean
    modules whose header files define only their contribution to
    the program, and do not transitively reveal any of the identifiers
    from their dependencies.

    That's a circular argument. If a header is designed so it doesn't
    define type names like size_t, then #include'ing it won't define
    those names. It is equally true that if a header is designed so
    it does define such type names then #include'ing it will define
    those names.

    Incidentally, calling it "squeaky clean" is meaningless; just
    more circular reasoning.

    In large programs, this clean practice can help prevent
    clashes.

    That doesn't apply to headers defined by the ISO C standard,
    which is the topic under discussion.

    [...]

    Using memcpy as an example, it could be declared as

    void *memcpy(void * restrict d, const void * restrict s,
    __size_t size);

    size_t is not revealed, but a private type __size_t.

    To get __size_t, some private header is included <sys/priv_types.h>
    or whatever.

    The <stddef.h> header just includes that one and typedefs __size_t
    size_t (if it were to work that way).

    A system vendor which provides many API's and has the privilege of
    being able to use the __* space could do things like this.

    An implicit logical fallacy. Just because something /can/ be done
    doesn't mean it /should/ be done.

  • From Tim Rentsch@21:1/5 to Waldek Hebisch on Wed Apr 30 08:37:54 2025
    antispam@fricas.org (Waldek Hebisch) writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Kaz Kylheku <643-408-1753@kylheku.com> writes:

    [some symbols are defined in more than one header]

    (In my opinion, things would be better if headers were not allowed
    to behave as if they include other headers, or provide identifiers
    also given in other headers. Not in ISO C, and not in POSIX.
    Every identifier should be declared in exactly one home header,
    and no other header should provide that definition. [...])

    Not always practical. A good example is the type size_t. If a
    function takes an argument of type size_t, then the symbol size_t
    should be defined, no matter which header the function is being
    declared in.

    Why?

    Because, in many or most cases, when a particular interface is
    used, there will be a need to declare variables of a type used in the
    interface (either a parameter type or the return type), or to use
    such types in other ways (casting, for example). Not defining those
    type names makes more work for the client.

    One can use a type without a name for such type.

    What you mean is it's possible to produce values of such types
    without needing to use the name of the type. In some cases that's
    true, but often there are other needs that are not addressed by
    that possibility, as mentioned in the last paragraph. Furthermore
    it doesn't cover cases of derived types, such as size_t *.

    Similarly for NULL for any function that has defined
    behavior on some cases of arguments that include NULL.

    Why? There are many ways to produce null pointers.

    The macro NULL is used for purposes of documentation. Of course a
    simple 0 (perhaps casted to an appropriate type) could be used
    instead, but that defeats the purpose of using NULL.

    And the fact that
    a function has defined behavior for null pointers does not mean
    that users will need null pointers.

    No guarantee, but there is a reasonable likelihood. It's more
    convenient for the client if such things are always provided.

    No doubt
    there are other compelling examples.

    Do not look compelling at all.

    Clearly the people who wrote the ISO C standard feel differently.
    I agree with their judgment. And I haven't seen any argument
    that a different judgment should prevail, not counting the silly
    circular argument offered by another poster.

  • From Tim Rentsch@21:1/5 to Waldek Hebisch on Wed Apr 30 09:45:51 2025
    antispam@fricas.org (Waldek Hebisch) writes:

    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    antispam@fricas.org (Waldek Hebisch) writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    [...]

    Not always practical. A good example is the type size_t. If a
    function takes an argument of type size_t, then the symbol size_t
    should be defined, no matter which header the function is being
    declared in.

    Why? One can use a type without a name for such type.

    Convenience and existing practice. Sure, an implementation of
    <string.h> could provide a declaration of memcpy() without making
    size_t visible, but what would be the point?

    Cleanliness of definitions? Consistency?

    Convenience of clients of the interface: less work and very
    little downside.

    Existing practice: module systems have been around since the mid
    1970s. Typically a module interface also makes available names of
    all types used by functions in the interface.

    The fragment that you
    replaced by [...] contained a proposal:

    Every identifier should be declared in exactly one home header,
    and no other header should provide that definition.

    That would be pretty clean and consistent rule: if you need some
    standard symbol, then you should include corresponding header.

    That is a simple rule, but not the only simple rule. The C standard
    library consistently follows the simple rule that headers define
    names for all the types used by the interface, so just #include'ing
    the header means all types used by those interface functions are
    available. As far as clients are concerned, that rule is simpler.

    Tim claimed that this is not practical. Clearly the C standard
    changed previous practice about headers,

    I believe that's not true, but certainly it is not /clearly/ true.
    A lot of time passed between 1978, when K&R was published, and 1989,
    when the first C standard was ratified. No doubt the C standard
    unified different practices among many existing C implementations,
    but it is highly likely that some of them anticipated the rules that
    would be ratified in the C standard.

    so existing practice is _not_ a practical problem with adopting
    such a proposal.

    Common practice for module definitions, across many languages, is to
    make available names for the types used by interfaces that are
    supplied by the module.

    With current standard and practice one frequently needs symbols
    from several headers,

    Yes but there is a key difference. Different headers correspond to
    major areas of interface. It is natural, for example, to expect the
    functions of <stdio.h> and <string.h> to be in separate headers.
    That expectation does not extend to names like size_t, which plays a
    support role rather than a primary role.

    so "convenience" is also not a practical problem with such
    proposal.

    Nonsense. Having to do a separate #include for size_t would be a
    pain in the ass.

    People not interested in clean name space can
    define private "all.h" which includes all standard C headers
    and possibly other things that they need, so for them overhead
    is close to zero.

    This point of view strikes me as the tail wagging the dog. The ISO
    C standard made a decision to make a few very commonly used names
    available from several different headers. I'm not aware of any
    practical problems that have been caused by that decision, and I
    don't remember hearing anyone object to it until just very recently
    by just a few people. The idea that any name defined in the
    standard library should be available in only one header gives the
    impression of an intellectual suggestion untethered to any practical
    experience.

  • From Scott Lurndal@21:1/5 to Tim Rentsch on Wed Apr 30 17:41:21 2025
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    antispam@fricas.org (Waldek Hebisch) writes:


    I believe that's not true, but certainly it is not /clearly/ true.
    A lot of time passed between 1978, when K&R was published, and 1989,
    when the first C standard was ratified. No doubt the C standard
    unified different practices among many existing C implementations,
    but it is highly likely that some of them anticipated the rules that
    would be ratified in the C standard.

    Looking at SVID 3rd edition (1989), size_t did not yet exist, so in
    that particular case, there was no need to implicitly define
    it in any header file.

    For interfaces that require custom typedefs (for example, stat(2)),
    the SVID requires the programmer include <sys/types.h> before including <sys/stat.h>.

    When size_t was added there were existing interfaces where the
    argument was changed to require size_t/ssize_t. These interfaces
    did not, at the time, require the programmer to include <sys/types.h> or <stddef.h>
    in order to use the interface, for example in the SVID memory(BA_LIB)
    interface description, the programmer had been instructed that only
    <string.h> was required for the str* functions, and <memory.h> was
    required for the mem* functions - but the SVID noted at that time
    that the latter was deprecated - the pending ANSI C standard was
    to require only <string.h>.

    So, when the arguments of memcpy/strcpy were changed from int to
    size_t, they couldn't go back and require existing code to include
    e.g. <stddef.h> to get size_t; POSIX chose to note in the
    interface description that additional typedefs may be visible
    when <string.h> is included.

    "The <string.h> header shall define NULL and size_t as described in <stddef.h>."

  • From Lawrence D'Oliveiro@21:1/5 to Richard Heathfield on Wed Apr 30 23:57:56 2025
    On Mon, 28 Apr 2025 07:52:10 +0100, Richard Heathfield wrote:

    The fact that ncal knows about the 11-day shift ...

    Note the calendar listing I posted had a 10-day shift.

  • From Lawrence D'Oliveiro@21:1/5 to James Kuyper on Wed Apr 30 23:58:35 2025
    On Mon, 28 Apr 2025 23:34:53 -0400, James Kuyper wrote:

    Strictly speaking, "Julian Day" is the number of days since Jan 01 4713
    BCE at Noon ...

    In which time zone?

  • From Lawrence D'Oliveiro@21:1/5 to Muttley on Thu May 1 00:01:28 2025
    On Wed, 9 Apr 2025 09:01:34 -0000 (UTC), Muttley wrote:

    On Tue, 8 Apr 2025 20:53:45 +0300 Michael S <already5chosen@yahoo.com> wibbled:

    Content-Type: text/plain; charset=UTF-8
    Content-Transfer-Encoding: quoted-printable

    Any chance of using utf8 rather than whatever the hell encoding this is.

    I use Pan, which had no trouble making sense of that.

  • From Richard Heathfield@21:1/5 to Lawrence D'Oliveiro on Thu May 1 06:17:22 2025
    On 01/05/2025 00:57, Lawrence D'Oliveiro wrote:
    On Mon, 28 Apr 2025 07:52:10 +0100, Richard Heathfield wrote:

    The fact that ncal knows about the 11-day shift ...

    Note the calendar listing I posted had a 10-day shift.

    So it did. When you're right, you're right.


  • From Tim Rentsch@21:1/5 to Michael S on Mon May 5 16:40:15 2025
    Michael S <already5chosen@yahoo.com> writes:

    On Mon, 21 Apr 2025 14:28:30 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    David Brown <david.brown@hesbynett.no> writes:
    [...]

    I don't know enough about Thorium 229 nuclear resonances to be able
    to predict one way or the other. Do you have a good reason or
    reference for your thoughts here?

    Can you PLEASE take this somewhere else? (Or drop it, I don't care.)

    Don't read anything into the fact that I replied to one particular
    participant in the thread.

    There are two types of usenet groups:
    - groups that suffer from a significant amount of OT discussions
    - dead

    I'm not sure I agree with that dichotomy, but even if I did,
    there's a difference between an amount of OT traffic that is
    merely annoying and the overwhelming floods of OT traffic that
    take place far too often in comp.lang.c. Every reduction
    helps.

  • From Tim Rentsch@21:1/5 to Michael S on Mon May 5 16:25:57 2025
    Michael S <already5chosen@yahoo.com> writes:

    On Mon, 14 Apr 2025 01:59:24 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:

    On Wed, 09 Apr 2025 13:14:55 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    [may trailing commas in argument lists be accepted, or
    must they be rejected?]

    It is required in the sense that it is a syntax error,
    and syntax errors require a diagnostic.

    Trailing commas in argument lists and/or parameter lists
    could be accepted as an extension, even without giving a
    diagnostic as I read the C standard, but implementations
    are certainly within their rights to reject them.

    I have no doubts that implementations have full rights to reject
    them. The question was about the possibility of accepting them, and
    especially of accepting them without diagnostics.
    So, it seems, there is no consensus about it among few posters
    that read the relevant part of the standard.

    I don't think anyone should care about that. If there were any
    significant demand for allowing such trailing commas then someone
    would implement it, and people would use it even if in some
    technical sense it meant that an implementation supporting it
    would be nonconforming.

    Personally, I'd use this feature if it were standard.

    Me too.

    But if it were a non-standard feature supported by both gcc and
    clang, I would hesitate.

    Me too.

    Besides, the opinions of people posting
    in comp.lang.c carry zero weight; the only opinions that matter
    are those of people on the ISO C committee, and the major compiler
    writers, and none of those people bother posting here.

    My impression was that Philipp Klaus Krause, who posts here, if
    infrequently, is a member of WG14.

    Do you know if he is a member, or is he just an interested
    participant?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Tim Rentsch on Tue May 6 11:26:39 2025
    On Mon, 05 May 2025 16:25:57 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:


    My impression was that Philipp Klaus Krause, who posts here, if
    infrequently, is a member of WG14.

    Do you know if he is a member, or is he just an interested
    participant?


    He appears in the picture named "WG14 members attending the Strasbourg
    meeting in-person for the finalization of C23". To me that sounds like
    sufficient proof.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Michael S on Tue May 6 06:56:43 2025
    Michael S <already5chosen@yahoo.com> writes:

    On Mon, 14 Apr 2025 01:24:49 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    about where they may or may not be used. Do you really have a
    problem avoiding identifiers defined in this or that library
    header, either for all headers or just those headers required for
    freestanding implementations?

    I don't know. In order to know I'd have to include all
    standard headers into all of my C files

    Let me ask the question differently. Have you ever run into an
    actual problem due to inadvertent collision with a reserved
    identifier?

    But I would guess that for headers required for freestanding
    implementations I would have no problems.

    There are a few freestanding-required headers that I use often
    enough that for practical purposes they can be considered as
    always having been #include'd. Those headers are <limits.h>,
    <stdarg.h>, and <stddef.h>.

    For headers more generally, <stdio.h>, <stdlib.h>, and <string.h>
    are the most prevalent; there are cases where these headers have
    not been #include'd, but those cases are the exception rather than
    the rule. All other headers I #include only on an as-needed basis,
    although the threshold is higher for some than for others. A good
    example is <errno.h>. I try to limit those .c files where <errno.h>
    has been #include'd, because of the rule about preprocessor symbols
    starting with E (which I treat as ruling out any macro starting with
    the letter E, even though the rule in the C standard might be
    somewhat different). For comparison, I'm okay with the rule that
    function names that start str[a-z], mem[a-z], or wcs[a-z] should be
    avoided everywhere (because of <stdlib.h>, <string.h>, and <wchar.h>), and
    similarly for function names that start either is[a-z] or to[a-z]
    (because of <ctype.h>). A good resource for finding symbols to
    avoid is the section on Future library directions, which offers a
    nicely compact summary of the most significant reserved names.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Michael S on Tue May 6 15:06:50 2025
    Michael S <already5chosen@yahoo.com> writes:

    On Sun, 27 Apr 2025 12:05:16 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:

    On Wed, 02 Apr 2025 16:59:59 +1100
    Alexis <flexibeast@gmail.com> wrote:

    Thought people here might be interested in this image on Jens
    Gustedt's blog, which translates section 6.2.5, "Types", of the C23
    standard into a graph of inclusions:

    https://gustedt.wordpress.com/2025/03/29/a-diagram-of-c23-basic-types/
    That's a little disappointing.
    IMHO, C23 should have added optional types _Binary32, _Binary64,
    _Binary128 and _Binary256 that designate their IEEE-754 namesakes.
    Plus, a mandatory requirement that if compiler supports any of
    IEEE-754 binary types then they have to be accessible by
    above-mentioned names.

    I see where you're coming from,

    I suppose you know it because you followed my failed attempt to improve
    speed and cross-platform consistency of gcc IEEE binary128 arithmetic.

    Actually I didn't know about that. To me your posting upthread
    is enough to see where you're coming from (or at least I think
    it is).

    Granted, in this case the absence of a common name for the type was a
    much smaller obstacle than the general indifference of the gcc
    maintainers. So, yes, on the "producer" side the problem of the absence
    of a common name was annoying but could be regarded as minor.

    Apart from being a "producer", quite often I am on the other side,
    wearing a hat of consumer of extended precision types. When in this
    role, I feel that the relative weight of inconsistent type names is
    rather significant. I'd guess that it is even more significant for
    people whose work, unlike mine, is routinely multi-platform. I would
    not be surprised if for many of them it ends up as the main reason to
    refrain completely from using IEEE binary128 in their software, even
    when it causes complications to their work and when the type is
    readily available, under different names, on all platforms they care
    about.

    I acknowledge that people feel that there is a problem in need
    of being addressed. The question is not whether a problem
    exists but what exactly is the problem and how should it be
    addressed? For example, rather than tie a proposal to some
    future release of the ISO C standard, maybe the question should
    be addressed by Posix. It's hard to have a fruitful discussion
    about what the answer should be before people understand and
    agree what the problem is that needs to be addressed.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Scott Lurndal on Thu May 8 05:59:29 2025
    scott@slp53.sl.home (Scott Lurndal) writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:

    antispam@fricas.org (Waldek Hebisch) writes:


    I believe that's not true, but certainly it is not /clearly/ true.
    A lot of time passed between 1978, when K&R was published, and
    1989, when the first C standard was ratified. No doubt the C
    standard unified different practices among many existing C
    implementations, but it is highly likely that some of them
    anticipated the rules that would be ratified in the C standard.

    Looking at SVID 3rd edition (1989), size_t did not yet exist, so in
    that particular case, there was no need to implicitly define it in
    any header file.

    For interfaces that require custom typedefs (for example, stat(2)),
    the SVID requires the programmer include <sys/types.h> before
    including <sys/stat.h>.

    When size_t was added there were existing interfaces where the
    argument was changed to require size_t/ssize_t. These interfaces
    did not, at the time, require the programmer to include
    <sys/types.h> or <stddef.h> in order to use the interface, for
    example in the SVID memory(BA_LIB) interface description, the
    programmer had been instructed that only <string.h> was required
    for the str* functions, and <memory.h> was required for the mem*
    functions - but the SVID noted at that time that the latter was
    deprecated - the pending ANSI C standard was to require only
    <string.h>.

    So, when the arguments of memcpy/strcpy were changed from int to
    size_t, they couldn't go back and require existing code to include
    e.g. <stddef.h> to get size_t; POSIX chose to note in the
    interface description that additional typedefs may be visible
    when <string.h> is included.

    "The <string.h> header shall define NULL and size_t as described
    in <stddef.h>."

    Thank you for this report. I wasn't paying attention to any
    standardization efforts in that time frame; it's good to have
    some actual data to look at.

    After digging into the history, I have the impression that SVID
    was hoping to be a leader in defining standard interfaces (which
    I think included C proper but was not limited to that, which
    makes sense as there was also Unix to consider). So it isn't
    surprising that size_t did not appear in SVID until the ongoing
    ANSI effort completed, and the ANSI C standard was ratified. No
    doubt there were other organizations closer to the then-current
    C standardization effort (which had been going on for at least
    five years IIANM), and it seems likely that in some cases there
    were environments in use that anticipated the looming ANSI C
    standard.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Michael S on Thu May 8 06:08:03 2025
    Michael S <already5chosen@yahoo.com> writes:

    On Mon, 05 May 2025 16:25:57 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:

    My impression was that Philipp Klaus Krause that posts here, if
    infrequently, is a member of WG14.

    Do you know if he is a member, or is he just an interested
    participant?

    He appears in the picture named "WG14 members attending the Strasbourg meeting in-person for the finalization of C23". To me that sounds as sufficient proof.

    Thank you for the additional information. I want to be clear
    that it was not my intention to express doubt; I was just
    looking for clarification.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Tim Rentsch on Thu May 8 13:42:40 2025
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    scott@slp53.sl.home (Scott Lurndal) writes:


    After digging into the history, I have the impression that SVID
    was hoping to be a leader in defining standard interfaces (which

    SVID was AT&T's attempt to standardize unix interfaces. The last
    edition (third) was released in 1989, but earlier versions date
    to 1983.

    As to who, exactly, first proposed size_t, that I don't recall.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to Tim Rentsch on Thu May 8 14:09:43 2025
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
    Michael S <already5chosen@yahoo.com> writes:

    On Mon, 14 Apr 2025 01:24:49 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    about where they may or may not be used. Do you really have a
    problem avoiding identifiers defined in this or that library
    header, either for all headers or just those headers required for
    freestanding implementations?

    I don't know. In order to know I'd have to include all
    standard headers into all of my C files

    Let me ask the question differently. Have you ever run into an
    actual problem due to inadvertent collision with a reserved
    identifier?

    Not in my own code. But I remember an old piece of code whose
    author apparently thought that 'inline' is a perfect name for
    input line. A few days ago I had trouble compiling with gcc-15
    code which declares its own 'bool' type. The code is supposed to
    compile using a wide range of compilers, so I am still looking
    for "best" solution.

    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Waldek Hebisch on Thu May 8 16:52:47 2025
    On 08/05/2025 16:09, Waldek Hebisch wrote:
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:
    Michael S <already5chosen@yahoo.com> writes:

    On Mon, 14 Apr 2025 01:24:49 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    about where they may or may not be used. Do you really have a
    problem avoiding identifiers defined in this or that library
    header, either for all headers or just those headers required for
    freestanding implementations?

    I don't know. In order to know I'd have to include all
    standard headers into all of my C files

    Let me ask the question differently. Have you ever run into an
    actual problem due to inadvertent collision with a reserved
    identifier?

    Not in my own code. But I remember an old piece of code whose
    author apparently thought that 'inline' is a perfect name for
    input line. A few days ago I had trouble compiling with gcc-15
    code which declares its own 'bool' type. The code is supposed to
    compile using a wide range of compilers, so I am still looking
    for "best" solution.


    gcc changed the default standard to "gnu23" in gcc15, from "gnu17" in
    earlier versions. Their policy is that unless you specify the standard
    explicitly, you get the latest C standard that is well-supported by gcc,
    along with the common gcc extensions. (New ISO C standards supersede
    older ones - thus "C" means "C23" at the moment.)

    And in C23, "bool", "true" and "false" are now keywords. The boolean
    type was renamed "bool" with "_Bool" being an alias for compatibility.
    C23 has more backwards incompatible changes than usual for C standards
    since C99. (Personally, I am happy to see such changes - people have
    had a generation to get used to C booleans - but I fully understand that
    does not apply to everyone or all code.)

    The "best" solution, IMHO, is that you should never use a C compiler
    for serious work without specifying the standard you are using. (And if the
    C compiler doesn't let you choose the standard, consider whether it is
    actually a suitable tool for serious work.) If you decide that you want
    C23, with or without gcc extensions, you will need to fix that ancient
    code to remove or rename its home-made "bool" type. If you want to keep
    the code unchanged, pick an appropriate older C standard. (You will
    also want to do that if you want to be able to compile the code with
    older gcc versions or other compilers that don't yet support C23.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Keith Thompson on Thu May 8 08:37:59 2025
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:

    Michael S <already5chosen@yahoo.com> writes:

    On Mon, 14 Apr 2025 01:24:49 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    about where they may or may not be used. Do you really have a
    problem avoiding identifiers defined in this or that library
    header, either for all headers or just those headers required for
    freestanding implementations?

    I don't know. In order to know I'd have to include all
    standard headers into all of my C files

    Let me ask the question differently. Have you ever run into an
    actual problem due to inadvertent collision with a reserved
    identifier?

    I'm not Michael, but I was once mildly inconvenienced because I
    defined a logging function called log(). The solution was trivial:
    I changed the name.

    Yes, I expect I have run into similar situations. What I was
    wondering about were problems where either the existence of
    the problem or what to do to fix it needed more than a minimal
    effort.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Scott Lurndal on Thu May 8 08:33:15 2025
    scott@slp53.sl.home (Scott Lurndal) writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:

    scott@slp53.sl.home (Scott Lurndal) writes:
    [...]
    After digging into the history, I have the impression that SVID
    was hoping to be a leader in defining standard interfaces (which

    SVID was AT&T's attempt to standardize unix interfaces. The last
    edition (third) was released in 1989, but earlier versions date
    to 1983.

    As to who, exactly, first proposed size_t, that I don't recall.

    I've looked but haven't found any information on that.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Tim Rentsch on Thu May 8 15:48:07 2025
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:

    Michael S <already5chosen@yahoo.com> writes:

    On Mon, 14 Apr 2025 01:24:49 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    about where they may or may not be used. Do you really have a
    problem avoiding identifiers defined in this or that library
    header, either for all headers or just those headers required for
    freestanding implementations?

    I don't know. In order to know I'd have to include all
    standard headers into all of my C files

    Let me ask the question differently. Have you ever run into an
    actual problem due to inadvertent collision with a reserved
    identifier?

    I'm not Michael, but I was once mildly inconvenienced because I
    defined a logging function called log(). The solution was trivial:
    I changed the name.

    Yes, I expect I have run into similar situations. What I was
    wondering about were problems where either the existence of
    the problem or what to do to fix it needed more than a minimal
    effort.

    I recall running into issues using variables named 'index'
    when porting code to SVR4 when the BSD compatibility layer
    was present.

    https://man.freebsd.org/cgi/man.cgi?query=index&sektion=3

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Waldek Hebisch on Thu May 8 08:49:58 2025
    antispam@fricas.org (Waldek Hebisch) writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:

    On Mon, 14 Apr 2025 01:24:49 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    about where they may or may not be used. Do you really have a
    problem avoiding identifiers defined in this or that library
    header, either for all headers or just those headers required for
    freestanding implementations?

    I don't know. In order to know I'd have to include all
    standard headers into all of my C files

    Let me ask the question differently. Have you ever run into an
    actual problem due to inadvertent collision with a reserved
    identifier?

    Not in my own code. But I remember an old piece of code whose
    author apparently thought that 'inline' is a perfect name for
    input line.

    Yeah. That falls into a different category, because 'inline' is a
    keyword rather than being defined in a standard header. In any case
    the problem is essentially impossible to miss, and straightforward
    to fix.

    A few days ago I had trouble compiling with gcc-15
    code which declares its own 'bool' type. The code is supposed to
    compile using a wide range of compilers, so I am still looking
    for "best" solution.

    When I had to deal with a similar problem in the past, my
    approach was something along these lines (obviously more
    cases can be added if needed to deal with unusual compilers
    or compilation options):


    #if defined bool
    #undef bool
    #endif

    #if !defined __STDC_VERSION__ || __STDC_VERSION__ < 199901L
    typedef unsigned char bool;

    #elif __STDC_VERSION__ < 201112L
    typedef _Bool bool;

    #elif __STDC_VERSION__ < 201710L
    typedef _Bool bool;

    #elif __STDC_VERSION__ < 202300L
    typedef _Bool bool;

    #else
    /* 'bool' is keyword in C23+ ... */

    #endif

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Scott Lurndal on Thu May 8 16:32:10 2025
    On 2025-05-08, Scott Lurndal <scott@slp53.sl.home> wrote:
    I recall running into issues using variables named 'index'
    when porting code to SVR4 when the BSD compatibility layer
    was present.

    I've also run into issues with clashes with BSD-specific
    functions on those systems.

    It's because the BSD people refuse to understand how feature selection
    macros are supposed to work.

    In BSD toolchains, if you don't specify any -D_WHATEVER=BLAH feature
    selection macro, then all identifiers are visible.

    When features are specified, they *restrict* visibility.

    When you specify -D_FOO and -D_BAR, you get the *intersection*
    of FOO and BAR.

    That includes compiler dialect selection options. Under BSDs,
    if you specify, say, "-ansi" on your command line, you are
    not only getting a certain dialect from the compiler, but
    the BSD header files like <stdio.h> will hide POSIX functions
    like fdopen, even if you have a POSIX macro like -D_POSIX_SOURCE.
    The intersection of ANSI and POSIX does not contain fdopen.

    So what ends up happening on BSDs is that you end up revealing
    more identifiers than you need, which exposes clashes.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Scott Lurndal on Thu May 8 22:53:42 2025
    scott@slp53.sl.home (Scott Lurndal) writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:

    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:

    Michael S <already5chosen@yahoo.com> writes:

    On Mon, 14 Apr 2025 01:24:49 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    about where they may or may not be used. Do you really have a
    problem avoiding identifiers defined in this or that library
    header, either for all headers or just those headers required
    for freestanding implementations?

    I don't know. In order to know I'd have to include all
    standard headers into all of my C files

    Let me ask the question differently. Have you ever run into an
    actual problem due to inadvertent collision with a reserved
    identifier?

    I'm not Michael, but I was once mildly inconvenienced because I
    defined a logging function called log(). The solution was
    trivial: I changed the name.

    Yes, I expect I have run into similar situations. What I was
    wondering about were problems where either the existence of the
    problem or what to do to fix it needed more than a minimal
    effort.

    I recall running into issues using variables named 'index'
    when porting code to SVR4 when the BSD compatibility layer
    was present.

    https://man.freebsd.org/cgi/man.cgi?query=index&sektion=3

    I understand how that might be annoying. Did you have
    any trouble either discovering what the problem was or
    fixing it once you did, or both? I presume there was
    no difficulty in knowing that there /was/ a problem; is
    that not the case?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Thu Jun 26 09:01:20 2025
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:

    IMHO, a need for a common name for IEEE binary128 exists for quite some
    time. For IEEE binary256 the real need didn't emerge yet. But it will
    emerge in the hopefully near future.

    A thought: the main advantage of binary types over decimal is supposed
    to be speed. Once you get up to larger precisions like that, the speed
    advantage becomes less clear, particularly since hardware support
    doesn't seem forthcoming any time soon. There are already
    variable-precision decimal floating-point libraries available. And with
    such calculations, C no longer offers a great performance advantage
    over a higher-level language, so you might as well use the higher-level
    language.

    <https://docs.python.org/3/library/decimal.html>

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to Lawrence D'Oliveiro on Thu Jun 26 12:51:19 2025
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:

    IMHO, a need for a common name for IEEE binary128 exists for quite some
    time. For IEEE binary256 the real need didn't emerge yet. But it will
    emerge in the hopefully near future.

    A thought: the main advantage of binary types over decimal is supposed to
    be speed. Once you get up to larger precisions like that, the speed
    advantage becomes less clear, particularly since hardware support doesn’t seem forthcoming any time soon. There are already variable-precision
    decimal floating-point libraries available. And with such calculations, C
    no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>

    When working with such (low for me) precisions, dynamic allocation
    of memory is a major cost item, frequently more important than the
    calculation itself. To avoid this cost one needs stack allocation.
    That is one reason to make such types built-in: in that case the
    compiler presumably knows about them and can better manage
    allocation and copies.

    Also, when using a binary underlying representation, decimal rounding
    is much more expensive than binary rounding, so with such a
    representation the cost of decimal computation is significantly
    higher. And without hardware support, making the representation
    decimal makes computations at all sizes much more expensive.

    Floating point computations are naturally approximate. In most
    cases the exact details of rounding do not matter much. It is
    basically that with the round-to-even rule one gets somewhat better
    error propagation, and people want a fixed rule to get reproducible
    results. But insisting on decimal rounding is normally not needed.
    To put it differently, decimal floating point is a marketing stunt
    by IBM. Bored coders may write decimal software libraries for
    various languages, but that does not mean such libraries make much
    sense.

    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to Lawrence D'Oliveiro on Thu Jun 26 13:27:29 2025
    On 26/06/2025 10:01, Lawrence D'Oliveiro wrote:
    C
    no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    Nothing is stopping you, but then comp.lang.c no longer offers
    you the facility to discuss your chosen language, so you might as
    well use the higher-level language's group.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Lawrence D'Oliveiro on Thu Jun 26 15:57:04 2025
    On 26/06/2025 11:01, Lawrence D'Oliveiro wrote:
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:

    IMHO, a need for a common name for IEEE binary128 exists for quite some
    time. For IEEE binary256 the real need didn't emerge yet. But it will
    emerge in the hopefully near future.

    A thought: the main advantage of binary types over decimal is supposed to
    be speed. Once you get up to larger precisions like that, the speed
    advantage becomes less clear, particularly since hardware support doesn’t seem forthcoming any time soon. There are already variable-precision
    decimal floating-point libraries available. And with such calculations, C
    no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>

    That is certainly a valid viewpoint. Much of this depends on what you
    are doing, how big the types are, what you are doing with them, how much
    of your code is calculations, and what other things you are doing.

    If you are doing lots of calculations with big numbers of various sizes,
    then Python code using numpy will often be faster than writing C code
    directly - you can concentrate on writing better algorithms instead of
    all the low-level memory management and bureaucracy you have in a lower
    level language. (Of course the hard work in libraries like numpy is
    done in code written in C, Fortran, C++, or other low-level languages.)

    But if you are using a type that is small enough to fit sensibly on the
    stack, and to have a fixed size (rather than arbitrary sized number
    types), then it is likely to be more efficient to define them as structs
    in C and use them directly. Depending on what you are doing with them,
    you might be better off using decimal-based types rather than
    binary-based types. (And at the risk of incurring Richard's wrath, I
    would suggest C++ is an even better language choice in such cases.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to David Brown on Thu Jun 26 16:10:55 2025
    On 26/06/2025 14:57, David Brown wrote:
    (And at the risk of incurring Richard's wrath, I would suggest
    C++ is an even better language choice in such cases.)

    As you know, David, I hate to agree with you...

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    ...but operator overloading for the win.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Keith Thompson on Thu Jun 26 23:59:16 2025
    On Thu, 26 Jun 2025 12:31:32 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:
    IMHO, a need for a common name for IEEE binary128 exists for quite
    some time. For IEEE binary256 the real need didn't emerge yet. But
    it will emerge in the hopefully near future.

    A thought: the main advantage of binary types over decimal is
    supposed to be speed. Once you get up to larger precisions like
    that, the speed advantage becomes less clear, particularly since
    hardware support doesn’t seem forthcoming any time soon. There are already variable-precision decimal floating-point libraries
    available. And with such calculations, C no longer offers a great performance advantage over a higher-level language, so you might as
    well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>

    I think there's an implicit assumption that, all else being equal,
    decimal is better than binary. That's true in some contexts,
    but not in all.


    My implicit assumption is that, other things being equal, binary is
    better than anything else because it has the lowest variation in the
    ULP-to-value ratio.
    The fact that, other things being equal, binary fp also tends to be
    faster is a nice secondary advantage. For example, it is easy to
    imagine hardware that implements S/360-style hex floating point as
    fast as or a little faster than binary fp, but its numeric properties
    are much worse than those of sane implementations of binary fp.

    Of course, historically there existed bad implementations of binary fp
    as weel, most notably on many CDC machines. But by now they are dead
    for eons.

    If you're performing calculations on physical quantities, decimal
    probably has no particular advantages, and binary is likely to be
    more efficient in both time and space.

    The advantages of decimal show up if you're formatting a *lot*
    of numbers in human-readable form (but nobody has time to read a
    billion numbers), or if you're working with money. But for financial calculations, particularly compound interest, there are likely to
    be precise regulations about how to round results. A given decimal floating-point format might or might not satisfy those regulations.


    Exactly.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Michael S on Thu Jun 26 21:09:37 2025
    Michael S <already5chosen@yahoo.com> writes:
    On Thu, 26 Jun 2025 12:31:32 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:
    IMHO, a need for a common name for IEEE binary128 exists for quite
    some time. For IEEE binary256 the real need didn't emerge yet. But
    it will emerge in the hopefully near future.

    A thought: the main advantage of binary types over decimal is
    supposed to be speed. Once you get up to larger precisions like
    that, the speed advantage becomes less clear, particularly since
    hardware support doesn’t seem forthcoming any time soon. There are
    already variable-precision decimal floating-point libraries
    available. And with such calculations, C no longer offers a great
    performance advantage over a higher-level language, so you might as
    well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>

    I think there's an implicit assumption that, all else being equal,
    decimal is better than binary. That's true in some contexts,
    but not in all.

    My implicit assumption is that other things being equal binary is
    better than anything else because it has the lowest variation in ULP to
    value ratio.
    The fact that other things being equal binary fp also tends to be
    faster is a nice secondary advantage. For example, it is easy to
    imagine hardware that implements S/360 style hex floating point as fast
    or a little faster than binary fp, but the numeric properties of it are
    much worse than sane implementations of binary fp.

    But not all decimal floating point implementations used "hex floating point".

    Burroughs medium systems had BCD floating point - one of the advantages
    was that it could exactly represent any floating point number that
    could be specified with a 100 digit mantissa and a 2 digit exponent.

    This was a memory-to-memory architecture, so no floating point registers
    to worry about.

    For financial calculations, a fixed point format (up to 100 digits) was
    used. Using an implicit decimal point, rounding was a matter of where
    the implicit decimal point was located in the up to 100 digit field;
    so do your calculations in mills and truncate the result field to the
    desired precision.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Waldek Hebisch on Thu Jun 26 23:58:39 2025
    On Thu, 26 Jun 2025 12:51:19 -0000 (UTC), Waldek Hebisch wrote:

    When working with such (low for me) precisions dynamic allocation of
    memory is major cost item, frequently more important than calculation.
    To avoid this cost one needs stack allocation.

    What you may not realize is that, on current machines, there is about a
    100:1 speed difference between accessing CPU registers and accessing main memory.

    Whether that main memory access is doing “stack allocation” or “heap allocation” is going to make very little difference to this.

    Also, when using binary underlying representation decimal rounding
    is much more expensive than binary one, so with such representation
    cost of decimal computation is significantly higher.

    This may take more computation, but if the calculation time is dominated
    by memory access time to all those digits, how much difference is that
    going to make, really?

    Floating point computations naturally are approximate. In most cases
    exact details of rounding do not matter much.

    It often surprises you when they do. That’s why a handy rule of thumb is
    to test your calculation with all four IEEE 754 rounding modes, to ensure
    that the variation in the result remains minor. If it doesn’t ... then
    watch out.
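    That rule of thumb is easy to try in C via <fenv.h>; a minimal sketch
    (assumes the implementation defines the four standard rounding-mode
    macros; strictly, `#pragma STDC FENV_ACCESS ON` should also be in effect):

    ```c
    #include <assert.h>
    #include <fenv.h>
    #include <stdio.h>

    /* Run the same accumulation under a chosen IEEE 754 rounding mode. */
    static double sum_tenths(int mode)
    {
        int saved = fegetround();
        fesetround(mode);
        volatile double x = 0.0;       /* volatile: keep the adds at run time */
        for (int i = 0; i < 1000; i++)
            x += 0.1;                  /* each addition rounds per current mode */
        fesetround(saved);
        return x;
    }

    int main(void)
    {
        const int   modes[] = { FE_TONEAREST, FE_UPWARD, FE_DOWNWARD, FE_TOWARDZERO };
        const char *names[] = { "to-nearest", "upward", "downward", "toward-zero" };
        for (int i = 0; i < 4; i++)
            printf("%-12s %.17g\n", names[i], sum_tenths(modes[i]));

        /* If the spread across the modes were large, the calculation
           would be suspect. */
        double spread = sum_tenths(FE_UPWARD) - sum_tenths(FE_DOWNWARD);
        assert(spread >= 0.0 && spread < 1e-9);
        return 0;
    }
    ```

    Upward and downward rounding bracket the exact result; a wide bracket
    is the "watch out" signal.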

    To put it differently, decimal floating point is a marketing stunt by
    IBM.

    Not sure IBM has any marketing power left to inflict their own ideas on
    the computing industry. Decimal calculations just make sense because the results are less surprising to normal people.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Richard Heathfield on Fri Jun 27 00:39:29 2025
    On Thu, 26 Jun 2025 13:27:29 +0100, Richard Heathfield wrote:

    On 26/06/2025 10:01, Lawrence D'Oliveiro wrote:

    C no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    Nothing is stopping you, but then comp.lang.c no longer offers you the facility to discuss your chosen language, so you might as well use the higher-level language's group.

    Or conversely, if C is going to become more suitable for such high-
    precision calculations, it might need to become more Python-like.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to Lawrence D'Oliveiro on Fri Jun 27 02:40:58 2025
    On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:
    [...]if C is going to become more suitable for such high-
    precision calculations, it might need to become more Python-like.

    C is not in search of a reason to exist. If you want Python, you
    know where to find it.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Keith Thompson on Fri Jun 27 04:33:00 2025
    On 27.06.2025 02:10, Keith Thompson wrote:
    scott@slp53.sl.home (Scott Lurndal) writes:
    [...]
    But not all decimal floating point implementations used "hex floating point".

    Burroughs medium systems had BCD floating point - one of the advantages
    was that it could exactly represent any floating point number that
    could be specified with a 100 digit mantissa and a 2 digit exponent.

    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)

    That's a problem of where your numbers stem from. "1/3" is a formula!
    A standard representation for a number may be "0.33" or "0.33333333",
    defined through the human-machine interface as text and representable
    (as depicted) as an "exact number". The result of the formula "1/3"
    isn't representable as a finite decimal string. With a binary
    representation even a _finite_ [decimal] string might not be exactly
    representable in some cases; I've tried with 0.1 (for example). The
    fixed-point decimal representation handles that exactly, but binary
    does not. I think that is why languages supporting decimal encoding
    have been prevalent especially in the financial sector. (I don't
    know about contemporary financial systems.)
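    The 0.1 case can be demonstrated in a few lines of C (assuming an
    ordinary IEEE 754 'double'; the printed digits may vary on exotic
    hardware):

    ```c
    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        /* 0.1 has no finite binary expansion, so the stored double is only
           the nearest representable value. */
        double d = 0.1;
        printf("%.20f\n", d);       /* 0.10000000000000000555 on IEEE systems */

        assert(d + d + d != 0.3);   /* the rounding error is observable */

        /* Decimal fixed point (integer cents) has no such problem. */
        long long cents = 10;
        assert(cents + cents + cents == 30);
        return 0;
    }
    ```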

    Janis

    [...]

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to Lawrence D'Oliveiro on Fri Jun 27 03:51:21 2025
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Thu, 26 Jun 2025 12:51:19 -0000 (UTC), Waldek Hebisch wrote:

    When working with such (low for me) precisions dynamic allocation of
    memory is major cost item, frequently more important than calculation.
    To avoid this cost one needs stack allocation.

    What you may not realize is that, on current machines, there is about a
    100:1 speed difference between accessing CPU registers and accessing main memory.

    Whether that main memory access is doing “stack allocation” or “heap allocation” is going to make very little difference to this.

    Did you measure things? CPU has caches and cache friendly code
    makes a difference. Avoiding dynamic allocation helps, that is
    measurable. Rational explanation is that stack allocated things
    do not move and have close to zero cost to manage. Moving stuff
    leads to cache misses.

    Also, when using binary underlying representation decimal rounding
    is much more expensive than binary one, so with such representation
    cost of decimal computation is significantly higher.

    This may take more computation, but if the calculation time is dominated
    by memory access time to all those digits, how much difference is that
    going to make, really?

    It makes a lot of difference for cache friendly code.

    Floating point computations naturally are approximate. In most cases
    exact details of rounding do not matter much.

    It often surprises you when they do. That’s why a handy rule of thumb is
    to test your calculation with all four IEEE 754 rounding modes, to ensure that the variation in the result remains minor. If it doesn’t ... then watch out.

    To put it differently, decimal floating point is a marketing stunt by
    IBM.

    Not sure IBM has any marketing power left to inflict their own ideas on
    the computing industry. Decimal calculations just make sense because the results are less surprising to normal people.

    Intelligent people quickly realise that floating point arithmetic
    produces approximate results. With binary this realisation is
    slightly faster, which is a plus for binary. Once you realise that
    you should expect approximate results, cases when the result happens
    to be exact are surprising.
    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Waldek Hebisch on Fri Jun 27 13:44:25 2025
    On 27/06/2025 05:51, Waldek Hebisch wrote:
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Thu, 26 Jun 2025 12:51:19 -0000 (UTC), Waldek Hebisch wrote:

    When working with such (low for me) precisions dynamic allocation of
    memory is major cost item, frequently more important than calculation.
    To avoid this cost one needs stack allocatation.

    What you may not realize is that, on current machines, there is about a
    100:1 speed difference between accessing CPU registers and accessing main
    memory.

    Whether that main memory access is doing “stack allocation” or “heap allocation” is going to make very little difference to this.

    Did you measure things? CPU has caches and cache friendly code
    makes a difference. Avoiding dynamic allocation helps, that is
    measurable. Rational explanation is that stack allocated things
    do not move and have close to zero cost to manage. Moving stuff
    leads to cache misses.


    Yes. Main memory accesses are slow - access to memory in caches is a
    lot less slow, but still slower than registers. If you need to use
    dynamic memory, the allocator will have to access a lot of different
    memory locations to figure out where to allocate the memory. Most of
    those will be in cache (assuming you are doing a lot of dynamic
    allocations), but some might not be. And the memory you allocate in the
    end might force more cache allocations and deallocations.

    Stack space (near the top of the stack), on the other hand, is almost
    always in caches. So it is faster to access memory on the stack, as
    well as using far fewer instructions.

    You are of course correct to say that speeds need to be measured, but
    you are also correct that in general, stack data can be significantly
    more efficient than dynamic memory data - especially if that data is short-lived.
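    The difference David describes can be sketched in C (a hypothetical
    micro-benchmark, not a rigorous one; absolute numbers vary with the
    machine, allocator, and optimization level):

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    enum { N = 256, ITERS = 100000 };

    /* The same work regardless of where the buffer lives. */
    static unsigned long work(unsigned char *buf)
    {
        memset(buf, 1, N);
        unsigned long sum = 0;
        for (int i = 0; i < N; i++)
            sum += buf[i];
        return sum;
    }

    int main(void)
    {
        unsigned long s1 = 0, s2 = 0;

        clock_t t0 = clock();
        for (int i = 0; i < ITERS; i++) {
            unsigned char buf[N];           /* stack: no bookkeeping, stays hot in cache */
            s1 += work(buf);
        }
        clock_t t1 = clock();
        for (int i = 0; i < ITERS; i++) {
            unsigned char *buf = malloc(N); /* heap: allocator work every iteration */
            if (!buf) return 1;
            s2 += work(buf);
            free(buf);
        }
        clock_t t2 = clock();

        assert(s1 == s2);                   /* identical results either way */
        printf("stack: %ld ticks, heap: %ld ticks\n",
               (long)(t1 - t0), (long)(t2 - t1));
        return 0;
    }
    ```

    As the thread notes, only a measurement on the target machine settles
    the question; the sketch just makes the two strategies comparable.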

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Scott Lurndal on Fri Jun 27 14:52:42 2025
    On Thu, 26 Jun 2025 21:09:37 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Thu, 26 Jun 2025 12:31:32 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Mon, 28 Apr 2025 16:27:38 +0300, Michael S wrote:
    IMHO, a need for a common name for IEEE binary128 exists for
    quite some time. For IEEE binary256 the real need didn't emerge
    yet. But it will emerge in the hopefully near future.

    A thought: the main advantage of binary types over decimal is
    supposed to be speed. Once you get up to larger precisions like
    that, the speed advantage becomes less clear, particularly since
    hardware support doesn’t seem forthcoming any time soon. There
    are already variable-precision decimal floating-point libraries
    available. And with such calculations, C no longer offers a great
    performance advantage over a higher-level language, so you might
    as well use the higher-level language.

    <https://docs.python.org/3/library/decimal.html>

    I think there's an implicit assumption that, all else being equal,
    decimal is better than binary. That's true in some contexts,
    but not in all.

    My implicit assumption is that other things being equal binary is
    better than anything else because it has the lowest variation in ULP
    to value ratio.
    The fact that other things being equal binary fp also tends to be
    faster is a nice secondary advantage. For example, it is easy to
    imagine hardware that implements S/360 style hex floating point as
    fast or a little faster than binary fp, but the numeric properties of it
    are much worse than sane implementations of binary fp.

    But not all decimal floating point implementations used "hex floating
    point".


    IBM's Hex floating point is not decimal. It's hex (base 16).

    Burroughs medium systems had BCD floating point - one of the
    advantages was that it could exactly represent any floating point
    number that could be specified with a 100 digit mantissa and a 2
    digit exponent.

    This was a memory-to-memory architecture, so no floating point
    registers to worry about.

    For financial calculations, a fixed point format (up to 100 digits)
    was used. Using an implicit decimal point, rounding was a matter of
    where the implicit decimal point was located in the up to 100 digit
    field; so do your calculations in mills and truncate the result field
    to the desired precision.


    For fixed point, anything "decimal" is even less useful than in floating
    point. I can't find any good explanation for the use of "decimal" things in
    some early computers except that their designers were, maybe, good
    engineers, but 2nd-rate thinkers.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Keith Thompson on Fri Jun 27 14:02:54 2025
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
    scott@slp53.sl.home (Scott Lurndal) writes:
    [...]
    But not all decimal floating point implementations used "hex floating point".

    Burroughs medium systems had BCD floating point - one of the advantages
    was that it could exactly represent any floating point number that
    could be specified with a 100 digit mantissa and a 2 digit exponent.

    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)

    Ah, but reading a BCD memory dump is a joy compared to a binary system :-)
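    Keith's ~83% figure follows from a simple bit count; for instance,
    three decimal digits need 10 bits in pure binary (2^10 = 1024 >= 1000)
    but 12 bits in BCD:

    ```c
    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        /* Find the smallest n with 2^n >= 1000: pure binary needs n bits
           for three decimal digits, while BCD spends 4 bits per digit. */
        int binary_bits = 0;
        for (unsigned v = 1; v < 1000; v <<= 1)
            binary_bits++;
        int bcd_bits = 3 * 4;

        printf("binary: %d bits, BCD: %d bits, efficiency: %d%%\n",
               binary_bits, bcd_bits, 100 * binary_bits / bcd_bits);

        assert(binary_bits == 10);
        assert(100 * binary_bits / bcd_bits == 83);  /* ~83% */
        return 0;
    }
    ```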

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Lawrence D'Oliveiro on Fri Jun 27 14:01:10 2025
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Thu, 26 Jun 2025 12:51:19 -0000 (UTC), Waldek Hebisch wrote:

    When working with such (low for me) precisions dynamic allocation of
    memory is major cost item, frequently more important than calculation.
    To avoid this cost one needs stack allocatation.

    What you may not realize is that, on current machines, there is about a
    100:1 speed difference between accessing CPU registers and accessing main >memory.

    Depends on whether you're accessing cache (3 or 4 cycle latency for L1),
    and at what cache level. Even a DRAM access can complete in less than
    100 ns.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Michael S on Fri Jun 27 20:48:23 2025
    On 27.06.2025 13:52, Michael S wrote:
    On Thu, 26 Jun 2025 21:09:37 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:
    [..]

    For fixed point, anything "decimal" is even less useful than in floating
    point. I can't find any good explanation for the use of "decimal" things in
    some early computers [...]

    If not already obvious from the hints given in this thread you can
    search for the respective keywords.

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Sat Jun 28 23:59:11 2025
    On Fri, 27 Jun 2025 14:52:42 +0300, Michael S wrote:

    For fixed point, anything "decimal" is even less useful than in floating
    point. I can't find any good explanation for the use of "decimal" things in
    some early computers except that their designers were, maybe, good
    engineers, but 2nd-rate thinkers.

    IEEE-754 now includes decimal floating-point formats in addition to the
    older binary ones. I think this was originally a separate spec (IEEE-854),
    but it got rolled into the 2008 revision of IEEE-754.

    Many numeric experts scoffed at IEEE-754 when it first came out,
    particularly the features that reduced the surprise factor for less-expert users. Decimal arithmetic is more of the same.

    Safety-razor syndrome never quite goes away, does it ...

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Keith Thompson on Sun Jun 29 05:03:30 2025
    [ Some technical troubles - in case this post appeared already 30
    minutes ago (I don't see it), please ignore this re-sent post. ]

    On 28.06.2025 02:56, Keith Thompson wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 27.06.2025 02:10, Keith Thompson wrote:
    scott@slp53.sl.home (Scott Lurndal) writes:
    [...]
    But not all decimal floating point implementations used "hex
    floating point".

    Burroughs medium systems had BCD floating point - one of the advantages
    was that it could exactly represent any floating point number that
    could be specified with a 100 digit mantissa and a 2 digit exponent.

    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)

    That's a problem of where your numbers stem from. "1/3" is a formula!

    1/3 is also a C expression with the value 0. But what I was
    referring to was the real number 1/3, the unique real number that
    yields one when multiplied by three.

    Yes, sure. That was also how I interpreted it; that you meant (in
    "C" parlance) 1.0/3.0.


    My point is that any choice of radix in a floating-point format
    means that there are going to be some useful real numbers you
    can't represent.

    Yes, sure. Sqrt(2.0) for example, or 'pi', or 'e', or your 1.0/3.0
    example. These numbers have in common that there's no finite length
    standard representation; you usually represent them as formulas (as
    in your example), or in computers as constants in abbreviated form.

    In numerics you have various places where errors appear in principle
    and accumulate. One of the errors is when transferred from (and to)
    external representation. Another one is when performing calculations
    with internally imprecise represented numbers.

    The point with decimal encoding addresses the lossless (and fast[*]) input/output of given [finite] numbers. Numbers that have been (and
    are) used e.g. in financial contexts (Billions of Euros and Cents).
    And you can also perform exact arithmetic in the typical operations
    (sum, multiply, subtract)[**] without errors.[***]

    Nowadays (with 64 bit integer arithmetic)[****] quasi as "standard"
    you could of course also use an integer-based fixed point arithmetic
    to handle large amounts with cent-value precision arithmetics (or
    similar finite numbers of real world entities).

    As an anecdotal add-on: There was once a fraud case where someone
    from the financial sector took all the (positive) sub-cent rounding
    factors from all transactions and accumulated them to transfer them
    to his own account. If you know how much money there's transferred
    you can imagine how fast you could get a multi-millionaire that way.

    [*] But that factor is probably and IMO not that important nowadays.

    [**] When you do statistics with division necessary, or things like
    compounded interest you cannot avoid rounding at some decimal place;
    but that are "local" effects. At this point numerics provides a lot
    more stuff (WRT errors and propagation) that has to be considered.

    [***] Try adding ten million 10-cent values (0.10) in "C" using a
    binary 'float' type; you'll notice a fast drift away from the exact
    sum:
    c=9296503   f=1000000.062500   (value reached with fewer terms)
    c=10000001  f=1087937.125000   (too large a value at exact terms)

    [****] Processor word sizes not that common, let alone guaranteed,
    in legacy systems.
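    Footnote [***] is easy to reproduce; a small sketch summing ten million
    10-cent values both ways (the exact float totals are
    implementation-dependent, but the drift is not):

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Sum n additions of 0.10 in binary float: every addition rounds. */
    static float float_total(long n)
    {
        float f = 0.0f;
        for (long i = 0; i < n; i++)
            f += 0.10f;
        return f;
    }

    /* Sum n additions of 10 cents in integer fixed point: exact. */
    static long long cents_total(long n)
    {
        long long cents = 0;
        for (long i = 0; i < n; i++)
            cents += 10;
        return cents;
    }

    int main(void)
    {
        long n = 10000000L;                    /* ten million 0.10 terms */
        float f = float_total(n);
        long long cents = cents_total(n);

        printf("float total:   %f\n", f);
        printf("integer cents: %lld.%02lld\n", cents / 100, cents % 100);

        assert(cents == 100000000LL);          /* exactly 1,000,000.00 */
        float drift = f - 1000000.0f;
        if (drift < 0.0f) drift = -drift;
        assert(drift > 1000.0f);               /* the float sum is far off */
        return 0;
    }
    ```

    Once the running float sum is large, each added 0.10f rounds to a
    multiple of the local ULP, so the error accumulates systematically
    rather than cancelling.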

    That's as true of decimal as it is of binary.
    (Trinary can represent 1/3, but can't represent 1/2.)

    Decimal can represent any number that can be exactly represented in
    binary *if* you have enough digits (because 10 is a multiple of 2),
    and many numbers like 0.1 that can't be represented exactly in
    binary, but at a cost -- that is worth paying in some contexts.
    (Scaled integers might sometimes be a good alternative.)

    I doubt that I'm saying anything you don't already know. I just
    wanted to clarify what I meant.

    Thanks. Yes.

    Please see my additions above also (mainly) just as clarification,
    especially in the light of some people despising the decimal format
    (and also the folks who invented it back then).

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Janis Papanagnou on Sat Jun 28 23:18:40 2025
    On 2025-06-28 23:03, Janis Papanagnou wrote:
    [ Some technical troubles - in case this post appeared already 30
    minutes ago (I don't see it), please ignore this re-sent post. ]

    On 28.06.2025 02:56, Keith Thompson wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 27.06.2025 02:10, Keith Thompson wrote:
    ...
    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)

    That's a problem of where your numbers stem from. "1/3" is a formula!

    1/3 is also a C expression with the value 0. But what I was
    referring to was the real number 1/3, the unique real number that
    yields one when multiplied by three.

    Yes, sure. That was also how I interpreted it; that you meant (in
    "C" parlance) 1.0/3.0.

    No, it is very much the point that the C expression 1.0/3.0 cannot have
    the value he's talking about (except in the unlikely event that
    FLT_RADIX is a multiple of 3).

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Richard Heathfield on Sun Jun 29 04:44:47 2025
    On 2025-06-26, Richard Heathfield <rjh@cpax.org.uk> wrote:
    On 26/06/2025 10:01, Lawrence D'Oliveiro wrote:
    C
    no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    Nothing is stopping you, but then comp.lang.c no longer offers
    you the facility to discuss your chosen language, so you might as
    well use the higher-level language's group.

    Even a broken clock is right once or twice in a 24h period.

    He did say that this advantage was in the manipulation
    of multi-precision integers, like big decimals.

    Indeed, most of the time is spent in the math routines themselves, not in
    what dispatches them. Calculations written in C, using a certain bignum library, won't be much faster than the same calculations in a higher-level language, using the same bignum library.

    A higher-level language may also have a compiler which does
    optimizations on the bignum code, such as CSE and constant folding,
    basically treating it the same as fixnum integers.

    C code consisting of calls into a bignum library will not be
    aggressively optimized. If you wastefully perform a calculation
    with constants that could be done at compile time, it almost
    certainly won't be.

    Example:

    (compile-toplevel '(expt 2 150))
    #<sys:vm-desc: a103620>
    (disassemble *1)
    data:
    0: 1427247692705959881058285969449495136382746624
    syms:
    code:
    0: 10000400 end d0
    instruction count:
    1
    #<sys:vm-desc: a103620>

    The compiled code just retrieves the bignum integer result from static data register d0. This is just from the compiler finding "expt" to be in a list of functions that are reducible at compile time over constant inputs; no special reasoning about large integers.

    But if you were to write the C code to initialize a bignum from 2, and one from 150, and then call the bignum exponentiation routine, I doubt you'd get the compiler to optimize all that away.

    Maybe with a sufficiently advanced link-time optimization ...

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Kaz Kylheku on Sun Jun 29 17:13:36 2025
    On Sun, 29 Jun 2025 04:44:47 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:

    On 2025-06-26, Richard Heathfield <rjh@cpax.org.uk> wrote:
    On 26/06/2025 10:01, Lawrence D'Oliveiro wrote:
    C
    no longer offers a great performance advantage over a higher-level
    language, so you might as well use the higher-level language.

    Nothing is stopping you, but then comp.lang.c no longer offers
    you the facility to discuss your chosen language, so you might as
    well use the higher-level language's group.

    Even a broken clock is right once or twice in a 24h period.

    He did say that this advantage was in the manipulation
    of multi-precision integers, like big decimals.

    Indeed, most of the time is spent in the math routines themselves,
    not in what dispatches them. Calculations written in C, using a
    certain bignum library, won't be much faster than the same
    calculations in a higher-level language, using the same bignum
    library.


    I did a few "native" python vs python+GMP vs C+GMP multiplication
    measurements ~6 months ago.
    See <20241225110505.00001733@yahoo.com> and followup.
    The end result is that python always loses to C+GMP by a significant
    margin except for the case of the python+GMP combo with absolutely huge
    numbers, much bigger ones than I would ever expect to use outside
    of benchmarks, where it comes close.
    "Native" python loses especially badly at bigger numbers because of
    less sophisticated algorithms.
    Python+GMP loses especially badly at smaller numbers. I do not know an
    exact reason, but would guess that it's somehow related to differences
    in memory management between python and GMP.

    A higher level language may also have a compiler which does
    optimizations on the bignum code, such as CSE and constant folding,
    basically treating it the same like fixnum integers.

    C code consisting of calls into a bignum library will not be
    aggressively optimized. If you wastefully perform a calculation
    with constants that could be done at compile time, it almost
    certainly won't be.

    Example:

    (compile-toplevel '(expt 2 150))
    #<sys:vm-desc: a103620>
    (disassemble *1)
    data:
    0: 1427247692705959881058285969449495136382746624
    syms:
    code:
    0: 10000400 end d0
    instruction count:
    1
    #<sys:vm-desc: a103620>

    The compiled code just retrieves the bignum integer result from
    static data register d0. This is just from the compiler finding
    "expt" to be in a list of functions that are reducible at compile
    time over constant inputs; no special reasoning about large integers.

    But if you were to write the C code to initialize a bignum from 5,
    and one from 150, and then call the bignum exponentiation routine, I
    doubt you'd get the compiler to optimize all that away.

    Maybe with a sufficiently advanced link-time optimization ...


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Lawrence D'Oliveiro on Sun Jun 29 09:23:01 2025
    On 2025-06-28 19:59, Lawrence D'Oliveiro wrote:
    On Fri, 27 Jun 2025 14:52:42 +0300, Michael S wrote:

    For fixed point, anything "decimal" is even less useful than in floating
    point. I can't find any good explanation for the use of "decimal" things in
    some early computers except that their designers were, maybe, good
    engineers, but 2nd-rate thinkers.

    IEEE-754 now includes decimal floating-point formats in addition to the
    older binary ones. I think this was originally a separate spec (IEEE-854), but it got rolled into the 2008 revision of IEEE-754.

    It's somewhat more complicated than that. IEEE-854 is a
    radix-independent standard, otherwise equivalent to IEEE-754. Basically, IEEE-754 is "IEEE-854 with radix==2". A conforming implementation of any version of C could also have used "IEEE-854 with radix==10". However,
    the decimal floating point formats added to IEEE-754 in 2008 were not
    simply "IEEE-854 with radix==10", and therefore could not have been used
    as standard floating types in earlier versions of the C standard. See <https://en.wikipedia.org/wiki/Decimal_floating_point> for more details.
    There are real systems that implement these new formats in hardware. A
    lot of wording was added and changed in the C standard to allow these
    new formats to be used as C's new decimal floating types.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to James Kuyper on Sun Jun 29 20:48:13 2025
    On 29.06.2025 05:18, James Kuyper wrote:
    On 2025-06-28 23:03, Janis Papanagnou wrote:
    [ Some technical troubles - in case this post appeared already 30
    minutes ago (I don't see it), please ignore this re-sent post. ]

    On 28.06.2025 02:56, Keith Thompson wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 27.06.2025 02:10, Keith Thompson wrote:
    ...
    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)
    That's a problem of where your numbers stem from. "1/3" is a formula!

    1/3 is also a C expression with the value 0. But what I was
    referring to was the real number 1/3, the unique real number that
    yields one when multiplied by three.

    Yes, sure. That was also how I interpreted it; that you meant (in
    "C" parlance) 1.0/3.0.

    No, it is very much the point that the C expression 1.0/3.0 cannot have
    the value he's talking about [...]

    I was talking about the Real Value. Indicated by the formula '1/3'.
    When Keith spoke about that being '0' I refined it to '1.0/3.0' to
    address this misunderstanding. (That's all to say here about that.)

    (For the _main points_ I tried to express I refer you to the longer
    post I just posted in reply to Keith's post.)

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Keith Thompson on Sun Jun 29 20:40:34 2025
    On 29.06.2025 05:51, Keith Thompson wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 28.06.2025 02:56, Keith Thompson wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 27.06.2025 02:10, Keith Thompson wrote:
    scott@slp53.sl.home (Scott Lurndal) writes:
    [...]
    But not all decimal floating point implementations used "hex
    floating point".

    Burroughs medium systems had BCD floating point - one of the advantages
    was that it could exactly represent any floating point number that
    could be specified with a 100 digit mantissa and a 2 digit exponent.

    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)

    That's a problem of where your numbers stem from. "1/3" is a formula!

    1/3 is also a C expression with the value 0. But what I was
    referring to was the real number 1/3, the unique real number that
    yields one when multiplied by three.

    Yes, sure. That was also how I interpreted it; that you meant (in
    "C" parlance) 1.0/3.0.

    As mentioned elsethread, I was referring to the real value.

    Yes, me too, when I saw your original 1/3. - When you *then* spoke
    about that being 0 in "C" (with integer division), I explained that
    I had taken it as the real value, which I still think is what you
    were saying with "1/3"; but (to address your 1/3==0) I added that
    I meant the value that you would get in "C" [approximately] by
    1.0/3.0, which of course differs from the real math number.

    I guess we might have been talking at cross purposes.

    What I was trying to explain were different things on different
    levels.

    a) Errors on input/output conversion.

    the value 1.33 - BCD no errors, two's-complement binary w/ errors;
    the real value 1.333333... - generally an error (infinite string);
    the value 0.10 - in BCD no errors, in binary errors.

    b) Errors in calculations.

    all exact internal representations of external quantities can be
    calculated correctly (under the previously presented conditions)
    in decimal; examples: 0.10, 1.33, 1.33333333333333333333333, but
    *not* 1.33333333333333333333333... (the infinite form, whether
    expressed as depicted here with '...' or expressed as the
    formula '1/3').

    1.0/3.0 as a C expression yields a value of type double, typically 0.333333333333333314829616256247390992939472198486328125 [...]

    There are numbers that can be expressed accurately in binary; as
    0.5, 1.0, 2.0 (for example). Those can also be expressed accurately
    with decimal encoding.

    Other finite numbers/number-sequences can be expressed accurately
    with decimal encoding, as 0.1, 1.33 (for example), but only specific
    ones can be represented accurately with binary encoding.

    With infinite sequences of digits you will have problems with both
    internal representations (binary, decimal); as you see with specific
    real values as 'sqrt(2)', 'pi', 'e', '1/3' (for example) which are
    cut at some decimal place internally depending on supported "register
    width".

    [...]

    In numerics you have various places where errors appear in principle
    and accumulate. One of the errors is when transferred from (and to)
    external representation. Another one is when performing calculations
    with internally imprecise represented numbers.

    The point with decimal encoding addresses the lossless (and fast[*])
    input/output of given [finite] numbers. Numbers that have been (and
    are) used e.g. in financial contexts (Billions of Euros and Cents).
    And you can also perform exact arithmetic in the typical operations
    (sum, multiply, subtract)[**] without errors.[***]

    Which is convenient only because we happen to use decimal notation
    when writing numbers.

    But that exactly is the point! With decimal encoding you get an exact
    internal picture of the external representations of the numbers, if
    only because the external representations are finite. (The same holds
    for the output.) With binary encoding you have the first degradation
    during that I/O process. Decimal encoding, OTOH, is robust here.

    That's why it's so advantageous specifically for the financial sector.
    It would not be the best choice where a lot of internal calculations
    are done, as (for example) in calculating hydrodynamic processes.

    Later, when it comes to internal calculations, yet more deficiencies
    appear (with both encodings; but decimal is more robust in the basic operations, where in binary the previous errors contribute to further degradation).

    (I completely left out algorithmic error management here (numerics),
    because it applies in principle to all algorithms [mostly] independent
    of the encoding; this would go too far.)

    BTW, not only mainframes and the major programming languages used for
    financial software supported decimal encoding; pocket calculators
    did that, too. (For example, the BASIC-programmable and interactively
    usable Sharp PC-1401 supported real-number processing using decimal
    encoding: 10 visible BCD digits, plus 2 "hidden" digits for internal
    rounding, a 2-digit exponent, plus sign information, etc., all in
    all 8 bytes; implemented with in-memory calculations, not done in
    registers.)

    Decimal encoding: it's fast, has good properties (WRT errors and error propagation), but requires more space (in case that matters).

    Janis


    [...]


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Janis Papanagnou on Mon Jun 30 21:59:32 2025
    On 2025-06-29 14:48, Janis Papanagnou wrote:
    On 29.06.2025 05:18, James Kuyper wrote:
    On 2025-06-28 23:03, Janis Papanagnou wrote:
    [ Some technical troubles - in case this post appeared already 30
    minutes ago (I don't see it), please ignore this re-sent post. ]

    On 28.06.2025 02:56, Keith Thompson wrote:
    Janis Papanagnou <janis_papanagnou+ng@hotmail.com> writes:
    On 27.06.2025 02:10, Keith Thompson wrote:
    ...
    BCD uses 4 bits to represent values from 0 to 9. That's about 83%
    efficient relative to pure binary. (And it still can't represent 1/3.)

    That's a problem of where your numbers stem from. "1/3" is a formula!
    1/3 is also a C expression with the value 0. But what I was
    referring to was the real number 1/3, the unique real number that
    yields one when multiplied by three.

    Yes, sure. That was also how I interpreted it; that you meant (in
    "C" parlance) 1.0/3.0.

    No, it is very much the point that the C expression 1.0/3.0 cannot have
    the value he's talking about [...]

    I was talking about the Real Value. Indicated by the formula '1/3'.
    When Keith spoke about that being '0' I refined it to '1.0/3.0' to
    address this misunderstanding. (That's all to say here about that.)

    The real number 1/3 has a different value from the C expression 1/3
    (which is 0), and also from the C expression 1.0/3.0 (unless FLT_RADIX
    is a multiple of 3). It only spreads confusion to refer to 1.0/3.0 as if
    it had the value that Keith was talking about.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Richard Heathfield on Tue Jul 15 19:41:51 2025
    On Fri, 27 Jun 2025 02:40:58 +0100, Richard Heathfield wrote:

    On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:

    [...]if C is going to become more suitable for such high-
    precision calculations, it might need to become more Python-like.

    C is not in search of a reason to exist.

    Not in traditional fixed-precision arithmetic, anyway -- at least after it fully embraced IEEE 754.

    With higher-precision arithmetic, on the other hand, the traditional C paradigms may not be so suitable.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to Lawrence D'Oliveiro on Wed Jul 16 03:55:14 2025
    On 15/07/2025 20:41, Lawrence D'Oliveiro wrote:
    On Fri, 27 Jun 2025 02:40:58 +0100, Richard Heathfield wrote:

    On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:

    [...]if C is going to become more suitable for such high-
    precision calculations, it might need to become more Python-like.

    C is not in search of a reason to exist.

    Not in traditional fixed-precision arithmetic, anyway -- at least after it fully embraced IEEE 754.

    With higher-precision arithmetic, on the other hand, the traditional C paradigms may not be so suitable.

    If you want something else, you know where to find it. There is
    no value in eroding the differences in all languages until only
    one universal language emerges. Vivat differentia.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Richard Heathfield on Sun Jul 20 00:16:56 2025
    On Wed, 16 Jul 2025 03:55:14 +0100, Richard Heathfield wrote:

    On 15/07/2025 20:41, Lawrence D'Oliveiro wrote:

    On Fri, 27 Jun 2025 02:40:58 +0100, Richard Heathfield wrote:

    On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:

    [...]if C is going to become more suitable for such high-
    precision calculations, it might need to become more Python-like.

    C is not in search of a reason to exist.

    Not in traditional fixed-precision arithmetic, anyway -- at least
    after it fully embraced IEEE 754.

    With higher-precision arithmetic, on the other hand, the
    traditional C paradigms may not be so suitable.

    If you want something else, you know where to find it. There is no
    value in eroding the differences in all languages until only one
    universal language emerges. Vivat differentia.

    You sound as though you don’t want languages copying ideas from each
    other.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to Lawrence D'Oliveiro on Sun Jul 20 07:58:53 2025
    On 20/07/2025 01:16, Lawrence D'Oliveiro wrote:
    On Wed, 16 Jul 2025 03:55:14 +0100, Richard Heathfield wrote:

    On 15/07/2025 20:41, Lawrence D'Oliveiro wrote:

    On Fri, 27 Jun 2025 02:40:58 +0100, Richard Heathfield wrote:

    On 27/06/2025 01:39, Lawrence D'Oliveiro wrote:

    [...]if C is going to become more suitable for such high-
    precision calculations, it might need to become more Python-like.

    C is not in search of a reason to exist.

    Not in traditional fixed-precision arithmetic, anyway -- at least
    after it fully embraced IEEE 754.

    With higher-precision arithmetic, on the other hand, the
    traditional C paradigms may not be so suitable.

    If you want something else, you know where to find it. There is no
    value in eroding the differences in all languages until only one
    universal language emerges. Vivat differentia.

    You sound as though you don’t want languages copying ideas from each
    other.

    Good, because I don't.

    There's nothing wrong with new languages pinching ideas from old
    languages - that's creativity and progress, especially when those
    ideas are combined in new and interesting ways, and you can keep
    on adding those ideas right up until your second reference
    implementation goes public.

    But going the other way turns a programming language into a
    constantly moving target that it's impossible for more than a
    handful of people to master - the handful in question being those
    who decide what's in and what's out. This is bad for programmers'
    expertise and bad for the industry.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to Richard Heathfield on Sun Jul 20 11:28:54 2025
    On 20.07.2025 08:58, Richard Heathfield wrote:
    On 20/07/2025 01:16, Lawrence D'Oliveiro wrote:
    On Wed, 16 Jul 2025 03:55:14 +0100, Richard Heathfield wrote:
    [...]

    You sound as though you don’t want languages copying ideas from each
    other.

    Hmm.. - this is an interesting thought. As an instant reflex I'd agree
    with the advantage of picking "good" ideas from other languages. Upon
    reconsideration I have some doubts, though; not least because some
    ideas may fit a language while others really don't. To me many
    languages give the impression of having been patched up instead of
    being well designed from scratch - either evolving by featuritis of
    "good ideas" or needing changes to address inherent shortcomings of
    the basic language design.

    [...]
    There's nothing wrong with new languages pinching ideas from old
    languages - that's creativity and progress, especially when those ideas
    are combined in new and interesting ways, and you can keep on adding
    those ideas right up until your second reference implementation goes
    public.

    But going the other way turns a programming language into a constantly
    moving target that it's impossible for more than a handful of people to master - the handful in question being those who decide what's in and
    what's out. This is bad for programmers' expertise and bad for the
    industry.

    Incompatibilities or change of semantics between versions would be bad!
    For coherently designed and consistently enhanced languages that might
    be less a problem. Having a well designed "Common Language Base" would
    not impose too much effort to master [coherently matching] extensions.
    Of course here we are speaking (only) about "C", specifically, so the
    basic language preconditions are set (and its decades long evolution
    path clearly visible).

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to James Kuyper on Tue Jul 29 00:56:14 2025
    On Sun, 29 Jun 2025 09:23:01 -0400, James Kuyper wrote:

    It's somewhat more complicated than that. IEEE-784 is a
    radix-independent standard, otherwise equivalent to IEEE-754.

    Did you mean IEEE-854?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to Lawrence D'Oliveiro on Tue Jul 29 21:13:27 2025
    On 2025-07-28 20:56, Lawrence D'Oliveiro wrote:
    On Sun, 29 Jun 2025 09:23:01 -0400, James Kuyper wrote:

    It's somewhat more complicated than that. IEEE-784 is a
    radix-independent standard, otherwise equivalent to IEEE-754.

    Did you mean IEEE-854?

    Yes - Sorry for the confusion.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to dave_thompson_2@comcast.net on Tue Jul 29 21:18:48 2025
    On 2025-07-29 10:49, dave_thompson_2@comcast.net wrote:
    ...
    Astronomers count Julian Day Numbers from 4713 BC proleptic Julian.
    This was chosen to ensure that all astronomical observations or events
    in recorded history have positive dates.

    While that is one benefit of using that date, it was in fact chosen
    because several different astronomical cycles associated with common
    ancient calendar systems all align together at that time. That makes
    conversion between Julian Days and any of those calendars simpler.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From dave_thompson_2@comcast.net@21:1/5 to Keith.S.Thompson+u@gmail.com on Tue Jul 29 10:49:19 2025
    (Sorry for delay, this got stuck)

    On Mon, 14 Apr 2025 15:56:56 -0700, Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Huge numbers of systems already use the perfectly reasonable POSIX
    epoch, 1970-01-01 00:00:00 UTC. I can think of no good reason to
    standardize anything else.

    NNTP uses unsigned-32bit seconds from 1900-01-01 'UTC' (really a blend
    of GMT then TAI, aligned like UTC but not actually representing the
    leap seconds; yes, that's a bodge). It will wrap in 2036, about 2 years
    before programs and data (still) using signed-32bit seconds from 1970
    more famously will.

    Astronomers count Julian Day Numbers from 4713 BC proleptic Julian.
    This was chosen to ensure that all astronomical observations or events
    in recorded history have positive dates.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)