• Suggested method for returning a string from a C program?

    From DFS@21:1/5 to All on Tue Mar 18 21:38:55 2025
    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/

    For instance: Weird Algorithm
    https://cses.fi/problemset/task/1068

    My code works fine locally (prints the correct solution to the console),
    but when I submit the .c file the auto-tester flags it with 'runtime
    error' and says the output is empty.

    ------------------------------------------------------------
    // If n is even, divide it by two.
    // If n is odd, multiply it by three and add one.
    // Repeat until n is one.
    // n = 3: output is 3 10 5 16 8 4 2 1


    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        int n = atoi(argv[1]);
        int len = 0;
        char result[10000] = "";
        sprintf(result, "%d ", n);

        while (1) {
            if ((n % 2) == 0)
                { n /= 2; }
            else
                { n = (n * 3) + 1; }

            if (n != 1) {
                len = strlen(result);
                sprintf(result + len, "%d ", n);
            }
            else
                break;
        }

        len = strlen(result);
        sprintf(result + len, "1 ");
        printf("%s\n", result);

        return 0;
    }
    ------------------------------------------------------------

    Any ideas?
    Thanks

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From DFS@21:1/5 to Keith Thompson on Tue Mar 18 22:43:47 2025
    On 3/18/2025 10:05 PM, Keith Thompson wrote:
    DFS <nospam@dfs.com> writes:
    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/

    For instance: Weird Algorithm
    https://cses.fi/problemset/task/1068

    My code works fine locally (prints the correct solution to the
    console), but when I submit the .c file the auto-tester flags it with
    'runtime error' and says the output is empty.

    <snip code>

    I don't see any problem with the code, and neither does gcc on
    my system.

    It also compiles cleanly on theirs.


    But the code you posted contains a number of NO-BREAK
    SPACE characters (0xa0). "clang -Wno-unicode-whitespace" accepts
    those characters without complaint, and gives non-fatal warnings
    without that option. gcc treats them as a fatal error.

    Those were probably added by one of our newsreaders.

    Here's what it looks like on Notepad++ (showing end of line symbols)

    https://imgur.com/DTX9fZG


    A minor point: You print a trailing space. I don't know whether
    the auto-tester will accept that.

    I removed that, and no such luck.


    Thanks for looking at it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to DFS on Tue Mar 18 20:07:08 2025
    DFS <nospam@dfs.com> writes:

    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/

    For instance: Weird Algorithm
    https://cses.fi/problemset/task/1068

    My code works fine locally (prints the correct solution to the
    console), but when I submit the .c file the auto-tester flags it with
    'runtime error' and says the output is empty.

    <snip code>

    Any ideas?

    Have you thought about how large the value of 'n' can
    become inside the while() loop?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to DFS on Wed Mar 19 04:01:48 2025
    On 19/03/2025 03:34, DFS wrote:

    <snip>


    But my main concern right now is getting my submission to work
    with their auto-tester.  All it says is 'runtime error' and
    'empty output'.

    Any ideas what I might be doing wrong?


    $ file post.txt
    post.txt: news, Unicode text, UTF-8 text

    That's not going to help.

    $ hexdump post.txt | cut -b 9-48 | grep " [89a-f][0-9a-f]"
    5d5b 0a29 0a7b c320 2082 82c3 c320 2082
    7261 7667 315b 295d 0a3b c320 2082 82c3
    c320 2082 82c3 6920 746e 6c20 6e65 3d20
    3020 0a3b c320 2082 82c3 c320 2082 82c3
    3030 205d 203d 2222 0a3b c320 2082 82c3
    c320 2082 82c3 7320 7270 6e69 6674 7228
    0a3b 200a 82c3 c320 2082 82c3 c320 2082
    6220 6572 6b61 0a3b c320 2082 82c3 c320
    2082 82c3 7d20 0a0a c320 2082 82c3 c320
    2082 82c3 6c20 6e65 3d20 7320 7274 656c
    286e 6572 7573 746c 3b29 200a 82c3 c320
    2082 82c3 c320 2082 7073 6972 746e 2866
    2220 3b29 200a 82c3 c320 2082 82c3 c320
    6572 7573 746c 3b29 0a0a c320 2082 82c3
    c320 2082 82c3 7220 7465 7275 206e 3b30

    ASCII is a 7-bit code. You have a fair bit of 8-bit noise mixed
    in. That will probably have to go.
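    Richard's hexdump check can be reproduced more directly. A minimal
    sketch for locating non-ASCII bytes in a source file (the -P flag is
    GNU grep-specific; the demo filename is illustrative):

    ```shell
    # Create a demo file containing a UTF-8 NO-BREAK SPACE (bytes 0xC2 0xA0),
    # the same kind of 8-bit noise found in the posted code.
    printf 'int x;\xc2\xa0int y;\n' > nbsp_demo.c

    # Report any line containing a byte outside the 7-bit ASCII range.
    grep -nP '[^\x00-\x7F]' nbsp_demo.c
    ```

    Running `file nbsp_demo.c` would likewise report UTF-8 text rather
    than plain ASCII.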

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From DFS@21:1/5 to Tim Rentsch on Tue Mar 18 23:34:33 2025
    On 3/18/2025 11:07 PM, Tim Rentsch wrote:
    DFS <nospam@dfs.com> writes:

    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/

    For instance: Weird Algorithm
    https://cses.fi/problemset/task/1068

    My code works fine locally (prints the correct solution to the
    console), but when I submit the .c file the auto-tester flags it with
    'runtime error' and says the output is empty.

    <snip code>

    Any ideas?

    Have you thought about how large the value of 'n' can
    become inside the while() loop?


    Each algorithm problem has a constraint you're supposed to consider
    (which I figure is the largest value they test for).

    For this 'weird algorithm' problem, the constraint is 1 <= n <= 10^6.

    The output of n = 999999 is:

    999999 2999998 1499999 4499998 2249999 6749998 3374999 10124998 5062499 15187498 7593749 22781248 11390624 5695312 2847656 1423828 711914 355957 1067872 533936 266968 133484 66742 33371 100114 50057 150172 75086 37543
    112630 56315 168946 84473 253420 126710 63355 190066 95033 285100 142550
    71275 213826 106913 320740 160370 80185 240556 120278 60139 180418 90209
    270628 135314 67657 202972 101486 50743 152230 76115 228346 114173
    342520 171260 85630 42815 128446 64223 192670 96335 289006 144503 433510
    216755 650266 325133 975400 487700 243850 121925 365776 182888 91444
    45722 22861 68584 34292 17146 8573 25720 12860 6430 3215 9646 4823 14470
    7235 21706 10853 32560 16280 8140 4070 2035 6106 3053 9160 4580 2290
    1145 3436 1718 859 2578 1289 3868 1934 967 2902 1451 4354 2177 6532 3266
    1633 4900 2450 1225 3676 1838 919 2758 1379 4138 2069 6208 3104 1552 776
    388 194 97 292 146 73 220 110 55 166 83 250 125 376 188 94 47 142 71 214
    107 322 161 484 242 121 364 182 91 274 137 412 206 103 310 155 466 233
    700 350 175 526 263 790 395 1186 593 1780 890 445 1336 668 334 167 502
    251 754 377 1132 566 283 850 425 1276 638 319 958 479 1438 719 2158 1079
    3238 1619 4858 2429 7288 3644 1822 911 2734 1367 4102 2051 6154 3077
    9232 4616 2308 1154 577 1732 866 433 1300 650 325 976 488 244 122 61 184
    92 46 23 70 35 106 53 160 80 40 20 10 5 16 8 4 2 1

    The largest n in there is 22781248.


    So my variables are plenty big enough to handle the constraint and output.

    But my main concern right now is getting my submission to work with
    their auto-tester. All it says is 'runtime error' and 'empty output'.

    Any ideas what I might be doing wrong?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From DFS@21:1/5 to Keith Thompson on Wed Mar 19 00:42:57 2025
    On 3/18/2025 11:26 PM, Keith Thompson wrote:
    DFS <nospam@dfs.com> writes:


    There's your problem.

    https://cses.fi/problemset/text/2433

    "In all problems you should read input from standard input and write
    output to standard output."

    ha! It usually helps to read the instructions first.


    The autotester expects your program to read arguments from stdin, not
    from command line arguments.

    It probably passes no arguments to your program, so argv[1] is a
    null pointer. It's likely your program compiles (assuming the NBSP
    characters were added during posting) and crashes at runtime,
    producing no output.


    I KNEW clc would come through!

    Pretty easy fixes:

    1 use scanf()
    2 update int to long
    3 handle special case of n = 1
    4 instead of collecting the results in a char variable, I print
    them as they're calculated
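    Fix 1 is the crucial one: with no command-line arguments, argv[1] is
    a null pointer, and atoi(argv[1]) crashes before anything is printed.
    A minimal sketch of a guard (illustrative, not DFS's exact code):

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: read n from argv[1] when an argument is present, otherwise
       fall back to stdin -- never dereference argv[1] blindly. */
    static long read_n(int argc, char *argv[])
    {
        long n = 0;
        if (argc > 1)
            n = strtol(argv[1], NULL, 10);   /* argument supplied */
        else if (scanf("%ld", &n) != 1)      /* otherwise read stdin */
            n = -1;                          /* signal missing/bad input */
        return n;
    }

    int main(int argc, char *argv[])
    {
        printf("%ld\n", read_n(argc, argv));
        return 0;
    }
    ```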

    The algorithm part was very simple and correct. Later ones won't be so
    easy. I coded 4 so far (but just submitted this one here), and plan on
    doing all 300.

    https://imgur.com/bq0pKIw

    Did you hear a boom?

    Thanks again!


    updated code that passes:
    ===============================================================
    // If n is even, divide it by two.
    // If n is odd, multiply it by three and add one.
    // Repeat until n is one.
    // example: the sequence for n=3 is 3 10 5 16 8 4 2 1

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        long n = 0;
        scanf("%ld", &n);
        printf("%ld ", n);
        while (1) {
            if (n == 1) { exit(0); }

            if ((n % 2) == 0)
                { n /= 2; }
            else
                { n = (n * 3) + 1; }

            if (n != 1)
                { printf("%ld ", n); }
            else
                break;
        }
        printf("1\n");
        return 0;
    }
    ===============================================================

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to DFS on Wed Mar 19 04:51:02 2025
    On 19/03/2025 04:42, DFS wrote:
    Pretty easy fixes:

    1 use scanf()
    2 update int to long
    3 handle special case of n = 1
    4 instead of collecting the results in a char variable, I print
      them as they're calculated

    You've also fixed another glitch which may or may not have been
    significant:

    $ file post2.txt
    post2.txt: news, ASCII text

    So wherever that UTF-8 came from, it's gone now.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From DFS@21:1/5 to Richard Heathfield on Wed Mar 19 01:02:04 2025
    On 3/19/2025 12:51 AM, Richard Heathfield wrote:
    On 19/03/2025 04:42, DFS wrote:
    Pretty easy fixes:

    1 use scanf()
    2 update int to long
    3 handle special case of n = 1
    4 instead of collecting the results in a char variable, I print
       them as they're calculated

    You've also fixed another glitch which may or may not have been
    significant:

    $ file post2.txt
    post2.txt: news, ASCII text

    So wherever that UTF-8 came from, it's gone now.

    Cool. Thanks for looking at it.

    Hey, how are your C book (from 2000) sales? Thought about a new edition?

    https://www.amazon.com/C-Unleashed-Richard-Heathfield/dp/0672318962

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From DFS@21:1/5 to Tim Rentsch on Wed Mar 19 00:38:44 2025
    On 3/18/2025 11:07 PM, Tim Rentsch wrote:


    Have you thought about how large the value of 'n' can
    become inside the while() loop?

    I was too smug in my first reply. After Keith pointed out I needed to
    read from stdin, I submitted the code again and it passed some tests but
    failed with 'OUTPUT LIMIT EXCEEDED' when n = 159487.

    Updating int to long worked, and now I'm bona fide!

    So thanks.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to DFS on Wed Mar 19 05:23:23 2025
    On 19/03/2025 05:02, DFS wrote:
    On 3/19/2025 12:51 AM, Richard Heathfield wrote:
    On 19/03/2025 04:42, DFS wrote:
    Pretty easy fixes:

    1 use scanf()
    2 update int to long
    3 handle special case of n = 1
    4 instead of collecting the results in a char variable, I print
       them as they're calculated

    You've also fixed another glitch which may or may not have been
    significant:

    $ file post2.txt
    post2.txt: news, ASCII text

    So wherever that UTF-8 came from, it's gone now.

    Cool.  Thanks for looking at it.

    Hey, how are your C book (from 2000) sales?  Thought about a new
    edition?

    I'm not sure a new edition is necessary, but if it is to be
    written it would be better served by someone like Keith or Tim,
    both of whom have (as I have not) kept up with the
    million-and-one changes that appear to have assailed the simple
    language I once enjoyed.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to DFS on Tue Mar 18 22:27:27 2025
    DFS <nospam@dfs.com> writes:

    On 3/18/2025 11:07 PM, Tim Rentsch wrote:


    Have you thought about how large the value of 'n' can
    become inside the while() loop?

    I was too smug in my first reply. [...]

    Yes, I knew that already. Did you think I asked the question
    without having first investigated the problem?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ike Naar@21:1/5 to DFS on Wed Mar 19 07:16:29 2025
    On 2025-03-19, DFS <nospam@dfs.com> wrote:

    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/

    For instance: Weird Algorithm
    https://cses.fi/problemset/task/1068

    My code works fine locally (prints the correct solution to the console),
    but when I submit the .c file the auto-tester flags it with 'runtime
    error' and says the output is empty.

    <snip code>

    Any ideas?

    What happens if the input is a negative number?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to DFS on Wed Mar 19 11:55:50 2025
    On Wed, 19 Mar 2025 00:38:44 -0400
    DFS <nospam@dfs.com> wrote:

    On 3/18/2025 11:07 PM, Tim Rentsch wrote:


    Have you thought about how large the value of 'n' can
    become inside the while() loop?

    I was too smug in my first reply. After Keith pointed out I needed
    to read from stdin, I submitted the code again and it passed some
    tests but failed with 'OUTPUT LIMIT EXCEEDED' when n = 159487.

    Updating int to long worked, and now I'm bona fide!

    So thanks.

    What you did happens to be sufficient for a particular environment
    (supposedly, x86-64 Linux) used both by yourself and by the server that
    tests results.
    In more general case, 'long' is not guaranteed to handle numbers in
    range up to 18,997,161,173 that can happen in this test.
    Something like int64_t would be safer.
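    Michael's suggestion can be sketched as follows; PRId64 from
    <inttypes.h> gives a portable printf specifier for int64_t. The peak
    value in the assertion below is the one DFS reports further down the
    thread for the starting value 704511:

    ```c
    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    /* Sketch: track the peak of the Collatz sequence in a fixed-width
       int64_t, which is guaranteed 64 bits on every platform, unlike
       'long'. */
    static int64_t collatz_peak(int64_t n)
    {
        int64_t peak = n;
        while (n != 1) {
            n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
            if (n > peak)
                peak = n;
        }
        return peak;
    }

    int main(void)
    {
        /* 704511 is the input whose sequence reaches the largest value
           for any start below 10^6 (per DFS's measurement). */
        printf("peak(704511) = %" PRId64 "\n", collatz_peak(704511));
        return 0;
    }
    ```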

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to DFS on Wed Mar 19 12:36:47 2025
    On Tue, 18 Mar 2025 21:38:55 -0400
    DFS <nospam@dfs.com> wrote:

    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/

    For instance: Weird Algorithm
    https://cses.fi/problemset/task/1068


    It is not an interesting programming exercise. But it looks to me
    like a challenging math exercise. I mean, how could we give a
    not-too-pessimistic estimate for the upper bound of the length of
    the sequence that starts at a given n, without running the full
    sequence? Or an estimate for the maximal value in the sequence?
    So far, I have found no answers.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to DFS on Wed Mar 19 10:15:58 2025
    On 19/03/2025 01:38, DFS wrote:

    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/

    For instance: Weird Algorithm
    https://cses.fi/problemset/task/1068

    This is related to the Collatz Conjecture. What's weird is not
    mentioning it.


                    len = strlen(result);
                    sprintf(result + len, "%d ", n);

    And what's odd here is collating the results in a string (especially
    when the possible number of steps is unknown). Why not just print it
    straight to the console?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to bart on Wed Mar 19 12:40:50 2025
    On Wed, 19 Mar 2025 10:15:58 +0000
    bart <bc@freeuk.com> wrote:

    On 19/03/2025 01:38, DFS wrote:

    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/

    For instance: Weird Algorithm
    https://cses.fi/problemset/task/1068

    This is related to the Collatz Conjecture. What's weird is not
    mentioning it.

    Thank you. Wikipedia article about Collatz Conjecture is a good reading.




                    len = strlen(result);
                    sprintf(result + len, "%d ", n);

    And what's odd here is collating the results in a string (especially
    when the possible number of steps is unknown). Why not just print it
    straight to the console?



    OP got it further down the thread.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From DFS@21:1/5 to bart on Wed Mar 19 09:03:33 2025
    On 3/19/2025 6:15 AM, bart wrote:
    On 19/03/2025 01:38, DFS wrote:

    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/

    For instance: Weird Algorithm
    https://cses.fi/problemset/task/1068

    This is related to the Collatz Conjecture. What's weird is not
    mentioning it.

    I wouldn't have known it was a famous math conjecture, but no doubt the
    author of the problem did.



                     len = strlen(result);
                     sprintf(result + len, "%d ", n);

    And what's odd here is collating the results in a string (especially
    when the possible number of steps is unknown). Why not just print it
    straight to the console?


    My initial submission used straight to console. It was rejected for
    untold reasons, so I tried a variety of changes, including collating.

    I went back to printing to console, and along with the other fixes the
    code was finally accepted.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From DFS@21:1/5 to Michael S on Wed Mar 19 09:13:12 2025
    On 3/19/2025 6:36 AM, Michael S wrote:
    On Tue, 18 Mar 2025 21:38:55 -0400
    DFS <nospam@dfs.com> wrote:

    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/

    For instance: Weird Algorithm
    https://cses.fi/problemset/task/1068


    It is not an interesting programming exercise.

    The early problems are easy. I plan on doing (or giving a good shot at)
    all 300.


    But it looks to me as a
    challenging math exercise. I mean, how could we give a not too
    pessimistic estimate for upper bound of length of the sequence that
    starts at given n without running a full sequence? Or estimate for
    maximal value in the sequence?
    So far, I found no answers.


    If you do find a proof, you'll get a blue ribbon and a free dinner (and
    your own Wikipedia entry).

    https://en.wikipedia.org/wiki/Collatz_conjecture

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Wed Mar 19 14:40:38 2025
    On Wed, 19 Mar 2025 09:03:33 -0400
    DFS <nospam@dfs.com> wibbled:
    On 3/19/2025 6:15 AM, bart wrote:
    On 19/03/2025 01:38, DFS wrote:

    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/

    For instance: Weird Algorithm
    https://cses.fi/problemset/task/1068

    This is related to the Collatz Conjecture. What's weird is not
    mentioning it.

    I wouldn't have known it was a famous math conjecture, but no doubt
    the author of the problem did.

    Reading Wikipedia, it looks like one of those dull problems
    mathematicians think up when they've got too much free time on their
    hands.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Muttley@DastardlyHQ.org on Wed Mar 19 17:39:38 2025
    On Wed, 19 Mar 2025 14:40:38 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:


    Reading wikipedia it looks like one of those dull problems
    mathematicians think up when they've got too much free time on their
    hands.


    Yeah, one of those dull problems that most of the times remain obscure,
    but occasionally end up providing the rest of world with things like
    public-key cryptography.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Wed Mar 19 15:42:06 2025
    On Wed, 19 Mar 2025 17:39:38 +0200
    Michael S <already5chosen@yahoo.com> wibbled:
    On Wed, 19 Mar 2025 14:40:38 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:


    Reading wikipedia it looks like one of those dull problems
    mathematicians think up when they've got too much free time on their
    hands.


    Yeah, one of those dull problems that most of the times remain obscure,
    but occasionally end up providing the rest of world with things like
    public-key cryptography.

    But 99.99% of the time doesn't.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From DFS@21:1/5 to Ike Naar on Wed Mar 19 11:25:41 2025
    On 3/19/2025 3:16 AM, Ike Naar wrote:
    On 2025-03-19, DFS <nospam@dfs.com> wrote:

    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/

    For instance: Weird Algorithm
    https://cses.fi/problemset/task/1068


    What happens if the input is a negative number?

    crash boom bang
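    A hedged sketch of the validation that would turn "crash boom bang"
    into a clean error; the bounds come from the problem's stated
    constraint 1 <= n <= 10^6 (the function name is illustrative):

    ```c
    #include <stdio.h>

    /* Sketch: reject non-positive or out-of-range input instead of
       looping forever or overflowing on negative n. */
    static int valid_start(long n)
    {
        return n >= 1 && n <= 1000000;
    }

    int main(void)
    {
        long n;
        if (scanf("%ld", &n) != 1 || !valid_start(n)) {
            fprintf(stderr, "expected 1 <= n <= 1000000\n");
            return 1;
        }
        printf("%ld ok\n", n);
        return 0;
    }
    ```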

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael S on Wed Mar 19 17:42:01 2025
    On 19/03/2025 11:40, Michael S wrote:
    On Wed, 19 Mar 2025 10:15:58 +0000
    bart <bc@freeuk.com> wrote:

    On 19/03/2025 01:38, DFS wrote:

    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/

    For instance: Weird Algorithm
    https://cses.fi/problemset/task/1068

    This is related to the Collatz Conjecture. What's weird is not
    mentioning it.

    Thank you. Wikipedia article about Collatz Conjecture is a good reading.



    There's a nice Veritasium video on it, at <https://www.youtube.com/watch?v=094y1Z2wpJg>.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to DFS on Wed Mar 19 17:38:12 2025
    On 19/03/2025 17:23, DFS wrote:
    On 3/19/2025 5:55 AM, Michael S wrote:
    On Wed, 19 Mar 2025 00:38:44 -0400
    DFS <nospam@dfs.com> wrote:

    On 3/18/2025 11:07 PM, Tim Rentsch wrote:


    Have you thought about how large the value of 'n' can
    become inside the while() loop?

    I was too smug in my first reply.  After Keith pointed out I needed
    to read from stdin, I submitted the code again and it passed some
    tests but failed with 'OUTPUT LIMIT EXCEEDED' when n = 159487.

    Updating int to long worked, and now I'm bona fide!

    So thanks.

    What you did happens to be sufficient for a particular environment
    (supposedly, x86-64 Linux) used both by yourself and by the server
    that tests results.
    In more general case, 'long' is not guaranteed to handle numbers in
    range up to 18,997,161,173 that can happen in this test.

    How did you determine that?

    By the language definition.


    ++++++++++++++++++++++++++++++++++++++++++++++

    5.2.4.2.1 Sizes of integer types <limits.h>

    [...]

    — minimum value for an object of type long int
    LONG_MIN
    -2147483647 // −(2^31 − 1)

    — maximum value for an object of type long int
    LONG_MAX
    +2147483647 // 2^31 − 1

    ++++++++++++++++++++++++++++++++++++++++++++++

    That is, the long int type is required to have a sign bit and at
    least 31 value bits, giving a guaranteed minimum range of
    -2147483647 to 2147483647. That's 2 thou mill.

    You can squeeze another bit out of it by going unsigned: 0 to
    4294967295. That's 4 thou mill.

    From C99 onwards you can use long long int to give you 63 (or 64
    for unsigned) value bits - printf with %lld or %llu. Roughly 9
    mill mill mill and 18 mill mill mill respectively.
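    Those guaranteed minimums can be checked against a platform's
    <limits.h> directly (a minimal sketch):

    ```c
    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* The standard only guarantees LONG_MAX >= 2147483647 ... */
        printf("LONG_MAX  = %ld\n", LONG_MAX);
        /* ... but LLONG_MAX must be at least 9223372036854775807 (C99),
           which comfortably covers the peaks seen in this problem. */
        printf("LLONG_MAX = %lld\n", LLONG_MAX);
        return 0;
    }
    ```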


    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to DFS on Wed Mar 19 13:40:10 2025
    On 3/19/25 13:23, DFS wrote:
    ...
        int64_t n = 0, max = 0, thismax = 0;
    ...
        printf("\nmax n = %lld reached at input = %d\n", max, input);
    ...
    You'll get compilation warnings about the printf specifier used with
    int64_t.

    Not if you use the correct specifier:
    #include <inttypes.h>
    printf("\nmax n = %" PRId64 " reached at input = %d\n", max, input);

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From DFS@21:1/5 to Michael S on Wed Mar 19 13:23:47 2025
    On 3/19/2025 5:55 AM, Michael S wrote:
    On Wed, 19 Mar 2025 00:38:44 -0400
    DFS <nospam@dfs.com> wrote:

    On 3/18/2025 11:07 PM, Tim Rentsch wrote:


    Have you thought about how large the value of 'n' can
    become inside the while() loop?

    I was too smug in my first reply. After Keith pointed out I needed
    to read from stdin, I submitted the code again and it passed some
    tests but failed with 'OUTPUT LIMIT EXCEEDED' when n = 159487.

    Updating int to long worked, and now I'm bona fide!

    So thanks.

    What you did happens to be sufficient for a particular environment
    (supposedly, x86-64 Linux) used both by yourself and by the server
    that tests results.
    In more general case, 'long' is not guaranteed to handle numbers in
    range up to 18,997,161,173 that can happen in this test.

    How did you determine that?

    I ran the program for inputs 1 to 10^6 (1 million):

    Windows11, Tiny C compiler
    $ ptime weird 1 1000000
    max n = 56991483520 reached at input = 704511
    Execution time: 0.559 s

    Kali Linux (Windows WSL), gcc
    $ time ./weird 1 1000000
    max n = 56991483520 reached at input = 704511
    real 0m0.330s

    (56,991,483,520)



    Something like int64_t would be safer.

    Indeed. My code bombed using long when the input n = 151177.

    Used int64_t and it didn't bomb.


    Thanks

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From DFS@21:1/5 to Tim Rentsch on Wed Mar 19 13:23:03 2025
    On 3/19/2025 1:27 AM, Tim Rentsch wrote:
    DFS <nospam@dfs.com> writes:

    On 3/18/2025 11:07 PM, Tim Rentsch wrote:


    Have you thought about how large the value of 'n' can
    become inside the while() loop?

    I was too smug in my first reply. [...]

    Yes, I knew that already. Did you think I asked the question
    without having first investigated the problem?



    I wouldn't presume.

    Did you investigate first?

    I just now did:

    gcc on Kali Linux (in Windows WSL)

    run it: $ ./weird start stop

    $time ./weird 1 1000000
    <1000000 lines will be output>
    max n = 56991483520 reached at input = 704511

    real 0m5.792s



    code
    -----------------------------------------------------------------
    // If n is even, divide it by two.
    // If n is odd, multiply it by three and add one.
    // Repeat until n is one.
    // example: the sequence for n=3 is 3 10 5 16 8 4 2 1

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    int main(int argc, char *argv[])
    {
        int steps, input;
        int startN = atoi(argv[1]);
        int stopN = atoi(argv[2]);
        int64_t n = 0, max = 0, thismax = 0;

        for (int i = startN; i <= stopN; i++) {

            n = i;
            steps = 1;
            thismax = n;
            while (1) {

                if ((n % 2) == 0)
                    { n /= 2; }
                else
                    { n = (n * 3) + 1; }

                if (n > max) { max = n; input = i; }
                if (n > thismax) { thismax = n; }
                steps++;

                if (i == 1) {
                    printf("input 1, max n = 1, steps = 1\n");
                    break;
                }

                if (n == 1) {
                    printf("input %d, max n = %6lld, steps = %4d\n", i,
                           thismax, steps);
                    break;
                }
            }
        }
        printf("\nmax n = %lld reached at input = %d\n", max, input);
        return 0;
    }

    -----------------------------------------------------------------

    You'll get compilation warnings about the printf specifier used with
    int64_t.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Richard Heathfield on Wed Mar 19 20:19:03 2025
    On Wed, 19 Mar 2025 17:38:12 +0000
    Richard Heathfield <rjh@cpax.org.uk> wrote:

    On 19/03/2025 17:23, DFS wrote:
    On 3/19/2025 5:55 AM, Michael S wrote:
    On Wed, 19 Mar 2025 00:38:44 -0400
    DFS <nospam@dfs.com> wrote:

    On 3/18/2025 11:07 PM, Tim Rentsch wrote:


    Have you thought about how large the value of 'n' can
    become inside the while() loop?

    I was too smug in my first reply.  After Keith pointed out I
    needed
    to read from stdin, I submitted the code again and it passed some
    tests but failed with 'OUTPUT LIMIT EXCEEDED' when n = 159487.

    Updating int to long worked, and now I'm bona fide!

    So thanks.

    What you did happens to be sufficient for a particular environment
    (supposedly, x86-64 Linux) used both by yourself and by the
    server that
    tests results.
    In more general case, 'long' is not guaranteed to handle
    numbers in
    range up to 18,997,161,173 that can happen in this test.

    How did you determine that?

    By the language definition.


    Well, not exactly.
    I never read C Standard docs except the very first one, which I read
    more than 33 years ago, so not very likely to remember it literally.
    Let's say that I know this particular bit of trivia from 1st hand
    experience and from reading few ABI definitions.


    ++++++++++++++++++++++++++++++++++++++++++++++

    5.2.4.2.1 Sizes of integer types <limits.h>

    [...]

    — minimum value for an object of type long int
    LONG_MIN
    -2147483647 // −(2^31 − 1)

    — maximum value for an object of type long int
    LONG_MAX
    +2147483647 // 2^31 − 1

    ++++++++++++++++++++++++++++++++++++++++++++++

    That is, the long int type is required to have a sign bit and at
    least 31 value bits, giving a guaranteed minimum range of
    -2147483647 to 2147483647. That's 2 thou mill.

    You can squeeze another bit out of it by going unsigned: 0 to
    4294967295. That's 4 thou mill.

    From C99 onwards you can use long long int to give you 63 (or 64
    for unsigned) value bits - printf with %lld or %llu. Roughly 9
    mill mill mill and 18 mill mill mill respectively.



    I suspected that, but was not sure, so suggested to DFS a type that I am
    sure about.
    In my own test program I used 'long long', because I knew that on my
    system it is 64-bit. I tend to know such things about systems, I use.
    More interesting question is "How do *you* know about this newfangled gibberish?" :-)

  • From Richard Heathfield@21:1/5 to Michael S on Wed Mar 19 19:03:04 2025
    On 19/03/2025 18:19, Michael S wrote:
    More interesting question is "How do *you* know about this newfangled gibberish?" :-)

    <g> I discovered Usenet in either 1998 or perhaps more likely
    1999. Back then, this newfangled gibberish was all clc ever
    talked about. That and FAQ 1.22, which is a real doozy.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

  • From DFS@21:1/5 to James Kuyper on Wed Mar 19 15:06:56 2025
    On 3/19/2025 1:40 PM, James Kuyper wrote:
    On 3/19/25 13:23, DFS wrote:
    ...
        int64_t n = 0, max = 0, thismax = 0;
    ...
        printf("\nmax n = %lld reached at input = %d\n", max, input);
    ...
    You'll get compilation warnings about the printf specifier used with
    int64_t.

    Not if you use the correct specifier:
    #include <inttypes.h>
    printf("\nmax n = %" PRId64 " reached at input = %d\n", max, input);


    I saw that online, but didn't want to add another include.

    But it does make the code slightly more portable - with those changes
    the code now compiles and runs cleanly under Windows tcc and Linux gcc.

    Thanks.

  • From Tim Rentsch@21:1/5 to DFS on Wed Mar 19 12:52:16 2025
    DFS <nospam@dfs.com> writes:

    On 3/19/2025 1:27 AM, Tim Rentsch wrote:

    DFS <nospam@dfs.com> writes:

    On 3/18/2025 11:07 PM, Tim Rentsch wrote:


    Have you thought about how large the value of 'n' can
    become inside the while() loop?

    I was too smug in my first reply. [...]

    Yes, I knew that already. Did you think I asked the question
    without having first investigated the problem?

    I wouldn't presume.

    Did you investigate first?

    Of course.

    I just now did:

    gcc on Kali Linux (in Windows WSL)

    run it: $ ./weird start stop

    $time ./weird 1 1000000
    <1000000 lines will be output>
    max n = 56991483520 reached at input = 704511

    real 0m5.792s



    code
    -----------------------------------------------------------------
    // If n is even, divide it by two.
    // If n is odd, multiply it by three and add one.
    // Repeat until n is one.
    // example: the sequence for n=3 is 3 10 5 16 8 4 2 1

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    int main(int argc, char *argv[])
    {
    int steps, input;
    int startN = atoi(argv[1]);
    int stopN = atoi(argv[2]);
    int64_t n = 0, max = 0, thismax = 0;

    for (int i = startN; i <= stopN; i++) {

    n = i;
    steps = 1;
    thismax = n;
    while(1) {

    if((n % 2) == 0)
    {n /= 2;}
    else
    {n = (n * 3) + 1;}

    if (n > max) {max = n; input = i;}
    if (n > thismax) {thismax = n;}
    steps++;

    if (i == 1) {
    printf("input 1, max n = 1, steps = 1\n");
    break;
    }

    if(n == 1) {
    printf("input %d, max n = %6lld, steps = %4d\n",
    i, thismax, steps);
    break;
    }
    }


    }
    printf("\nmax n = %lld reached at input = %d\n", max, input);
    return 0;
    }

    -----------------------------------------------------------------

    You'll get compilation warnings about the printf specifier used with
    int64_t.

    I recommend using unsigned long long rather than int64_t. After
    all, all values reached are guaranteed to be non-negative. And
    there is no confusion or uncertainty about what conversion sequence
    to use in the call to printf().

  • From Tim Rentsch@21:1/5 to Michael S on Wed Mar 19 12:59:13 2025
    Michael S <already5chosen@yahoo.com> writes:

    On Wed, 19 Mar 2025 00:38:44 -0400
    DFS <nospam@dfs.com> wrote:

    On 3/18/2025 11:07 PM, Tim Rentsch wrote:


    Have you thought about how large the value of 'n' can
    become inside the while() loop?

    I was too smug in my first reply. After Keith pointed out I needed
    to read from stdin, I submitted the code again and it passed some
    tests but failed with 'OUTPUT LIMIT EXCEEDED' when n = 159487.

    Updating int to long worked, and now I'm bona fide!

    So thanks.

    What you did happens to be sufficient for a particular environment (supposedly, x86-64 Linux) used both by yourself and by the server that
    tests results.
    In more general case, 'long' is not guaranteed to handle numbers in
    range up to 18,997,161,173 that can happen in this test.

    The number 18997161173 is odd. The largest value reached is three
    times that, plus 1, which is 56991483520.

    Something like int64_t would be safer.

    Using unsigned long long is safer still, and easier, because there
    is no need for hoop-jumping to print them out with printf.

  • From Michael S@21:1/5 to Tim Rentsch on Wed Mar 19 22:12:27 2025
    On Wed, 19 Mar 2025 12:59:13 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:

    On Wed, 19 Mar 2025 00:38:44 -0400
    DFS <nospam@dfs.com> wrote:

    On 3/18/2025 11:07 PM, Tim Rentsch wrote:


    Have you thought about how large the value of 'n' can
    become inside the while() loop?

    I was too smug in my first reply. After Keith pointed out I needed
    to read from stdin, I submitted the code again and it passed some
    tests but failed with 'OUTPUT LIMIT EXCEEDED' when n = 159487.

    Updating int to long worked, and now I'm bona fide!

    So thanks.

    What you did happens to be sufficient for a particular environment (supposedly, x86-64 Linux) used both by yourself and by the server
    that tests results.
    In more general case, 'long' is not guaranteed to handle numbers in
    range up to 18,997,161,173 that can happen in this test.

    The number 18997161173 is odd. The largest value reached is three
    times that, plus 1, which is 56991483520.


    Yes, my mistake.
    I only looked for maximal odd number in the sequence. Forgot about
    even numbers.

    Something like int64_t would be safer.

    Using unsigned long long is safer still, and easier, because there
    is no need for hoop-jumping to print them out with printx.

    I explained the reason in the reply to Richard Heathfield.

  • From Tim Rentsch@21:1/5 to Muttley@DastardlyHQ.org on Wed Mar 19 13:13:45 2025
    Muttley@DastardlyHQ.org writes:

    On Wed, 19 Mar 2025 09:03:33 -0400
    DFS <nospam@dfs.com> wibbled:

    On 3/19/2025 6:15 AM, bart wrote:

    On 19/03/2025 01:38, DFS wrote:

    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/

    For instance: Weird Algorithm
    https://cses.fi/problemset/task/1068

    This is related to the Collatz Conjecture. What's weird is not
    mentioning it.

    I wouldn't have known it was a famous math conjecture, but no
    doubt the author of the problem did.

    Reading wikipedia it looks like one of those dull problems
    mathematicians think up when they've got too much free time on
    their hands.

    The 3n+1 problem, as it is sometimes called, is interesting
    because it is easy to state and easy to understand, even without
    any mathematical training beyond grade school, and yet has
    resisted the efforts of many of the best mathematicians in the
    world to try to prove it. It seems like it should be easy, but
    it is in fact incredibly difficult, based on almost 100 years of
    experience.

    If you try looking at it and see if you can make some sort of
    dent in the problem you may find it more interesting than your
    initial impression suggests.

    Related problem: consider a class of analogous problems, where
    instead of 3n+1 we use 3n+k, for k positive and odd. Question:
    for which values of k does the 3n+k algorithm have multiple loops
    rather than just one?

    (I acknowledge the above posting to be off topic, and ask the
    group for forgiveness for this transgression.)

  • From DFS@21:1/5 to Keith Thompson on Wed Mar 19 16:45:54 2025
    On 3/19/2025 4:53 AM, Keith Thompson wrote:
    Ike Naar <ike@sdf.org> writes:
    On 2025-03-19, DFS <nospam@dfs.com> wrote:

    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/


    I've been playing with some of these myself.

    Are you just looking for ones you find interesting?

    I'm starting at the top and going down 1 by 1, no matter how simple.




    I'm actually using
    scanf() for integer input, something I wouldn't do in normal code
    (because the behavior is undefined if the scanned value is out
    of range). This is a rare case where it's safe to assume that
    stdin has nothing harmful.



    My very 1st submission for 'Missing Number' was accepted, but each use
    of scanf() generated a warning:

    warning: ignoring return value of 'scanf' declared with attribute 'warn_unused_result' [-Wunused-result]
    30 | scanf("%d", &nums[i]);


    spoiler: code below



























    ===============================================================
    //identify the missing number in a set of consecutive integers

    #include <stdio.h>
    #include <stdlib.h>

    //comparator function used with qsort
    int compareint (const void * a, const void * b)
    {
    if (*(int*)a > *(int*)b) return 1;
    else if (*(int*)a < *(int*)b) return -1;
    else return 0;
    }

    int main(int argc, char *argv[])
    {
    //vars
    int i = 0, N = 0;

    //line 1 from stdin: number of elements + 1
    scanf("%d", &N);
    int *nums = malloc(N * sizeof(int));

    //line 2 from stdin: list of elements
    for (i = 0; i < N-1; i++) {
    scanf("%d", &nums[i]);
    }

    //sort the array
    qsort(nums, N-1, sizeof(int), compareint); //only N-1 values were read

    //identify missing number
    int missing = N;
    for(i = 0; i < N-2; i++) { //nums holds N-1 elements; last valid index is N-2
    if(nums[i+1] - nums[i] > 1) {
    missing = nums[i] + 1;
    break;
    }
    }

    //print solution
    printf("%d\n", missing);

    //free and end
    free(nums);
    return 0;
    }
    ===============================================================

  • From Richard Heathfield@21:1/5 to DFS on Wed Mar 19 21:21:27 2025
    On 19/03/2025 20:45, DFS wrote:
    On 3/19/2025 4:53 AM, Keith Thompson wrote:

    <snip>

    I'm actually using
    scanf() for integer input, something I wouldn't do in normal code
    (because the behavior is undefined if the scanned value is out
    of range).  This is a rare case where it's safe to assume that
    stdin has nothing harmful.



    My very 1st submission for 'Missing Number' was accepted, but
    each use of scanf() generated a warning:

    Like Keith, I don't usually use scanf.

    My preference is to read the number into a text buffer using
    fgets, and then convert it using strtoul if I can, or strtol if I
    have to allow for negatives.

    //identify the missing number in a set of consecutive integers

    Not sure why you're sorting and malloc-ing.

    Loop through the input, adding as you go.
    Let the first number be x.
    Let the last number be y.
    Let the total of all numbers be t.
    Let m = ((y*(y+1))-(x*(x-1)))/2 - t;

    "The missing number is: %lld\n", m

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

  • From Richard Heathfield@21:1/5 to Richard Heathfield on Wed Mar 19 21:35:06 2025
    On 19/03/2025 21:21, Richard Heathfield wrote:
    On 19/03/2025 20:45, DFS wrote:
    On 3/19/2025 4:53 AM, Keith Thompson wrote:

    <snip>

    I'm actually using
    scanf() for integer input, something I wouldn't do in normal code
    (because the behavior is undefined if the scanned value is out
    of range).  This is a rare case where it's safe to assume that
    stdin has nothing harmful.



    My very 1st submission for 'Missing Number' was accepted, but
    each use of scanf() generated a warning:

    Like Keith, I don't usually use scanf.

    My preference is to read the number into a text buffer using
    fgets, and then convert it using strtoul if I can, or strtol if I
    have to allow for negatives.

    //identify the missing number in a set of consecutive integers

    Not sure why you're sorting and malloc-ing.

    Loop through the input, adding as you go.
    Let the first number be x.
    Let the last number be y.
    Let the total of all numbers be t.
    Let m = ((y*(y+1))-(x*(x-1)))/2 - t;

    "The missing number is: %lld\n", m

    It occurs to me now that maybe you're sorting in case the numbers
    aren't as consecutive as they're supposed to be, in which case it
    can still be done without sorting, but it's very, /very/ slightly
    harder because you have to keep track of the lowest and highest
    seen so far:

    Loop through the input, adding as you go and remembering your two
    extreme values.
    Let the lowest number be x.
    Let the highest number be y.
    Let the total of all numbers be t.
    Let m = ((y*(y+1))-(x*(x-1)))/2 - t;

    "The missing number is: %lld\n", m

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

  • From DFS@21:1/5 to Keith Thompson on Wed Mar 19 22:34:04 2025
    On 3/19/2025 5:56 PM, Keith Thompson wrote:
    DFS <nospam@dfs.com> writes:
    On 3/19/2025 4:53 AM, Keith Thompson wrote:


    I used a different approach. I'll encode the description of the
    solution using rot13.

    The program is given an integer n and a list of n-1 integers, not
    necessarily ordered.

    The task is to determine which number is missing from the list.

    Gurer'f n jryy xabja sbezhyn sbe gur fhz bs nyy a ahzoref sebz bar
    gb a. Pbzchgr gur rkcrpgrq fhz, gura fhogenpg gur fhz bs gur ahzoref
    tvira ba gur frpbaq vachg yvar. Gur qvssrerapr vf gur zvffvat ahzore.



    That's dead simple! It works because the input numbers start with 1.

    I'll give it a quick try.

    edit: my attempt passed 9 of 14 tests, but fails on large N because
    it's not calculating sum(1..N) correctly. Line 10. See anything?


    //identify the missing number in a set of otherwise consecutive integers
    #include <stdio.h>
    #include <stdlib.h>
    int main(int argc, char *argv[]) {
    int i = 0, N = 0, temp = 0;
    int64_t totN = 0, totInputs = 0;
    scanf("%d", &N); //number of elements
    for (i = 0; i < N-1; i++) { //list of elements
    scanf("%d", &temp); //to temp var
    totInputs += temp; //running total
    }
    totN = (N * (N + 1)) / 2; //sum of numbers 1 to N
    printf("N %lld\n",N);
    printf("tot N %lld\n",totN);
    printf("tot inputs %lld\n",totInputs);
    printf("%d\n", totN - totInputs); //solution
    return 0;
    }


    sample output
    N 50000
    tot N -897458648 FAIL (should be 1250025000)
    tot inputs 1250017374 GOOD
    -2147476022 FAIL (should be 7626)

  • From Muttley@DastardlyHQ.org@21:1/5 to All on Thu Mar 20 09:50:17 2025
    On Wed, 19 Mar 2025 13:13:45 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wibbled:
    Muttley@DastardlyHQ.org writes:

    On Wed, 19 Mar 2025 09:03:33 -0400
    DFS <nospam@dfs.com> wibbled:

    On 3/19/2025 6:15 AM, bart wrote:

    On 19/03/2025 01:38, DFS wrote:

    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/

    For instance: Weird Algorithm
    https://cses.fi/problemset/task/1068

    This is related to the Collatz Conjecture. What's weird is not
    mentioning it.

    I wouldn't have known it was a famous math conjecture, but no
    doubt the author of the problem did.

    Reading wikipedia it looks like one of those dull problems
    mathematicians think up when they've got too much free time on
    their hands.

    The 3n+1 problem, as it is sometimes called, is interesting
    because it is easy to state and easy to understand, even without
    any mathematical training beyond grade school, and yet has
    resisted the efforts of many of the best mathematicians in the
    world to try to prove it. It seems like it should be easy, but
    it is in fact incredibly difficult, based on almost 100 years of
    experience.

    I guess some maths problems can't be proven directly, they have to be - for want of a better word - run. A bit like the halting problem in CS.

  • From Tim Rentsch@21:1/5 to Michael S on Thu Mar 20 05:09:21 2025
    Michael S <already5chosen@yahoo.com> writes:

    On Wed, 19 Mar 2025 17:38:12 +0000
    Richard Heathfield <rjh@cpax.org.uk> wrote:

    On 19/03/2025 17:23, DFS wrote:

    On 3/19/2025 5:55 AM, Michael S wrote:

    On Wed, 19 Mar 2025 00:38:44 -0400
    DFS <nospam@dfs.com> wrote:

    On 3/18/2025 11:07 PM, Tim Rentsch wrote:


    Have you thought about how large the value of 'n' can
    become inside the while() loop?

    I was too smug in my first reply. After Keith pointed out I
    needed
    to read from stdin, I submitted the code again and it passed some
    tests but failed with 'OUTPUT LIMIT EXCEEDED' when n = 159487.

    Updating int to long worked, and now I'm bona fide!

    So thanks.

    What you did happens to be sufficient for a particular environment
    (supposedly, x86-64 Linux) used both by yourself and by the
    server that
    tests results.
    In more general case, 'long' is not guaranteed to handle
    numbers in
    range up to 18,997,161,173 that can happen in this test.

    How did you determine that?

    By the language definition.

    Well, not exactly.
    I never read C Standard docs except the very first one, which I read
    more than 33 years ago, so not very likely to remember it literally.
    Let's say that I know this particular bit of trivia from 1st hand
    experience and from reading few ABI definitions.

    ++++++++++++++++++++++++++++++++++++++++++++++

    5.2.4.2.1 Sizes of integer types <limits.h>

    [...]

    — minimum value for an object of type long int
    LONG_MIN
    -2147483647 // −(2^31 − 1)

    — maximum value for an object of type long int
    LONG_MAX
    +2147483647 // 2^31 − 1

    ++++++++++++++++++++++++++++++++++++++++++++++

    That is, the long int type is required to have a sign bit and at
    least 31 value bits, giving a guaranteed minimum range of
    -2147483647 to 2147483647. That's 2 thou mill.

    You can squeeze another bit out of it by going unsigned: 0 to
    4294967295. That's 4 thou mill.

    From C99 onwards you can use long long int to give you 63 (or 64
    for unsigned) value bits - printf with %lld or %llu. Roughly 9
    mill mill mill and 18 mill mill mill respectively.

    I suspected that, but was not sure, so suggested to DFS a type that I am
    sure about.

    The width of char and [un]signed char must be at least 8 bits.
    The width of [un]signed short must be at least 16 bits.
    The width of [un]signed int must be at least 16 bits.
    The width of [un]signed long must be at least 32 bits.
    The width of [un]signed long long must be at least 64 bits.

    That should be easy enough to remember now.

  • From Tim Rentsch@21:1/5 to Muttley@DastardlyHQ.org on Thu Mar 20 04:59:11 2025
    Muttley@DastardlyHQ.org writes:

    On Wed, 19 Mar 2025 13:13:45 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wibbled:

    Muttley@DastardlyHQ.org writes:

    On Wed, 19 Mar 2025 09:03:33 -0400
    DFS <nospam@dfs.com> wibbled:

    On 3/19/2025 6:15 AM, bart wrote:

    On 19/03/2025 01:38, DFS wrote:

    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/

    For instance: Weird Algorithm
    https://cses.fi/problemset/task/1068

    This is related to the Collatz Conjecture. What's weird is not
    mentioning it.

    I wouldn't have known it was a famous math conjecture, but no
    doubt the author of the problem did.

    Reading wikipedia it looks like one of those dull problems
    mathematicians think up when they've got too much free time on
    their hands.

    The 3n+1 problem, as it is sometimes called, is interesting
    because it is easy to state and easy to understand, even without
    any mathematical training beyond grade school, and yet has
    resisted the efforts of many of the best mathematicians in the
    world to try to prove it. It seems like it should be easy, but
    it is in fact incredibly difficult, based on almost 100 years of
    experience.

    I guess some maths problems can't be proven directly, they have to
    be - for want of a better word - run. A bit like the halting
    problem in CS.

    Indeed it is the case that some statements are not provable, even
    though they may still be true. But the 3n+1 problem is unlikely to
    be in that category; it is just very hard to prove.

  • From Tim Rentsch@21:1/5 to Richard Heathfield on Thu Mar 20 06:06:37 2025
    Richard Heathfield <rjh@cpax.org.uk> writes:

    On 19/03/2025 05:02, DFS wrote:

    On 3/19/2025 12:51 AM, Richard Heathfield wrote:

    On 19/03/2025 04:42, DFS wrote:

    Pretty easy fixes:

    1 use scanf()
    2 update int to long
    3 handle special case of n = 1
    4 instead of collecting the results in a char variable, I print
    them as they're calculated

    You've also fixed another glitch which may or may not have been
    significant:

    $ file post2.txt
    post2.txt: news, ASCII text

    So wherever that UTF-8 came from, it's gone now.

    Cool. Thanks for looking at it.

    Hey, how are your C book (from 2000) sales? Thought about a new
    edition?

    I'm not sure a new edition is necessary, but if it is to be
    written it would be better served by someone like Keith or Tim,
    both of whom have (as I have not) kept up with the million-and-one
    changes that appear to have assailed the simple language I once
    enjoyed.

    The C99 standard has a list of 54 what it calls "major changes",
    although IMO many or most of those are fairly minor. There are also
    other differences relative to C90, but most of them are simply
    clarifications or slight changes in wording.

    I went through the list of major changes and selected out the items
    I consider the most significant. Here they are, organized into
    several related areas.

    Language constructs taken out from C90:
    REMOVED: implicit int
    REMOVED: implicit function declaration

    Comments:
    // comments are now allowed

    Preprocessor:
    empty macro arguments
    macros with variable number of arguments

    Lexical:
    improved lower bounds for identifier length -
    went from 6 and 31 (for global and non-global) in C90
    to 31 and 63 in C99

    Types added:
    boolean type _Bool
    complex numbers
    long long and unsigned long long

    New language constructs:
    more general initialization for aggregates and unions
    compound literals
    designated initializers
    mixing of declarations and code (including in for() initializers)

    Array related:
    arrays and array types with non-constant extents, aka
    VLA, for variable length arrays, and
    VMT, for variably modified type
    flexible array members (recognizing the "struct hack")

    Miscellaneous:
    'inline' functions
    return with expression now disallowed in void function,
    and vice versa

    I included the "array related" items, and also complex numbers, as
    significant items, even though they aren't used very often. I think
    it's important to know about these features, despite their
    infrequent use. (Incidentally, both complex numbers and VLA/VMT
    were made optional in C11.)

  • From bart@21:1/5 to Tim Rentsch on Thu Mar 20 12:23:23 2025
    On 20/03/2025 12:09, Tim Rentsch wrote:
    Michael S <already5chosen@yahoo.com> writes:

    I suspected that, but was not sure, so suggested to DFS a type that I am
    sure about.

    The width of char and [un]signed char must be at least 8 bits.
    The width of [un]signed short must be at least 16 bits.
    The width of [un]signed int must be at least 16 bits.
    The width of [un]signed long must be at least 32 bits.
    The width of [un]signed long long must be at least 64 bits.

    That should be easy enough to remember now.

    That table suggests that any program mixing 'short' and 'int' is
    suspect. If 'int' doesn't need to store values beyond 16 bits, then why
    not use 'short'?

    'long' is another troublesome one. Code that genuinely needs it for
    32-bit values is surprisingly rare.

    In practice, most code now assumes that 'int' is 32 bits, and 'long' is inadvisedly used for 64 bits, since its actual width is typically:

    long

    Windows 32-bit 32 bits
    Windows 64-bit 32 bits
    Linux 32-bit 32 bits
    Linux 64-bit 64 bits

    My suggestion for writing code that is not going to run on 16-bit or
    lesser (or unusual) hardware is to assume:

    char 8 bits
    short 16 bits
    int 32 bits
    long long 64 bits

    and to forget 'long'.

  • From Tim Rentsch@21:1/5 to Michael S on Thu Mar 20 05:19:02 2025
    Michael S <already5chosen@yahoo.com> writes:

    On Wed, 19 Mar 2025 12:59:13 -0700
    Tim Rentsch <tr.17687@z991.linuxsc.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:

    On Wed, 19 Mar 2025 00:38:44 -0400
    DFS <nospam@dfs.com> wrote:

    On 3/18/2025 11:07 PM, Tim Rentsch wrote:


    Have you thought about how large the value of 'n' can
    become inside the while() loop?

    I was too smug in my first reply. After Keith pointed out I needed
    to read from stdin, I submitted the code again and it passed some
    tests but failed with 'OUTPUT LIMIT EXCEEDED' when n = 159487.

    Updating int to long worked, and now I'm bona fide!

    So thanks.

    What you did happens to be sufficient for a particular environment
    (supposedly, x86-64 Linux) used both by yourself and by the server
    that tests results.
    In more general case, 'long' is not guaranteed to handle numbers in
    range up to 18,997,161,173 that can happen in this test.

    The number 18997161173 is odd. The largest value reached is three
    times that, plus 1, which is 56991483520.

    Yes, my mistake.
    I only looked for maximal odd number in the sequence. Forgot about
    even numbers.

    Yes, I realized that, after the fact.

    Something like int64_t would be safer.

    Using unsigned long long is safer still, and easier, because there
    is no need for hoop-jumping to print them out with printf.

    I explained the reason in the reply to Richard Heathfield.

    Yes I saw that. Part of my motivation for the comment is to
    augment the knowledge of those who aren't sure.

  • From Tim Rentsch@21:1/5 to Michael S on Thu Mar 20 05:15:36 2025
    Michael S <already5chosen@yahoo.com> writes:

    On Tue, 18 Mar 2025 21:38:55 -0400
    DFS <nospam@dfs.com> wrote:

    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/

    For instance: Weird Algorithm
    https://cses.fi/problemset/task/1068

    It is not an interesting programming exercise. But it looks to me like a challenging math exercise. I mean, how could we give a not too
    pessimistic estimate for upper bound of length of the sequence that
    starts at given n without running a full sequence? Or estimate for
    maximal value in the sequence?
    So far, I found no answers.

    You may console yourself with the knowledge that no one else
    has either, even some of the most brilliant mathematicians
    of the last hundred years. In fact it isn't even known that
    all starting points eventually terminate; as far as what has
    been proven goes, some starting points might just keep going
    up forever.

  • From bart@21:1/5 to Scott Lurndal on Thu Mar 20 14:00:29 2025
    On 20/03/2025 13:36, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 20/03/2025 12:09, Tim Rentsch wrote:
    Michael S <already5chosen@yahoo.com> writes:

    I suspected that, but was not sure, so suggested to DFS a type that I am sure about.

    The width of char and [un]signed char must be at least 8 bits.
    The width of [un]signed short must be at least 16 bits.
    The width of [un]signed int must be at least 16 bits.
    The width of [un]signed long must be at least 32 bits.
    The width of [un]signed long long must be at least 64 bits.

    That should be easy enough to remember now.

    That table suggests that any program mixing 'short' and 'int' is
    suspect. If 'int' doesn't need to store values beyond 16 bits, then why
    not use 'short'?

    'long' is another troublesome one. If the need is for 32-bit values,
    then it's surprisingly rare in source code.

    Long is useless, because Microsoft made the mistake of defining
    'long' as 32-bits on 64-bit architectures, while unix and linux
    define it as 64-bits.

    Unix and Linux define it as 32 bits on 32-bit architectures and 64 bits
    on 64-bit ones.

    So long can't be used in programs intended to be portable to
    other operating systems.

    As defined by Unix/Linux, long is not portable between different
    Unix/Linux OSes if they run on a different architecture.

    As defined by Microsoft, long is portable between Windows OSes even on different architectures.

    'long long' is defined as a 64-bit
    type in both Windows and Linux.

    Using the defined width types is far better (e.g. uint64_t);
    even if the standard allows the type to not exist on a particular implementation. No useful implementation would fail to define
    uint64_t in these modern times.

    The point was made earlier on that int64_t types are awkward to work
    with; they need that stdint.h header to even exist, and they need those
    ugly macros in inttypes.h to print out their values.

    This is why it is popular to just do:

    typedef long long int i64;

    and to use %lld to print, and -LL on literals to force a 64-bit type.

    stdint.h et al are just ungainly bolt-ons, not fully supported by the
    language.

    The problem with 'long' manifests itself there too, since on Linux,
    'int64_t' appears to be commonly defined on top of 'long' for 32-bit
    systems, and 'long long' for 64-bit ones.

    So somebody eschewing those ugly macros and using "%ld" to print an
    'int64_t' type, will find it doesn't work when run on a 64-bit system,
    where "%lld" is needed. Same problem with using '1L' to define an
    int64_t literal.

  • From Richard Heathfield@21:1/5 to Tim Rentsch on Thu Mar 20 13:27:36 2025
    On 20/03/2025 13:06, Tim Rentsch wrote:
    Richard Heathfield <rjh@cpax.org.uk> writes:


    <snip>


    I'm not sure a new edition is necessary, but if it is to be
    written it would be better served by someone like Keith or Tim,
    both of whom have (as I have not) kept up with the million-and-one
    changes that appear to have assailed the simple language I once
    enjoyed.

    The C99 standard has a list of 54 of what it calls "major changes",
    although IMO many or most of those are fairly minor. There are also
    other differences relative to C90, but most of them are simply
    clarifications or slight changes in wording.

    Those I largely recall from discussions at the time, but I dare
    to conclude that your lack of a reference to C11, C17, and C23
    means that they had a lesser effect on the language than I'd feared.

    I see now from casual research that C17 was predominantly a bug
    fix, but that C11 and C23 were somewhat busier.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

  • From Scott Lurndal@21:1/5 to bart on Thu Mar 20 13:36:43 2025
    bart <bc@freeuk.com> writes:
    On 20/03/2025 12:09, Tim Rentsch wrote:
    Michael S <already5chosen@yahoo.com> writes:

    I suspected that, but was not sure, so suggested to DFS a type that I am >>> sure about.

    The width of char and [un]signed char must be at least 8 bits.
    The width of [un]signed short must be at least 16 bits.
    The width of [un]signed int must be at least 16 bits.
    The width of [un]signed long must be at least 32 bits.
    The width of [un]signed long long must be at least 64 bits.

    That should be easy enough to remember now.

    That table suggests that any program mixing 'short' and 'int' is
    suspect. If 'int' doesn't need to store values beyond 16 bits, then why
    not use 'short'?

    'long' is another troublesome one. If the need is for 32-bit values,
    then it's surprisingly rare in source code.

    Long is useless, because Microsoft made the mistake of defining
    'long' as 32-bits on 64-bit architectures, while unix and linux
    define it as 64-bits.

    So long can't be used in programs intended to be portable to
    other operating systems. 'long long' is defined as a 64-bit
    type in both Windows and Linux.

    Using the defined width types is far better (e.g. uint64_t);
    even if the standard allows the type to not exist on a particular implementation. No useful implementation would fail to define
    uint64_t in these modern times.

  • From Michael S@21:1/5 to Scott Lurndal on Thu Mar 20 16:35:37 2025
    On Thu, 20 Mar 2025 13:36:43 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    bart <bc@freeuk.com> writes:
    On 20/03/2025 12:09, Tim Rentsch wrote:
    Michael S <already5chosen@yahoo.com> writes:

    I suspected that, but was not sure, so suggested to DFS a type
    that I am sure about.

    The width of char and [un]signed char must be at least 8 bits.
    The width of [un]signed short must be at least 16 bits.
    The width of [un]signed int must be at least 16 bits.
    The width of [un]signed long must be at least 32 bits.
    The width of [un]signed long long must be at least 64 bits.

    That should be easy enough to remember now.

    That table suggests that any program mixing 'short' and 'int' is
    suspect. If 'int' doesn't need to store values beyond 16 bits, then
    why not use 'short'?

    'long' is another troublesome one. If the need is for 32-bit values,
    then it's surprisingly rare in source code.

    Long is useless, because Microsoft made the mistake of defining
    'long' as 32-bits on 64-bit architectures, while unix and linux
    define it as 64-bits.

    So long can't be used in programs intended to be portable to
    other operating systems. 'long long' is defined as a 64-bit
    type in both Windows and Linux.

    Using the defined width types is far better (e.g. uint64_t);
    even if the standard allows the type to not exist on a particular implementation. No useful implementation would fail to define
    uint64_t in these modern times.


    Unfortunately, gcc people made a mess out of it as well, defining (on
    64-bit Linux platforms) int64_t/uint64_t as aliases to respective 'long'
    types instead of aliasing them to 'long long', to be the same on all
    platforms that matter.
    I'd guess they were afraid of being accused of being sensible.

  • From Scott Lurndal@21:1/5 to bart on Thu Mar 20 14:32:10 2025
    bart <bc@freeuk.com> writes:
    On 20/03/2025 13:36, Scott Lurndal wrote:
    then it's surprisingly rare in source code.

    Long is useless, because Microsoft made the mistake of defining
    'long' as 32-bits on 64-bit architectures, while unix and linux
    define it as 64-bits.

    Unix and Linux define it as 32 bits on 32-bit architectures and 64 bits
    on 64-bit ones.

    That's what I said. Thanks for the confirmation. It doesn't change
    the fact that Microsoft didn't define long as 64-bit on 64-bit architectures, creating incompatibilities that didn't exist in the 32-bit world
    between the two dominant operating systems.

    Remainder of bart's typical windows-centric complaints elided.

  • From Muttley@DastardlyHQ.org@21:1/5 to All on Thu Mar 20 14:42:06 2025
    On Thu, 20 Mar 2025 13:36:43 GMT
    scott@slp53.sl.home (Scott Lurndal) wibbled:
    bart <bc@freeuk.com> writes:
    On 20/03/2025 12:09, Tim Rentsch wrote:
    Michael S <already5chosen@yahoo.com> writes:

    I suspected that, but was not sure, so suggested to DFS a type that I am >>>> sure about.

    The width of char and [un]signed char must be at least 8 bits.
    The width of [un]signed short must be at least 16 bits.
    The width of [un]signed int must be at least 16 bits.
    The width of [un]signed long must be at least 32 bits.
    The width of [un]signed long long must be at least 64 bits.

    That should be easy enough to remember now.

    That table suggests that any program mixing 'short' and 'int' is
    suspect. If 'int' doesn't need to store values beyond 16 bits, then why
    not use 'short'?

    'long' is another troublesome one. If the need is for 32-bit values,
    then it's surprisingly rare in source code.

    Long is useless, because Microsoft made the mistake of defining
    'long' as 32-bits on 64-bit architectures, while unix and linux

    Probably for backwards compatibility with 32 bit code that did bit twiddling with longs.

  • From Michael S@21:1/5 to Richard Heathfield on Thu Mar 20 16:50:36 2025
    On Thu, 20 Mar 2025 13:27:36 +0000
    Richard Heathfield <rjh@cpax.org.uk> wrote:


    I see now from casual research that C17 was predominantly a bug
    fix, but that C11 and C23 were somewhat busier.


    As far as the basic language is concerned, the biggest change in C23 is
    the meaning of the declaration 'bar_t foo()'. In C23 it is equivalent to
    'bar_t foo(void)', as in C++.

  • From Muttley@DastardlyHQ.org@21:1/5 to All on Thu Mar 20 14:50:25 2025
    On Thu, 20 Mar 2025 14:00:29 +0000
    bart <bc@freeuk.com> wibbled:
    On 20/03/2025 13:36, Scott Lurndal wrote:
    Using the defined width types is far better (e.g. uint64_t);
    even if the standard allows the type to not exist on a particular
    implementation. No useful implementation would fail to define
    uint64_t in these modern times.

    The point was made earlier on that int64_t types are awkward to work
    with; they need that stdint.h header to even exist, and they need those
    ugly macros in inttypes.h to print out their values.

    I've never found them awkward to work with and every *nix I've ever developed on had stdint.h. If Windows doesn't thats Window's problem.

    This is why it is popular to just do:

    typedef long long int i64;

    Popular maybe in WindowsWorld. Why as a unix dev would I do that when
    standard typedefs already exist for this exact purpose?

    stdint.h et al are just ungainly bolt-ons, not fully supported by the >language.

    Whats that supposed to mean? The core language itself supports very little.
    Do you not use libraries at all?

    So somebody eschewing those ugly macros and using "%ld" to print an

    What makes you think they're macros?

    MacOS:
    stdint.h
    _types/_uint64_t.h
    typedef unsigned long long uint64_t;
    Linux:
    stdint.h
    #if __WORDSIZE == 64
    typedef unsigned long int uint64_t;
    #else
    __extension__
    typedef unsigned long long int uint64_t;
    #endif

  • From Michael S@21:1/5 to Muttley@DastardlyHQ.org on Thu Mar 20 16:59:14 2025
    On Thu, 20 Mar 2025 14:50:25 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:


    What makes you think they're macros?


    PRIu64 and PRId64 are macros. They are ugly.
    I don't agree with people that call int64_t ugly, but PRIxN is
    different.

  • From bart@21:1/5 to Scott Lurndal on Thu Mar 20 15:11:34 2025
    On 20/03/2025 14:32, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 20/03/2025 13:36, Scott Lurndal wrote:
    then it's surprisingly rare in source code.

    Long is useless, because Microsoft made the mistake of defining
    'long' as 32-bits on 64-bit architectures, while unix and linux
    define it as 64-bits.

    Unix and Linux define it as 32 bits on 32-bit architectures and 64 bits
    on 64-bit ones.

    That's what I said. Thanks for the confirmation. It doesn't change
    the fact that Microsoft didn't define long as 64-bit on 64-bit architectures, creating incompatibilities that didn't exist in the 32-bit world
    between the two dominant operating systems.

    Remainder of bart's typical windows-centric complaints elided.


    But your typical anti-Microsoft remarks are fine? Since you called it a 'mistake' to keep 'long' the same between 32/64-bit machines, even
    though both OSes kept 'int' the same.

    It was just a choice.

    Actually, my remarks didn't criticise either MS or Linux; just stated
    some facts. I did criticise STDINT types.

  • From Muttley@DastardlyHQ.org@21:1/5 to All on Thu Mar 20 15:16:05 2025
    On Thu, 20 Mar 2025 16:59:14 +0200
    Michael S <already5chosen@yahoo.com> wibbled:
    On Thu, 20 Mar 2025 14:50:25 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:


    What makes you think they're macros?


    PRIu64 and PRId64 are macros. They are ugly.

    Never even heard of them. Looking them up I can't see much use for them
    frankly except if you're starting out on an unknown system and can't find out the info any other way which would be ... odd.

  • From Michael S@21:1/5 to Muttley@DastardlyHQ.org on Thu Mar 20 17:29:22 2025
    On Thu, 20 Mar 2025 15:16:05 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:

    On Thu, 20 Mar 2025 16:59:14 +0200
    Michael S <already5chosen@yahoo.com> wibbled:
    On Thu, 20 Mar 2025 14:50:25 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:


    What makes you think they're macros?


    PRIu64 and PRId64 are macros. They are ugly.

    Never even heard of them. Looking them up I can't see much use for
    them frankly except if you're starting out on an unknown system and
    can't find out the info any other way which would be ... odd.



    Then how exactly do you printf a value of type int64_t in code that is
    expected to pass [gcc] compilation with no warnings on two platforms,
    one of which is 64-bit Unix/Linux and the other is just about anything
    else?

  • From bart@21:1/5 to Muttley@DastardlyHQ.org on Thu Mar 20 15:40:20 2025
    On 20/03/2025 14:50, Muttley@DastardlyHQ.org wrote:
    On Thu, 20 Mar 2025 14:00:29 +0000
    bart <bc@freeuk.com> wibbled:
    On 20/03/2025 13:36, Scott Lurndal wrote:
    Using the defined width types is far better (e.g. uint64_t);
    even if the standard allows the type to not exist on a particular
    implementation. No useful implementation would fail to define
    uint64_t in these modern times.

    The point was made earlier on that int64_t types are awkward to work
    with; they need that stdint.h header to even exist, and they need those
    ugly macros in inttypes.h to print out their values.

    I've never found them awkward to work with and every *nix I've ever developed on had stdint.h. If Windows doesn't thats Window's problem.

    stdint.h is part of a C compiler; it's nothing to do with Windows. My
    remark was about having to write '#include <stdint.h>' on each one of the
    100 modules of your project if you want to use basic language data types.

    That's unusual among mainstream languages.


    This is why it is popular to just do:

    typedef long long int i64;

    Popular maybe in WindowsWorld. Why as a unix dev would I do that when standard typedefs already exist for this exact purpose?

    They're popular everywhere:

    typedef uint32_t Uint32; // from SDL2

    typedef unsigned int GLuint; // from OpenGL

    typedef unsigned int l_uint32; // from Lua

    typedef unsigned int mz_uint32; // from MiniZ compression lib

    typedef uint32_t stbi__uint32; // from STB image library

    typedef uint8_t byte; // from arduino.h

    #ifndef UINT32_TYPE // from SQLite3
    # ifdef HAVE_UINT32_T
    # define UINT32_TYPE uint32_t
    # else
    # define UINT32_TYPE unsigned int
    # endif
    #endif

    While GTK2 uses guint32 instead of uin32_t (sorry I didn't have time to
    locate the correct header out of the nearly 700; this is from: https://docs.gtk.org/glib/types.html#guint32)

    Pretty much every other open source project I look at likes to define their
    own types!

    So I have to ask, why do think they do that?

    stdint.h et al are just ungainly bolt-ons, not fully supported by the
    language

    Whats that supposed to mean? The core language itself supports very little. Do you not use libraries at all?

    Literal suffixes such as -L and -ULL are in the core language, and each
    'L' refers to 'long'.

    While printf format modifiers such as the "ll" in "%lld" are well-established
    in those sets of library routines, they do not directly support stdint
    types (I think C23 may do something about that).

    Lots of horrible-looking macros that few even know about are supposed to
    be used. So people instead use l/ll or L/LL and cross their fingers.


    So somebody eschewing those ugly macros and using "%ld" to print an

    What makes you think they're macros?

    MacOS:
    stdint.h
    _types/_uint64_t.h
    typedef unsigned long long uint64_t;

    I didn't say they're macros. I said the types are defined on top the
    regular types, and macros are needed to print their values.

  • From Muttley@DastardlyHQ.org@21:1/5 to All on Thu Mar 20 15:55:21 2025
    On Thu, 20 Mar 2025 17:29:22 +0200
    Michael S <already5chosen@yahoo.com> wibbled:
    On Thu, 20 Mar 2025 15:16:05 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:

    On Thu, 20 Mar 2025 16:59:14 +0200
    Michael S <already5chosen@yahoo.com> wibbled:
    On Thu, 20 Mar 2025 14:50:25 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:


    What makes you think they're macros?


    PRIu64 and PRId64 are macros. They are ugly.

    Never even heard of them. Looking them up I can't see much use for
    them frankly except if you're starting out on an unknown system and
    can't find out the info any other way which would be ... odd.



    Then how exactly do you printf a value of type int64_t in code that is
    expected to pass [gcc] compilation with no warnings on two platforms,
    one of which is 64-bit Unix/Linux and the other is just about anything
    else?

    Just use %llu everywhere. Warnings only matter if they're important ones.

  • From Muttley@DastardlyHQ.org@21:1/5 to All on Thu Mar 20 15:57:44 2025
    On Thu, 20 Mar 2025 15:40:20 +0000
    bart <bc@freeuk.com> wibbled:
    On 20/03/2025 14:50, Muttley@DastardlyHQ.org wrote:
    On Thu, 20 Mar 2025 14:00:29 +0000
    bart <bc@freeuk.com> wibbled:
    On 20/03/2025 13:36, Scott Lurndal wrote:
    Using the defined width types is far better (e.g. uint64_t);
    even if the standard allows the type to not exist on a particular
    implementation. No useful implementation would fail to define
    uint64_t in these modern times.

    The point was made earlier on that int64_t types are awkward to work
    with; they need that stdint.h header to even exist, and they need those
    ugly macros in inttypes.h to print out their values.

    I've never found them awkward to work with and every *nix I've ever developed

    on had stdint.h. If Windows doesn't thats Window's problem.

    stdint.h is part of a C compiler; it's nothing to do with Windows. My
    remark was about having to write '#include <stdint.h>' on each one of the
    100 modules of your project if you want to use basic language data types.

    Or you could just include your own header that pulls in all the system ones
    and has all your own definitions too. Thats what most people do.

  • From Kaz Kylheku@21:1/5 to Muttley@DastardlyHQ.org on Thu Mar 20 16:14:54 2025
    On 2025-03-20, Muttley@DastardlyHQ.org <Muttley@DastardlyHQ.org> wrote:
    I guess some maths problems can't be proven directly, they have to be - for want of a better word - run. A bit like the halting problem in CS.

    The halting problem is a perfect example of a problem which *cannot* be
    proven by running anything.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

  • From Muttley@DastardlyHQ.org@21:1/5 to All on Thu Mar 20 16:29:43 2025
    On Thu, 20 Mar 2025 16:14:54 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wibbled:
    On 2025-03-20, Muttley@DastardlyHQ.org <Muttley@DastardlyHQ.org> wrote:
    I guess some maths problems can't be proven directly, they have to be - for >> want of a better word - run. A bit like the halting problem in CS.

    The halting problem is a perfect example of a problem which *cannot* be >proven by running anything.

    So if you run the program and it halts that doesn't prove that it will halt? Umm, ok.

  • From Kaz Kylheku@21:1/5 to Scott Lurndal on Thu Mar 20 16:20:28 2025
    On 2025-03-20, Scott Lurndal <scott@slp53.sl.home> wrote:
    Long is useless, because Microsoft made the mistake of defining
    'long' as 32-bits on 64-bit architectures, while unix and linux
    define it as 64-bits.

    long was once useful for avoiding the predicament int is as few as
    16 bits wide on some systems.

    This is an almost entirely obsolete concern.

    In code that assumes int >= 32 bits, but is otherwise intended
    to be portable, long serves no purpose.

    In comp.lang.c, we use long for anything that needs to go
    beyond 32767, but not beyond 2147483647. :)

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

  • From Kaz Kylheku@21:1/5 to Muttley@DastardlyHQ.org on Thu Mar 20 16:49:10 2025
    On 2025-03-20, Muttley@DastardlyHQ.org <Muttley@DastardlyHQ.org> wrote:
    On Thu, 20 Mar 2025 16:14:54 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wibbled:
    On 2025-03-20, Muttley@DastardlyHQ.org <Muttley@DastardlyHQ.org> wrote:
    I guess some maths problems can't be proven directly, they have to be - for >>> want of a better word - run. A bit like the halting problem in CS.

    The halting problem is a perfect example of a problem which *cannot* be >>proven by running anything.

    So if you run the program and it halts that doesn't prove that it will halt? Umm, ok.

    If you run a program and it has NOT halted so far, you don't know
    whether or not it halts. If it doesn't halt, you will wait forever. To
    figure that out, you have to resort to proof techniques, not just more
    waiting.

    Determining whether one program halts or not is not even the Halting
    Problem. The Halting Problem consists of the question: is there an
    algorithm which can decide, for any <P, I> (program, input) pairs from
    the universe of all possible programs and inputs, whether P(I) halts.

    The accepted result is that, no, there is no such decision algorithm.
    In short, the halting question is called undecidable.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

  • From Tim Rentsch@21:1/5 to bart on Thu Mar 20 11:33:41 2025
    bart <bc@freeuk.com> writes:

    On 20/03/2025 12:09, Tim Rentsch wrote:

    Michael S <already5chosen@yahoo.com> writes:

    I suspected that, but was not sure, so suggested to DFS a type that I am >>> sure about.

    The width of char and [un]signed char must be at least 8 bits.
    The width of [un]signed short must be at least 16 bits.
    The width of [un]signed int must be at least 16 bits.
    The width of [un]signed long must be at least 32 bits.
    The width of [un]signed long long must be at least 64 bits.

    That should be easy enough to remember now.

    That table suggests that any program mixing 'short' and 'int' is
    suspect.

    To me it does not. The table lists minimum values for all
    implementations, and not the most common values for typical
    implementations.

    If 'int' doesn't need to store values beyond 16 bits, then
    why not use 'short'?

    I expect any developer who has spent even a fairly short time
    learning C and writing C code knows the answer to this question,
    and does not need to consult the table shown above to make an
    appropriate choice between 'int' and 'short' in each of the
    circumstances where the question may occur.

  • From Tim Rentsch@21:1/5 to Richard Heathfield on Thu Mar 20 11:24:11 2025
    Richard Heathfield <rjh@cpax.org.uk> writes:

    On 20/03/2025 13:06, Tim Rentsch wrote:

    Richard Heathfield <rjh@cpax.org.uk> writes:

    <snip>

    I'm not sure a new edition is necessary, but if it is to be
    written it would be better served by someone like Keith or Tim,
    both of whom have (as I have not) kept up with the million-and-one
    changes that appear to have assailed the simple language I once
    enjoyed.

    The C99 standard has a list of 54 of what it calls "major changes",
    although IMO many or most of those are fairly minor. There are also
    other differences relative to C90, but most of them are simply
    clarifications or slight changes in wording.

    Those I largely recall from discussions at the time, but I dare to
    conclude that your lack of a reference to C11, C17, and C23 means that
    they had a lesser effect on the language than I'd feared.

    I chose C99 (and C99 only) because it is the first step after C90,
    and because I think C99 is more common than any other variant.
    There is also the question of how much material to present, and how
    much time would be needed to prepare a faithful summary. I didn't
    want to overwhelm people, and I didn't want to be overwhelmed
    myself, not because of how many changes are involved, but due to the
    effort needed to sift through and organize them. I didn't look at the
    C11 standard, nor any subsequent versions of the standard, before
    making the decision to do just C99.

    As it turns out, the C11 standard lists only 15 "major changes" (if
    my quick counting is correct), so your conclusion that later
    versions have had a lesser effect appears to be correct, at least as
    far as C11 goes. If I have time I may post again on this topic,
    doing for the C11 standard what I did for the C99 standard.

    I see now from casual research that C17 was predominantly a bug fix,
    but that C11 and C23 were somewhat busier.

    Looking quickly over the listed changes in C11, I count only six or
    seven that I would put on the same level as the ones I gave for C99.

    My understanding of what was done in the C17 standard agrees with
    your casual research, except I might have said "almost entirely"
    rather than "predominantly".

    I have not spent nearly as much time looking at C23, especially
    in comparison with C99 or C11. Based on what little I do know about
    C23, I consider that version of C to be one best avoided, for at
    least a decade and perhaps more. I may have more to say about that
    at some point in the future but do not have anything right now.

  • From Michael S@21:1/5 to bart on Thu Mar 20 20:46:42 2025
    On Thu, 20 Mar 2025 15:40:20 +0000
    bart <bc@freeuk.com> wrote:


    Pretty much every other open source project I look likes to define
    their own types!

    So I have to ask, why do think they do that?


    Most likely mindless parroting of 40 y.o. examples.

  • From Richard Heathfield@21:1/5 to Tim Rentsch on Thu Mar 20 18:53:05 2025
    On 20/03/2025 18:24, Tim Rentsch wrote:

    <snip>

    My understanding of what was done in the C17 standard agrees with
    your casual research, except I might have said "almost entirely"
    rather than "predominantly".

    I have not spent nearly as much time looking at the C23, especially
    in comparison with C99 or C11. Based on what little I do know about
    C23, I consider that version of C to be one best avoided, for at
    least a decade and perhaps more. I may have more to say about that
    at some point in the future but do not have anything right now.

    Thank you for your reply, which as ever was cogent and highly
    informative.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

  • From Tim Rentsch@21:1/5 to Michael S on Thu Mar 20 11:47:58 2025
    Michael S <already5chosen@yahoo.com> writes:

    On Thu, 20 Mar 2025 15:16:05 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:

    On Thu, 20 Mar 2025 16:59:14 +0200
    Michael S <already5chosen@yahoo.com> wibbled:

    On Thu, 20 Mar 2025 14:50:25 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:

    What makes you think they're macros?

    PRIu64 and PRId64 are macros. They are ugly.

    Never even heard of them. Looking them up I can't see much use for
    them frankly except if you're starting out on an unknown system and
    can't find out the info any other way which would be ... odd.

    Then how exactly do you printf a value of type int64_t in code that is
    expected to pass [gcc] compilation with no warnings on two platforms,
    one of which is 64-bit Unix/Linux and the other is just about anything
    else?

    If I needed to print such a value using printf(), I would most
    likely do something like this:

    typedef signed long long SLL;
    typedef unsigned long long ULL;

    ...

    printf( " the value of x64 is: %lld\n", (SLL){ x64 } );

    and avoid the use of the <inttypes.h> macros altogether (not
    to mention making the code more resilient against changes in
    the type of x64).

  • From bart@21:1/5 to Michael S on Thu Mar 20 19:15:55 2025
    On 20/03/2025 18:46, Michael S wrote:
    On Thu, 20 Mar 2025 15:40:20 +0000
    bart <bc@freeuk.com> wrote:


    Pretty much every other open source project I look likes to define
    their own types!

    So I have to ask, why do think they do that?


    Most likely mindless parroting of 40 y.o. examples.


    I don't think so. Where such sets of types exist, they tend to be
    defined on top of long long too, or even on top of stdint.h types.

    Look at this one for example:

    typedef uint8_t byte; // from arduino.h

    I can think of only one reason this exists, which is that 'byte' is a
    far nicer denotation.

    You might also consider why such examples existed even 40 years ago.

  • From Kaz Kylheku@21:1/5 to bart on Thu Mar 20 19:58:37 2025
    On 2025-03-20, bart <bc@freeuk.com> wrote:
    On 20/03/2025 18:46, Michael S wrote:
    On Thu, 20 Mar 2025 15:40:20 +0000
    bart <bc@freeuk.com> wrote:


    Pretty much every other open source project I look at likes to define
    its own types!

    So I have to ask, why do you think they do that?


    Most likely mindless parroting of 40 y.o. examples.


    I don't think so. Where such sets of types exist, they tend to be
    defined on top of long long too, or even on top of stdint.h types.

    Look at this one for example:

    typedef uint8_t byte; // from arduino.h

    I can think of only one reason this exists, which is that 'byte' is a far
    nicer denotation.

    That's actually a bad way to define byte. You want

    typedef unsigned char byte;

    That's because "unsigned char" is blessed by the language spec with
    certain properties that it behooves you to confer onto your "byte".

    You can alias an object with an arrays of unsigned char, and
    certain things hold, which do not hold if you use uint8_t.
    uint8_t is subject to strict aliasing rules.

    (Under the hood, an implementation may just have uint8_t as a typedef
    for unsigned char, but that's not required.)

    You might also consider why such examples existed even 40 years ago.

    Mostly it's poor programmers who can't wrap their head around using
    a type whose exact size is not known.

    If you're not programming hardware or conforming to external storage
    or packet formats, you don't need crap like int32 or uint64.

    It's an embarrassing blemish on Rust that they made their principal
    integer types like this; it makes all Rust code look idiotically
    hardware dependent. You can't code an abstract algorithm out of
    Sedgewick, Knuth or Cormen in Rust without peppering the code with
    distracting 32's and 64's.



    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

  • From Michael S@21:1/5 to Kaz Kylheku on Thu Mar 20 22:57:09 2025
    On Thu, 20 Mar 2025 19:58:37 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:


    It's an embarrassing blemish on Rust that they made their principal
    integer types like this; it makes all Rust code look idiotically
    hardware dependent. You can't code an abstract algorithm out of
    Sedgewick, Knuth or Cormen in Rust without peppering the code with distracting 32's and 64's.


    If size suffixes make you nervous, Rust has equivalents of size_t and ptrdiff_t. Named, respectively, usize and isize.
    But you probably know it.

  • From bart@21:1/5 to Keith Thompson on Thu Mar 20 20:59:10 2025
    On 20/03/2025 19:10, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    This is why it is popular to just do:

    typedef long long int i64;

    and to use %lld to print, and -LL on literals to force a 64-bit type.

    Is it? I don't recall seeing anyone other than you do that.

    stdint.h et al are just ungainly bolt-ons, not fully supported by the
    language.

    No, they're fully supported by the language. They've been in the ISO standard since 1999.

    I don't think so. They are add-ons that could have been created in
    user-code even prior to C99 (user-defined typedefs for 64 bits would
    'need long long').

    All that's happened is that 'stdint.h' has been blessed.

    You wouldn't, in user-code, be able to invent new literal suffixes like
    -LL but for those new types.

    And you wouldn't be able to get the *printf functions of your standard
    library to accept new format codes corresponding to those stdint types.

    And in fact you can do neither of those things now; ergo they're not
    fully supported in the same way that 'char short int long' are.

    The problem with 'long' manifests itself there too, since on Linux,
    'int64_t' appears to be commonly defined on top of 'long' for 32-bit
    systems, and 'long long' for 64-bit ones.

    If you're writing code for which that's a problem, you probably need to
    fix your code.

    So somebody eschewing those ugly macros and using "%ld" to print an
    'int64_t' type, will find it doesn't work when run on a 64-bit system,
    where "%lld" is needed. Same problem with using '1L' to define an
    int64_t literal.

    Somebody writing blatantly non-portable code will run into problems when
    they try to port it.

    I understand that you dislike <stdint.h>. That's both perfectly
    acceptable and not very interesting.

    I dislike it because this stuff it is not hard to do in a language with
    full support, but C always seems to make a dog's dinner of it:

                 C                           Other (example)

    Declare      uint64_t x;                 u64 x
    Literal      UINT64_C(123)               123
    Print        printf("%" PRId64, x);      print x
    (Alt?)       printf("%I64d", x);
    Max of type  UINT64_MAX                  u64.max
    Max of expr  (1)                         x.max
    Read         scanf("%" SCNd64, ...)      read x

    Requires: stdio.h    (for printf/scanf)
              stdint.h   (for uint64_t/UINT64_C)
              inttypes.h (for PRId64/SCNd64)

    ((1) I believe this cannot be emulated without 'typeof' and _Generic)

  • From Kaz Kylheku@21:1/5 to Michael S on Thu Mar 20 21:10:39 2025
    On 2025-03-20, Michael S <already5chosen@yahoo.com> wrote:
    On Thu, 20 Mar 2025 19:58:37 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:


    It's an embarrassing blemish on Rust that they made their principal
    integer types like this; it makes all Rust code look idiotically
    hardware dependent. You can't code an abstract algorithm out of
    Sedgewick, Knuth or Cormen in Rust without peppering the code with
    distracting 32's and 64's.


    If size suffixes make you nervous, Rust has equivalents of size_t and ptrdiff_t. Named, respectively, usize and isize.
    But you probably know it.

    I do, and I've seen code using isize and usize for quantities that are
    not sizes of any kind. While it's better than a hardware dependency
    like i64, it's terrible naming. Make an alias called "int"
    for "isize" and you're there.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

  • From bart@21:1/5 to Keith Thompson on Thu Mar 20 23:55:14 2025
    On 20/03/2025 23:18, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 20/03/2025 19:10, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    stdint.h et al are just ungainly bolt-ons, not fully supported by the
    language.
    No, they're fully supported by the language. They've been in the ISO
    standard since 1999.

    I don't think so. They are add-ons that could have been created in
    user-code even prior to C99 (user-defined typedefs for 64 bits would
    'need long long').

    Sure, they could; see Doug Gwyn's q8, for example.

    Isn't that what I said? However I've just looked at this 700-line header:

    https://www.lysator.liu.se/c/q8/stdint.h

    Sorry, but there's something wrong if you have to write all that to get
    a handful of fixed-width types. (And if this is for a multitude of
    targets, then there should be a dedicated header per target).

    GCC's stdint.h is 200 lines. Mine is 75 lines. If these types were part
    of the core language, it would be zero lines.


    All that's happened is that 'stdint.h' has been blessed.

    I.e., it was made part of the language, specifically the standard
    library that's part of the language standard. Which is what I said,
    but for some reason you disagreed.

    It's not tied into the core language which has char/short/int and
    controls integer width with 'longs', in denotations, suffixes and format specifier codes.

    Yes, the format specifiers are a bit awkward. Boo hoo.

    They're a lot awkward. They're awkward enough for the built-in types,
    and it gets worse for the bolted-on types.

  • From Tim Rentsch@21:1/5 to Richard Heathfield on Thu Mar 20 16:56:38 2025
    Richard Heathfield <rjh@cpax.org.uk> writes:

    On 20/03/2025 18:24, Tim Rentsch wrote:

    <snip>

    My understanding of what was done in the C17 standard agrees with
    your casual research, except I might have said "almost entirely"
    rather than "predominantly".

    I have not spent nearly as much time looking at the C23, especially
    in comparison with C99 or C11. Based on what little I do know about
    C23, I consider that version of C to be one best avoided, for at
    least a decade and perhaps more. I may have more to say about that
    at some point in the future but do not have anything right now.

    Thank you for your reply, which as ever was cogent and highly
    informative.

    Just be thankful my response wasn't typed during a full moon. ;)

    (More seriously: I appreciate the compliment.)

  • From Kaz Kylheku@21:1/5 to bart on Fri Mar 21 00:46:53 2025
    On 2025-03-20, bart <bc@freeuk.com> wrote:
    On 20/03/2025 23:18, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 20/03/2025 19:10, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    stdint.h et al are just ungainly bolt-ons, not fully supported by the
    language.
    No, they're fully supported by the language. They've been in the ISO
    standard since 1999.

    I don't think so. They are add-ons that could have been created in
    user-code even prior to C99 (user-defined typedefs for 64 bits would
    'need long long').

    Sure, they could; see Doug Gwyn's q8, for example.

    Isn't that what I said? However I've just looked at this 700-line header:

    https://www.lysator.liu.se/c/q8/stdint.h

    Sorry, but there's something wrong if you have to write all that to get
    a handful of fixed-width types.

    That's not all that is needed; the material in Q8defs.h is required
    to make the material in the above file work.

    It's not just the types, but all the standard-required identifiers.

    I feel that this Q8 could probably have been organized a little
    differently to make it more compact.

    The Q8defs.h header already has conditionals for platforms.
    Then Q8's inttypes.h still has to switch on some more abstract
    conditions in the preprocessor to select the types.

    This two layer business could just be one, more or less.
    Q8defs.h could just define, for instance, q8_uintptr_t,
    for each platform, and then inttypes just has to expose it under the
    standard name:

    typedef q8_uintptr_t uintptr_t;

    (And if this is for a multitude of
    targets, then there should be a dedicated header per target).

    What? That would pretty much defeat the whole point of this Q8 library.

    What would select which file to use where?

    You drop those files into your C90, and then you have a facsimile
    of C99 compatibility, on all the platforms supported by Q8.

    GCC's stdint.h is 200 lines. Mine is 75 lines. If these types were part

    Q8 tries to add the C99 stuff for something like 7 implementations
    or implementation families. GCC is one implementation, but with
    many targets.

    of the core language, it would be zero lines.

    This kind of argumentation is really not of good quality.

    A feature requires lines of code. If those lines are not in some
    satellite file like a C header, then they are elsewhere, like in the
    compiler source code.

    It's not "zero lines" because you hid it in the compiler.

    Shoving into a compiler that which can very well live outside of it
    isn't even categorically a good idea.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

  • From bart@21:1/5 to Kaz Kylheku on Fri Mar 21 01:23:12 2025
    On 21/03/2025 00:46, Kaz Kylheku wrote:
    On 2025-03-20, bart <bc@freeuk.com> wrote:
    On 20/03/2025 23:18, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 20/03/2025 19:10, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    stdint.h et al are just ungainly bolt-ons, not fully supported by the
    language.
    No, they're fully supported by the language. They've been in the ISO
    standard since 1999.

    I don't think so. They are add-ons that could have been created in
    user-code even prior to C99 (user-defined typedefs for 64 bits would
    'need long long').

    Sure, they could; see Doug Gwyn's q8, for example.

    Isn't that what I said? However I've just looked at this 700-line header:

    https://www.lysator.liu.se/c/q8/stdint.h

    Sorry, but there's something wrong if you have to write all that to get
    a handful of fixed-width types.

    That's not all that is needed; the material in Q8defs.h is required
    to make the material in the above file work.

    It's not just the types, but all the standard-required identifiers.

    I feel that this Q8 could probably have been organized a little
    differently to make it more compact.

    The Q8defs.h header already has conditionals for platforms.
    Then Q8's inttypes.h still has to switch on some more abstract
    conditions in the preprocessor to select the types.

    This two layer business could just be one, more or less.
    Q8defs.h could just define, for instance, q8_uintptr_t,
    for each platform, and then inttypes just has to expose it under the
    standard name:

    typedef q8_uintptr_t uintptr_t;

    (And if this is for a multitude of
    targets, then there should be a dedicated header per target).

    What? That would pretty much defeat the whole point of this Q8 library.

    What would select which file to use where?

    You drop those files into your C90, and then you have a facsimile
    of C99 compatibility, on all the platforms supported by Q8.

    GCC's stdint.h is 200 lines. Mine is 75 lines. If these types were part

    Q8 tries to add the C99 stuff for something like 7 implementations
    or implementation families. GCC is one implementation, but with
    many targets.

    I just tried to compile it; it needs an extra q8defs.h file (600 more
    lines), but it fails: "CPU not recognised" (it might have a point if
    this was from the 1990s!).

    gcc just gives lots of errors. To me it's all an ugly lot of code which
    reminds me of what system headers looked like: a messy, fragile
    patchwork of #ifdef blocks.

    of the core language, it would be zero lines.

    This kind of argumentation is really not of good quality.

    A feature requires lines of code. If those lines are not in some
    satellite file like a C header, then they are elsewhere, like in the
    compiler source code.

    Doing stuff the C way requires LOTs of lines of code. Look at all those
    MIN/MAX macros, the PRINT/SCAN macros; there's hundreds of them!

    In the compiler it can be compact (and faster, not having to parse such
    headers millions and millions of times), and if the language were done
    properly, much of it would not be needed. For example I showed in another post
    that all MIN/MAX macros can be replaced with T.min and T.max, or
    equivalent syntax.

    It's not "zero lines" because you hid it in the compiler.

    The compiler has to cope with fundamental types anyway; there's a limit
    to what headers can do: they have to build on something.

    This is about defining fixed-width types on top of the
    implementation-defined 'char short int long' types.

    I've done this and it's usually a dozen lines. If it takes 1300 lines,
    then you need to look again:

    typedef signed char i8;
    typedef short i16;
    typedef int i32;
    typedef long long i64;

    typedef unsigned char u8;
    typedef unsigned short u16;
    typedef unsigned int u32;
    typedef unsigned long long u64;

    typedef unsigned char byte;

    typedef float r32;
    typedef double r64;

    (This is used at the top of generated code, to keep the rest of it
    somewhat more compact and readable. Such a file actually uses zero
    headers including no standard headers.)

    I don't bother with LEAST/FAST types, whatever those mean. Where did
    that all come from anyway? I've never seen those used.

  • From Tim Rentsch@21:1/5 to Keith Thompson on Fri Mar 21 00:05:02 2025
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:

    Michael S <already5chosen@yahoo.com> writes:

    On Tue, 18 Mar 2025 21:38:55 -0400
    DFS <nospam@dfs.com> wrote:

    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/

    For instance: Weird Algorithm
    https://cses.fi/problemset/task/1068

    It is not an interesting programming exercise. But it looks to me
    as a challenging math exercise. I mean, how could we give a not
    too pessimistic estimate for upper bound of length of the sequence
    that starts at given n without running a full sequence? Or
    estimate for maximal value in the sequence? So far, I found no
    answers.

    You may console yourself with the knowledge that no one else
    has either, even some of the most brilliant mathematicians
    of the last hundred years. In fact it isn't even known that
    all starting points eventually terminate; as far as what has
    been proven goes, some starting points might just keep going
    up forever.

    I think someone has mentioned that this is called the Collatz
    Conjecture. According to Wikipedia, it's been shown to hold for
    all positive integers up to 2.95e20 (which is just under 2**68).

    Yes it is sometimes called the Collatz Conjecture, and sometimes
    called the 3n+1 problem, and also is known by several other names.
    I first heard the problem in the early 1980s (at that time it was
    known as the (3n+1)/2 problem), and worked on it on and off for ten
    years or so. I would be astonished if anyone disproved it.

  • From Richard Heathfield@21:1/5 to Tim Rentsch on Fri Mar 21 07:48:29 2025
    On 21/03/2025 07:05, Tim Rentsch wrote:
    I would be astonished if anyone disproved it.

    If anyone does, my money's on Noam Elkies.

    <https://blog.computationalcomplexity.org/2003/04/counterexample-to-fermats-last-theorem.html>


    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

  • From Muttley@DastardlyHQ.org@21:1/5 to All on Fri Mar 21 09:09:02 2025
    On Thu, 20 Mar 2025 16:49:10 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wibbled:
    On 2025-03-20, Muttley@DastardlyHQ.org <Muttley@DastardlyHQ.org> wrote:
    On Thu, 20 Mar 2025 16:14:54 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wibbled:
    On 2025-03-20, Muttley@DastardlyHQ.org <Muttley@DastardlyHQ.org> wrote:
    I guess some maths problems can't be proven directly, they have to be - for
    want of a better word - run. A bit like the halting problem in CS.

    The halting problem is a perfect example of a problem which *cannot* be
    proven by running anything.

    So if you run the program and it halts that doesn't prove that it will halt?
    Umm, ok.

    If you run a program and it has NOT halted so far, you don't know
    whether or not it halts. If it doesn't halt, you will wait forever. To

    True, but you said it cannot be proven. What you meant was it cannot *always* be proven to halt.

  • From bart@21:1/5 to Keith Thompson on Fri Mar 21 11:53:17 2025
    On 21/03/2025 01:47, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:

    You're complaining about how much work it is. All that work
    has been done for you by the implementers. Decades ago.

    We are talking about defining types like 'int32' on top of 'char short
    int long', yes? Then how difficult could it possibly be?

    I just
    did a quick test comparing complation times for an empty program
    with no #include directives and an empty program with #include
    directives for <stdint.h> and <inttypes.h>. The difference was
    about 3 milliseconds. I quite literally could not care care less.

    I'm sorry but that's a really poor attitude, with bad consequences.
    You're saying it doesn't matter how complex a header or set of headers
    is, even when out of proportion to the task.

    But this is why we end up with such complicated headers.

    Take the GTK2 library with hundreds of header files. The compiler has to process all that, for each module that includes GTK (specifically,
    350Kloc across 550 unique headers via ~1000 #includes).

    That's a lot of work, and leads to poor solutions like PCH, which are
    specific to certain compilers.

    But I happen to know that all those declarations could be flattened into
    a single 25Kloc header file (I don't mean just using -E; all #defines
    are retained for example).

    While the 75 headers and 50K lines of SDL2 reduce to one 3Kloc header.
    Much easier to manage one interface file! And quicker to compile.

    So, why isn't that routinely done by the purveyors of libraries? (Or is everyone who downloads the GTK /headers/ for example keen to do their
    own development!)

    The user of the ONE library doesn't care about the internal structure of
    the header files.

    I think your response clarifies matters. Nobody cares, even as compilers
    grind to a halt under all preprocessing.

    Doing stuff the C way requires LOTs of lines of code. Look at all
    those MIN/MAX macros, the PRINT/SCAN macros; there's hundreds of them!

    So what? I don't have to read those lines of code. All I have to read
    is the standard (or some other document) that tells me how to use it.

    It is crass. But it also affects the programmer because they have to
    explicitly include that specific header and then remember the dedicated
    MIN/MAX macros for the specific type. And they have to /know/ the type.

    If the type is opaque, or is an alias, then they won't know the name of
    the macro. If the typedef is 'T' for example, then what's the macro for
    its MAX value? Which header will they need?

    In a certain language I use, it will be T.max (and there are no
    headers). I'm not suggesting that C turns into that language, only that somebody acknowledges the downsides of using C's approach, since again
    nobody cares.

  • From Kaz Kylheku@21:1/5 to Muttley@DastardlyHQ.org on Fri Mar 21 17:12:23 2025
    On 2025-03-21, Muttley@DastardlyHQ.org <Muttley@DastardlyHQ.org> wrote:
    On Thu, 20 Mar 2025 16:49:10 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wibbled:
    On 2025-03-20, Muttley@DastardlyHQ.org <Muttley@DastardlyHQ.org> wrote:
    On Thu, 20 Mar 2025 16:14:54 -0000 (UTC)
    Kaz Kylheku <643-408-1753@kylheku.com> wibbled:
    On 2025-03-20, Muttley@DastardlyHQ.org <Muttley@DastardlyHQ.org> wrote:
    I guess some maths problems can't be proven directly, they have to be - for
    want of a better word - run. A bit like the halting problem in CS.

    The halting problem is a perfect example of a problem which *cannot* be
    proven by running anything.

    So if you run the program and it halts that doesn't prove that it will halt?
    Umm, ok.

    If you run a program and it has NOT halted so far, you don't know
    whether or not it halts. If it doesn't halt, you will wait forever. To

    True, but you said it cannot be proven. What you meant was it cannot *always* be proven to halt.

    Running a program can only prove those programs which terminate
    within a time period you're willing to wait. For all others, this
    approach produces no answer: it remains undecided.

    You don't seem to know what the Halting Problem in CS is; it is
    the informal name for the question whether there exists an
    algorithm which can prove that any program operating on any
    input (i.e. any Turing computation) will halt or not.

    The question "will this small C program halt, which obviously contains
    no loops or recursion, so we know that it does" is not an instance of
    the Halting Problem. (It is one input case in the Halting Problem,
    among an infinity of cases.)

    Running programs is not a decision procedure that solves the Halting
    Problem.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

  • From Waldek Hebisch@21:1/5 to bart on Fri Mar 21 17:51:52 2025
    bart <bc@freeuk.com> wrote:
    On 20/03/2025 13:36, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 20/03/2025 12:09, Tim Rentsch wrote:
    Michael S <already5chosen@yahoo.com> writes:

    I suspected that, but was not sure, so suggested to DFS a type that I am
    sure about.
    The width of char and [un]signed char must be at least 8 bits.
    The width of [un]signed short must be at least 16 bits.
    The width of [un]signed int must be at least 16 bits.
    The width of [un]signed long must be at least 32 bits.
    The width of [un]signed long long must be at least 64 bits.

    That should be easy enough to remember now.

    That table suggests that any program mixing 'short' and 'int' is
    suspect. If 'int' doesn't need to store values beyond 16 bits, then why
    not use 'short'?

    'long' is another troublesome one. If the need is for 32-bit values,
    then it's surprisingly rare in source code.

    Long is useless, because Microsoft made the mistake of defining
    'long' as 32-bits on 64-bit architectures, while unix and linux
    define it as 64-bits.

    Unix and Linux define it as 32 bits on 32-bit architectures and 64 bits
    on 64-bit ones.

    So long can't be used in programs intended to be portable to
    other operating systems.

    As defined by Unix/Linux, long is not portable between different
    Unix/Linux OSes if they run on a different architecture.

    Portably, it gives a word-sized integer type on both 32- and 64-bit
    machines.

    As defined by Microsoft, long is portable between Windows OSes even on different architectures.

    It gives 'long' a different meaning than it had previously. And, for
    that matter, a rather useless meaning, as 'int' already gives 32-bit
    integers on bigger machines.

    'long long' is defined as a 64-bit
    type in both Windows and Linux.

    Using the defined width types is far better (e.g. uint64_t);
    even if the standard allows the type to not exist on a particular
    implementation. No useful implementation would fail to define
    uint64_t in these modern times.

    <snip>
    The problem with 'long' manifests itself there too, since on Linux,
    'int64_t' appears to be commonly defined on top of 'long' for 32-bit
    systems, and 'long long' for 64-bit ones.

    You mixed up this: 'int64_t' is defined as 'long long' for 32-bit
    systems and as 'long' for 64-bit ones. Doing it as you wrote
    would give you a variable-width type. Of course, if you need a
    word-sized integer on Windows you may define it as 'long' for 32-bit
    Windows and as 'long long' for 64-bit ones.

    --
    Waldek Hebisch

  • From Kaz Kylheku@21:1/5 to Waldek Hebisch on Fri Mar 21 18:51:27 2025
    On 2025-03-21, Waldek Hebisch <antispam@fricas.org> wrote:
    bart <bc@freeuk.com> wrote:
    On 20/03/2025 13:36, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 20/03/2025 12:09, Tim Rentsch wrote:
    Michael S <already5chosen@yahoo.com> writes:

    I suspected that, but was not sure, so suggested to DFS a type that I am
    sure about.
    The width of char and [un]signed char must be at least 8 bits.
    The width of [un]signed short must be at least 16 bits.
    The width of [un]signed int must be at least 16 bits.
    The width of [un]signed long must be at least 32 bits.
    The width of [un]signed long long must be at least 64 bits.

    That should be easy enough to remember now.

    That table suggests that any program mixing 'short' and 'int' is
    suspect. If 'int' doesn't need to store values beyond 16 bits, then why >>>> not use 'short'?

    'long' is another troublesome one. If the need is for 32-bit values,
    then it's surprisingly rare in source code.

    Long is useless, because Microsoft made the mistake of defining
    'long' as 32-bits on 64-bit architectures, while unix and linux
    define it as 64-bits.

    Unix and Linux define it as 32 bits on 32-bit architectures and 64 bits
    on 64-bit ones.

    So long can't be used in programs intended to be portable to
    other operating systems.

    As defined by Unix/Linux, long is not portable between different
    Unix/Linux OSes if they run on a different architecture.

    Portably, it gives a word-sized integer type on both 32- and 64-bit
    machines.

    The bitness of modern mainstream machines is their address size.

    C99 gave us address-sized integers: intptr_t and uintptr_t.
    If you want a 64 bit type on a 64 bit system and 32 bit type
    on a 32 bit system, use those.

    (The problem is that idea observed in Unix-like environments that long
    is expected to be address-sized precedes C99.)

    As defined by Microsoft, long is portable between Windows OSes even on
    different architectures.

    It gives 'long' a different meaning than it had previously. And, for
    that matter, a rather useless meaning, as 'int' already gives 32-bit
    integers on bigger machines.

    In Microsoft land, there is a LONG type, which is involved in
    the Win32 ABIs. That was their mistake.

    In plenty of interfaces, Windows uses the types WORD and DWORD, which
    are 16 and 32 bits wide unsigned types. (As well as QWORD,
    a 64 bitter).

    The problem is, when someone needed the signed versions of
    WORD and DWORD, they found them to be missing, and stupidly came up with
    the names INT and LONG (and derived typedefs like LPARAM).
    Nobody caught this code smell and so it got woven into the Windows API. Probably long before 32 bit Windows, I'm guessing.

    Thus, LONG has to continue to be a 32 bit type, since it is
    used as a "signed DWORD".

    But it's inconceivable for LONG to be a typedef for anything other
    than long. Too much code depends on the wrong idea that LONG and long
    are interchangeable.

    Thus long has to be stuck on 32 bits.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

  • From bart@21:1/5 to Waldek Hebisch on Sat Mar 22 00:01:56 2025
    On 21/03/2025 17:51, Waldek Hebisch wrote:
    bart <bc@freeuk.com> wrote:

    As defined by Unix/Linux, long is not portable between different
    Unix/Linux OSes if they run on a different architecture.

    Portably, between 32- and 64-bit machines, it gives a word-sized
    integer type.

    Which was not its intention. (Probably intptr_t or ssize_t is better for
    that purpose, and will be portable between Windows and Linux.)


    As defined by Microsoft, long is portable between Windows OSes even on
    different architectures.

    It gives 'long' a different meaning than it had previously.


    I explained the differences without necessarily saying one is better
    than the other. Sometimes one is more useful, sometimes the other.


    And, for
    that matter, a rather useless meaning, as 'int' already gives 32-bit
    integers on bigger machines.

    Well, 'long' is also useless on 32-bit Linux machines as it is the same
    size as 'int'. Why didn't it also increase when 'int' migrated from i16
    to i32?

    One 'con' for Linux' approach is when someone assumes 'long' is i32;
    when they run code on 64 bits, it will either be wasteful, or it could
    go badly wrong.

    While those running on Linux64 and expect 'long' to be double the width
    of 'int', may also experience failures on Linux32.

    On Windows, you just learn to avoid 'long' completely. After all you
    don't need 5 basic types for four integer sizes!



    'long long' is defined as a 64-bit
    type in both Windows and Linux.

    Using the defined width types is far better (e.g. uint64_t);
    even if the standard allows the type to not exist on a particular
    implementation. No useful implementation would fail to define
    uint64_t in these modern times.

    <snip>
    The problem with 'long' manifests itself there too, since on Linux,
    'int64_t' appears to be commonly defined on top of 'long' for 32-bit
    systems, and 'long long' for 64-bit ones.

    You mixed up this: 'int64_t' is defined as 'long long' for 32-bit
    systems and as 'long' for 64-bit ones.

    Sorry, yes. But it shows how confusing it all is:

    LL/ll is used for 64 bits on 32-bit systems
    L/l is used for 32 bits on 64-bit systems!

    (I'm so glad I switched to all-64-bits in my own stuff, early last decade.

    However lots of software has taken a long time to catch up. I acquired
    an RPi 4 board 5 years ago with a view to doing 64-bit ARM development,
    but most OSes were still 32 bits, and 64-bit ones immature. (You need a
    64-bit OS to easily develop and run 64-bit programs.)

    Even now, 32-bit OSes are supplied by default. I finally got a solid
    64-bit OS for it last week. I just wondered what the point is of having
    64-bit hardware if people just run 32-bit stuff on it.)


    Doing it as you wrote
    would give you variable length type. Of course, if you need
    word-sized integer in Windows you may define it as 'long' for 32-bit
    Windows and as 'long long' for 64-bit ones.


    (I'll tell you a secret: my C compiler automatically reads a special
    header that includes these definitions:

    typedef signed char i8;
    typedef short i16;
    typedef int i32;
    typedef long long int i64;

    typedef unsigned char u8;
    typedef unsigned char byte;
    typedef unsigned short u16;
    typedef unsigned int u32;
    typedef unsigned long long int u64;

    typedef float r32;
    typedef double r64;

    It makes the writing of hundreds of small test programs so much easier.
    I wonder how many do the same, although they'd have to use a discrete
    header.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Keith Thompson on Sat Mar 22 00:23:01 2025
    On 21/03/2025 19:04, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 21/03/2025 01:47, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    You're complaining about how much work it is. All that work
    has been done for you by the implementers. Decades ago.

    We are talking about defining types like 'int32' on top of 'char short
    int long', yes? Then how difficult could it possibly be?

    If you want <stdint.h> and <inttypes.h> headers that work correctly with
    all relevant compilers, it's not particularly easy. I'll note that the
    MUSL implementation of <stdint.h> is 117 lines, compared to 308 for GNU
    libc.

    I just
    did a quick test comparing complation times for an empty program
    with no #include directives and an empty program with #include
    directives for <stdint.h> and <inttypes.h>. The difference was
    about 3 milliseconds. I quite literally could not care less.

    I'm sorry but that's a really poor attitude, with bad
    consequences. You're saying it doesn't matter how complex a header or
    set of headers is, even when out of proportion to the task.

    But this is why we end up with such complicated headers.

    Complicated headers that work.

    [...]

    I think your response clarifies matters. Nobody cares, even as
    compilers grind to a halt under all preprocessing.

    If compilers ground to a halt, I would certainly care. They don't.

    50 modules each including GTK.h say, which was 0.33Mloc across 500
    headers (so reading 16Mloc and 25,000 headers in total when all are
    compiled) would not impact your builds at all? OK.

    But I expect if a major library supplier came out with the idea of a streamlined, compact one-file header that took 90% less compile-time,
    that would be hailed as a brilliant advance!

    Doing stuff the C way requires LOTs of lines of code. Look at all
    those MIN/MAX macros, the PRINT/SCAN macros; there's hundreds of them!
    So what? I don't have to read those lines of code. All I have to
    read
    is the standard (or some other document) that tells me how to use it.

    It is crass. But it also affects the programmer because they have to
    explicitly include that specific header and then remember the
    dedicated MIN/MAX macros for the specific type. And they have to
    /know/ the type.

    Yes, of course.

    If the type is opaque, or is an alias, then they won't know the name
    of the macro. If the typedef is 'T' for example, then what's the macro
    for its MAX value? Which header will they need?

    There may not be one. For example, there's no format specifier for
    time_t. If I need to print a time_t value, I'll probably cast to
    intmax_t and use "%jd". If I'm being obsessive about portability, I'll
    test whether time_t is signed, unsigned, or floating-point (all of which
    are possible in principle), and use "%jd", "%ju", or "%Lf".

    I wonder if you realise how utterly ridiculous all those incantations sound?

    This is a C program using one of the extensions from my old compiler:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
    time_t t = clock();
    printf("%v\n", t); // (used to be '?'; changed to 'v')
    }

    The compiler replaces the 'v' format with a conventional format code
    according to the type of the expression. For my 'time_t', it happens to
    be 'lld'.

    In a certain language I use, it will be T.max (and there are no
    headers). I'm not suggesting that C turns into that language, only
    that somebody acknowledges the downsides of using C's approach, since
    again nobody cares.

    I acknowledge the downsides of C's approach. See, all you had to
    do was ask.

    As Dennis Ritchie said, "C is quirky, flawed, and an enormous success".

    So it's quirky and flawed.

    It would be very nice if C had some kind of more generic I/O that
    doesn't require remembering arbitrary format strings and qualifiers
    for each type, and that doesn't provide format strings for a lot of
    types in the standard library, and certainly not for types defined
    in user code. And I'd *love* it if a future C standard removed
    the undefined behavior for numeric input using *scanf(). Other C
    programmers will have different priorities than yours or mine.

    If I want to print a time_t value in C++, I just write
    `std::cout << t` and the compiler figures out which overloaded
    function to call.

    That's amazing.

    Scripting languages like Perl and Python have
    print functions that take arguments of arbitrary types, and ways
    to interpolate numeric values into string literals. And so on.

    I'm not sure what would be the best way to add something like this
    in C202y or C203z. There are arguments against adding too many new
    features to C when other languages are available; if C became C++
    Lite, a lot of programmers would just use C++. There are a number
    of different approaches that could be taken, and choosing among
    them is not trivial.

    See above.

    When I talk about how to work with C as it's currently defined,
    you tend to see advocacy where it doesn't exist. When you complain
    about things in C that I don't think are real problems, you tend
    to assume that I'm saying C is perfect, something I've never said.
    When you ask why something in C is defined the way it is, you don't acknowledge when people take the time to explain it.

    This forum tends to not be critical of the language. But even if aspects
    of it can't be changed, there's no reason not to call them out.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to bart on Sat Mar 22 01:41:26 2025
    bart <bc@freeuk.com> wrote:
    On 21/03/2025 17:51, Waldek Hebisch wrote:
    bart <bc@freeuk.com> wrote:

    As defined by Unix/Linux, long is not portable between different
    Unix/Linux OSes if they run on a different architecture.

    Portably, between 32- and 64-bit machines, it gives a word-sized
    integer type.

    Which was not its intention.

    This was the intention when 64-bit machines appeared.

    (Probably intptr_t or ssize_t is better for
    that purpose, and will be portable between Windows and Linux.)

    Those did not exist in 1991 and would be needed only for small
    machines. Except for Microsoft which decided to push its own,
    different way.

    As defined by Microsoft, long is portable between Windows OSes even on
    different architectures.

    It gives 'long' a different meaning than it had previously.


    I explained the differences without necessarily saying one is better
    than the other. Sometimes one is more useful, sometimes the other.

    AFAICS your main trouble with 'long' is inconsistency. And the
    inconsistency is due to Microsoft, as previously 'long' had a
    consistent definition on reasonable machines: an integer that is the
    larger of 32 bits and the word size, which simplifies to the word
    size on "bigger" machines.

    And, for
    that matter, a rather useless meaning, as 'int' already gives 32-bit
    integers on bigger machines.

    Well, 'long' is also useless on 32-bit Linux machines as it is the same
    size as 'int'.

    Not always. Motorola 68000 used 16-bit int. That was because
    original 68000 had 16-bit bus which made 16-bit integers faster.

    One 'con' for Linux' approach is when someone assumes 'long' is i32;
    when they run code on 64 bits, it will either be wasteful, or it could
    go badly wrong.

    One 'con' of any assumption is that somebody can make a different
    assumption.

    (I'm so glad I switched to all-64-bits in my own stuff, early last decade.

    However lots of software has taken a long time to catch up. I acquired
    an RPi 4 board 5 years ago with a view to doing 64-bit ARM development,
    but most OSes were still 32 bits, and 64-bit ones immature. (You need a 64-bit OS to easily develop and run 64-bit programs.)

    I am not sure what was available for RPi 4. But in 2019 I got a 64-bit
    Chinese ARM board and it was well supported by 64-bit Linux (Armbian). Apparently the Raspberry Pi Foundation wanted to have the same OS on all
    their boards (not only newest ones) so they delivered 32-bit OS for
    some time after 64-bit board appeared. IIUC normal Linux distributions
    rather quickly got 64-bit ARM versions.

    Even now, 32-bit OSes are supplied by default. I finally got a solid
    64-bit OS for it last week. I just wondered what the point is of having 64-bit hardware if people just run 32-bit stuff on it.)

    See above.

    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to bart on Sat Mar 22 02:37:29 2025
    bart <bc@freeuk.com> wrote:

    Sorry, but there's something wrong if you have to write all that to get
    a handful of fixed-width types. (And if this is for a multitude of
    targets, then there should be a dedicated header per target).

    GCC's stdint.h is 200 lines. Mine is 75 lines. If these types were part
    of the core language, it would be zero lines.

    You need a few (say 3) lines of compiler code to define a type.
    AFAICS there are 24 types in intXX... and uintXX... family.
    So about 72 lines in the compiler. For compatibility with older
    code you probably should define types under internal names
    and have 24 lines in stdint.h (+3 lines of include guard).

    stdint.h defines quite a bit more than just types, so actual
    saving from having them built into compiler would be small,
    in particular since preprocessor is fast. On my machine
    I see 91 code lines after preprocessing of stdint.h. And
    actually, several of definitions like '__pid_t' go beyond C
    and are needed by other headers. So, really is not a big
    deal.

    BTW: I just tried

    /tmp$ time gcc -E foo.c | wc
    1000006 2000022 12889013

    real 0m0.359s
    user 0m0.351s
    sys 0m0.085s

    So the gcc preprocessor is able to handle almost 3 million lines
    per second. The lines were short, but gcc goes through
    piles of header files reasonably fast, probably much faster
    than you think.

    I also tried to compile a file containing 1000000 declarations
    like:

    extern int a0(void);
    ....
    extern int a999999(void);

    Compilation of such a file takes 1.737s, so about 575000 lines
    per second. So a lot of function declarations in header
    files should not slow gcc too much. Typedefs seem to
    trigger quadratic behaviour, so a lot of typedefs is likely
    to slow down gcc quite a lot. But a limited bunch of
    typedefs should not be too bad. And it probably does not
    make much difference if the typedefs are in source file
    or built into compiler.

    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Kaz Kylheku on Sat Mar 22 02:16:25 2025
    On 21/03/2025 18:51, Kaz Kylheku wrote:
    On 2025-03-21, Waldek Hebisch <antispam@fricas.org> wrote:
    bart <bc@freeuk.com> wrote:
    On 20/03/2025 13:36, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 20/03/2025 12:09, Tim Rentsch wrote:
    Michael S <already5chosen@yahoo.com> writes:

    I suspected that, but was not sure, so suggested to DFS a type that I am
    sure about.

    The width of char and [un]signed char must be at least 8 bits.
    The width of [un]signed short must be at least 16 bits.
    The width of [un]signed int must be at least 16 bits.
    The width of [un]signed long must be at least 32 bits.
    The width of [un]signed long long must be at least 64 bits.

    That should be easy enough to remember now.

    That table suggests that any program mixing 'short' and 'int' is
    suspect. If 'int' doesn't need to store values beyond 16 bits, then why
    not use 'short'?

    'long' is another troublesome one. If the need is for 32-bit values,
    then it's surprisingly rare in source code.

    Long is useless, because Microsoft made the mistake of defining
    'long' as 32-bits on 64-bit architectures, while unix and linux
    define it as 64-bits.

    Unix and Linux define it as 32 bits on 32-bit architectures and 64 bits
    on 64-bit ones.

    So long can't be used in programs intended to be portable to
    other operating systems.

    As defined by Unix/Linux, long is not portable between different
    Unix/Linux OSes if they run on a different architecture.

    Portably, between 32- and 64-bit machines, it gives a word-sized
    integer type.

    The bitness of modern mainstream machines is their address size.

    C99 gave us address-sized integers: intptr_t and uintptr_t.
    If you want a 64 bit type on a 64 bit system and 32 bit type
    on a 32 bit system, use those.

    (The problem is that the idea, observed in Unix-like environments, that
    long is expected to be address-sized predates C99.)

    As defined by Microsoft, long is portable between Windows OSes even on
    different architectures.

    It gives 'long' a different meaning than it had previously. And, for
    that matter, a rather useless meaning, as 'int' already gives 32-bit
    integers on bigger machines.

    In Microsoft land, there is a LONG type, which is involved in
    the Win32 ABIs. That was their mistake.

    In plenty of interfaces, Windows uses the types WORD and DWORD, which
    are 16 and 32 bits wide unsigned types. (As well as QWORD,
    a 64 bitter).


    Those types are an invention of Microsoft and only appear in WinAPIs.
    There seem to be hundreds of them, but they are well defined. Mostly
    they are consistent between 32- and 64-bit systems too.

    Since I've mostly used WinAPI via an FFI, I'm used to creating my own
    bindings and using my own aliases for these types. (Mostly it comes down
    to one of i32 i64 u32 u64 plus a pointer type.)


    The problem is, when someone needed the signed versions of
    WORD and DWORD, they found them to be missing, and stupidly came up with
    the names INT and LONG (and derived typedefs like LPARAM).
    Nobody caught this code smell and so it got woven into the Windows API. Probably long before 32 bit Windows, I'm guessing.

    Thus, LONG has to continue to be a 32 bit type, since it is
    used as a "signed DWORD".

    But it's inconceivable for LONG to be a typedef for anything other
    than long. Too much code depends on the wrong idea that LONG and long
    are interchangeable.

    Thus long has to be stuck on 32 bits.

    There really isn't much in it. Here's the evolution as I see it (I stand
    to be corrected if necessary):

                 C-Windows    C-Linux     My stuff
    Machine      int  long    int  long   int  'dint'

    16 bits       16   32      16   32     16    32
    32 bits       32   32      32   32     32    64
    64 bits       32   32      32   64     64  [128]

    The most logical and consistent progression seems to be with 'my
    stuff'! Except I no longer support 128-bit types after doing so briefly.

    The point is that there was usually a size exactly double the width of
    'int', but it became less necessary when 'int' reached 64 bits.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Sat Mar 22 04:15:05 2025
    On 2025-03-22, bart <bc@freeuk.com> wrote:
    Since I've mostly used WinAPI via an FFI, I'm used to creating my own bindings and using my own aliases for these types. (Mostly it comes down
    to one of i32 i64 u32 u64 plus a pointer type.)

    When I FFI to Windows, I tend to be a stickler for using names that
    are exactly the same, wherever possible. I have DWORD, HWND,
    and whatnot.

    You can see it in this code; it is self-contained. It defines
    all the symbols it needs, and requires just the DLLs:

    https://rosettacode.org/wiki/Window_creation#Win32/Win64

    Thus long has to be stuck on 32 bits.

    There really isn't much in it. Here's the evolution as I see it (I stand
    to be corrected if necessary):

    There isn't much in it, but it doesn't take much in order to
    pin things down and prevent change.

    The point is that there was usually a size exactly double the width of
    'int', but it became less necessary when 'int' reached 64 bits.

    Yes, because other than for masks of 128 bits, there isn't a whole
    lot of stuff you can *count* for which you need such large integers.

    Money?

    In an accounting system, if you use signed 64 bits for pennies, you can
    go to 9.2 x 10^16 dollars.

    Large integers are needed for crypto and such, but then 128 isn't enough anyway.

    Obviously, IPv6 addresses are 128 bits. In protocol stacks, switching and
    routing, it could be useful to have 128 bit operations on them for
    masking and comparing and whatnot.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Sat Mar 22 10:19:16 2025
    On Sat, 22 Mar 2025 15:05:43 +1100
    Alexis <flexibeast@gmail.com> wibbled:
    Muttley@DastardlyHQ.org writes:

    But 99.99% of the time doesn't.

    Famously, mathematician G.H. Hardy was a fan of number theory _because_
    it seemed to have no 'real world' applications (i.e. applications
    outside of mathematics itself). Eventually, of course, it became the theoretical basis of public-key cryptography.

    Maths is the foundation of most technology, that doesn't mean all its
    problems are useful. I could ponder balancing wheels on top of each other
    but that wouldn't lead to the invention of the car.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Keith Thompson on Sat Mar 22 13:06:30 2025
    On 22/03/2025 03:50, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 21/03/2025 19:04, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 21/03/2025 01:47, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    You're complaining about how much work it is. All that work
    has been done for you by the implementers. Decades ago.

    We are talking about defining types like 'int32' on top of 'char short int long', yes? Then how difficult could it possibly be?
    If you want <stdint.h> and <inttypes.h> headers that work correctly
    with all relevant compilers, it's not particularly easy. I'll note
    that the MUSL implementation of <stdint.h> is 117 lines, compared to
    308 for GNU libc.

    I just did a quick test comparing complation times for an empty
    program with no #include directives and an empty program with
    #include directives for <stdint.h> and <inttypes.h>. The
    difference was about 3 milliseconds. I quite literally could not
    care care less.

    I'm sorry but that's a really poor attitude, with bad
    consequences. You're saying it doesn't matter how complex a header or
    set of headers is, even when out of proportion to the task.

    But this is why we end up with such complicated headers.
    Complicated headers that work.
    [...]

    I think your response clarifies matters. Nobody cares, even as
    compilers grind to a halt under all preprocessing.
    If compilers ground to a halt, I would certainly care. They don't.

    50 modules each including GTK.h say, which was 0.33Mloc across 500
    headers (so reading 16Mloc and 25,000 headers in total when all are
    compiled) would not impact your builds at all? OK.

    First you talked about compilers grinding to a halt, then you talked
    about headers not impacting builds at all. Those goalposts of yours
    move *fast*.

    You missed the "?".

    For the record, as you can see above, I did not say that builds would
    not be impacted. Do not put words into my mouth again.

    Let me ask it again: so ploughing through a third of a million lines of
    code across hundreds of #includes, even at the faster throughput
    compared with compiling code, for a module of a few hundred lines, will
    have little impact?

    How about a project with 50 or 100 modules, each using that big header,
    that needs to be built from scratch?

    (I would download the GTK2 headers, if still available, to test it. But
    it would be a huge undertaking. Just the set of compiler options needed
    to tell it all the search paths to look for the right headers amongst
    the 700 files and dozens of directories would be challenging.

    Usually this is done with additional tools (pkg-config or some such thing) because it is so complicated. With my little toy compiler however it was
    trial and error: look manually for each missing header, and add that to
    the growing list of search paths.

    So let me ask /this/ again: if such a library consisted of ONE compact
    header file, would it make the process simpler? Would it make
    compilation of multiple modules faster?)


    printf("%v\n", t); # (used to be '?'; changed to 'v')
    }

    The compiler replaces the 'v' format with a conventional format code
    according to the type of the expression. For my 'time_t', it happens
    to be 'lld'.

    That's nice. Seriously, it's nice. If it were added to a future
    edition of the language, I'd likely use it (once I could count on it
    being supported, which would take a while).

    The Go language has something like that.

    You can add extensions like that to your own compiler easily
    enough. Adding them to the C standard (which requires getting all implementers to support them) is a lot harder. Does it work for
    both output (printf) and input (scanf)? What if the format string
    isn't a string literal; does the compiler generate code to adjust
    it, allocating space for the translated string and deallocating it
    after the call? Does it work with printf("%n", &var)? What about qualifiers, specifying width, base, and so forth.

    My feature was a proof of concept. The 60 lines of code used to test it
    worked specifically for 'printf', and didn't attempt to parse additional attributes like field widths (I wasn't implementing half of printf).

    It only works when the format string is a literal (but so does gcc's
    checking that format codes match parameter types).

    It only looks for "%v" and "%=v"; the latter will add a label:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
    uint64_t a = 10;
    double b = 23.1;
    void* c = main;

    printf("%=v %=v %=v\n", a, b, c);
    printf("%=v %=v %=v\n", c, a, b);
    }

    The output is:

    A=10 B=23.100000 C=0000000000401000
    C=0000000000401000 A=10 B=23.100000

    The motivation was partly to simplify writing lots of debug prints,
    which rarely have field widths.


    an integer of arbitrary type in hexadecimal in a right-justified
    8-digit zero-padded field? The feature implies an ability for
    generic code that works with different types; can programmers use
    that feature for things other than format strings? How much more
    complicated would the C language (as distinct from the library)
    have to be to support this?

    As I said the test was 60 lines of ad hoc code. It's not going to be
    wildly complicated. As a language feature, somebody would need to write
    a specification.

    However, if a new format code is introduced, it should ideally be known
    to the printf library functions, even it can't do much with it, for
    those cases where it is used within a runtime string.


    If you have answers to all those questions, and to all the other
    questions that haven't occurred to me, I wouldn't mind seeing
    something like that in a future version of C. I haven't looked
    closely at Go, but it's a compiled language with a feature similar
    to what you describe; it could probably be mined for ideas.

    Or maybe we should be thinking in terms of something other than format strings. The idea that "%v", which is a 3-byte chunk of data, has compile-time implications in certain contexts is a bit unnerving.

    Ada chose the short name "Put" for its output routines.
    It's overloaded, so you can write `Put(this); Put(that);
    Put(the_other);` Maybe that a little too verbose, but how about
    a new built-in operator that takes an argument of any of various
    types and yields something that can be printed? '$' is available.
    I haven't thought this through.

    The problem with Print is that it is a language feature that can take an arbitrary number of arguments, all of different types. Some may need
    formatting info.

    This is challenging for a language, if it wants to do it via functions
    (which usually have a fixed number of arguments, of a specific type).
    Some advanced features may be needed.

    C probably introduced variadic functions just for this purpose (which
    remains challenging to implement!).

    (I deal with it in my languages by having Print as a statement, with
    special syntax. But internally the result is a series of function calls (print_i64 etc); ugly, sprawling code. Users will not see it, but I do.)

    That's amazing.

    Not particularly.

    Sorry, I was being sarcastic. There were languages in the 60s where you
    could just say 'print x' and it knew how to stringify x whatever its
    type was. My first language using an 8KB compiler had such a Print
    (though not many types).

    C++ has programmer-defined operator (and function)
    overloading as a language feature. (There are IMHO some serious
    flaws in C++'s use of overloaded "<<" for output, but I won't go
    into that here.)

    You get an interesting set of error messages if you write >> instead of <<.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Waldek Hebisch on Sat Mar 22 12:20:50 2025
    On 22/03/2025 02:37, Waldek Hebisch wrote:
    bart <bc@freeuk.com> wrote:

    Sorry, but there's something wrong if you have to write all that to get
    a handful of fixed-width types. (And if this is for a multitude of
    targets, then there should be a dedicated header per target).

    GCC's stdint.h is 200 lines. Mine is 75 lines. If these types were part
    of the core language, it would be zero lines.

    You need a few (say 3) lines of compiler code to define a type.
    AFAICS there are 24 types in intXX... and uintXX... family.

    There are about 8, unless you include the FAST/LEAST variants, which
    are of interest to a minority of the minority who are even aware of
    them.
    a minority of the minority who are even aware of them.

    So about 72 lines in the compiler. For compatibility with older
    code you probably should define types under internal names
    and have 24 lines in stdint.h (+3 lines of include guard).

    It's a lot more than 72 lines in a compiler to support numeric types.

    But this is about exposing fixed-size type aliases to existing types,
    which can be done fairly tidily in a compiler, but not as tidily as when
    all the built-in types can be expressed as one token; some need multiple tokens.

    Also C requires those aliases to be hidden unless a particular header is
    used.

    Still, I was remarking on those Q8 headers requiring 1300 lines to add
    these aliases.

    stdint.h defines quite a bit more than just types, so actual
    saving from having them built into compiler would be small,
    in particular since preprocessor is fast. On my machine
    I see 91 code lines after preprocessing of stdint.h. And
    actually, several of definitions like '__pid_t' go beyond C
    and are needed by other headers. So, really is not a big
    deal.

    BTW: I just tried

    /tmp$ time gcc -E foo.c | wc
    1000006 2000022 12889013

    real 0m0.359s
    user 0m0.351s
    sys 0m0.085s

    So the gcc preprocessor is able to handle almost 3 million lines
    per second. The lines were short, but gcc goes through
    piles of header files reasonably fast, probably much faster
    than you think.

    Testing -E is tricky, since the output is textual, and usually
    interspersed with # lines giving source line number info.

    What was in foo.c?

    In any case, I know that gcc can process headers reasonably fast
    (otherwise it would take 10 seconds to plough through windows.h at the
    speed of compiling code).

    But it's the sheer size and scale of some headers that is the problem.
    Why do you think precompiled headers were invented?

    Compiling this program:

    #define SDL_MAIN_HANDLED
    #include "SDL2/SDL.h"
    int main(){}

    took gcc 0.85 seconds on my machine (however hello.c takes 0.2 seconds)

    (SDL2 headers comprise 75 .h files and 50K lines; so about 75Kloc
    throughput.)

    Compiling this program:

    #include <windows.h>
    int main(){}

took 1.36 seconds. windows.h might comprise 100 or 165 unique headers,
    with 100-200K unique lines of code; I forget.

    These figures are for each module that uses those headers. (My bcc took
    0.07 seconds for the SDL test. The windows.h test can't be compared as
    my version of that header is much smaller.)


I also tried to compile a file containing 100000 declarations
    like:

    extern int a0(void);
    ....
    extern int a999999(void);

    Compilation of such file takes 1.737s, so about 575000 lines
    per second. So a lot of function declarations in header
    files should not slow gcc too much.

    I get these results (the test file has 'int main(){}' at the end):

    c:\c>tim gcc fred.c
    Time: 4.832

    c:\c>tim tcc fred.c
    Time: 4.620

    c:\c>tim bcc fred.c
    Compiling fred.c to fred.exe
    Time: 1.324

(Bear in mind that both gcc/tcc are presumably optimised code; bcc isn't.)

  • From Peter 'Shaggy' Haywood@21:1/5 to All on Sat Mar 22 19:07:18 2025
    Groovy hepcat DFS was jivin' in comp.lang.c on Wed, 19 Mar 2025 03:42
    pm. It's a cool scene! Dig it.

    On 3/18/2025 11:26 PM, Keith Thompson wrote:
    DFS <nospam@dfs.com> writes:

    There's your problem.

    https://cses.fi/problemset/text/2433

    "In all problems you should read input from standard input and write
    output to standard output."

    ha! It usually helps to read the instructions first.

    The autotester expects your program to read arguments from stdin, not
    from command line arguments.

    It probably passes no arguments to your program, so argv[1] is a null
    pointer. It's likely your program compiles (assuming the NBSP
    characters were added during posting) and crashes at runtime,
    producing no output.

    I KNEW clc would come through!

    Pretty easy fixes:

    1 use scanf()

    Normally I'd say take care with scanf(). But in this case, since the
    program is intended to be executed in an automated environment, it
    should be fine.
    The reason scanf() can be a bit iffy is that you can't control what a
    user will enter. If you search Google or Duck Duck Go for "comp.lang.c
    faq" you can find more information on this and other issues. (The FAQ
    is still out there, people..., somewhere...)

    2 update int to long
    3 handle special case of n = 1

    The problem definition doesn't mention any special case. You should, I
    think, treat 1 like any other number. So the output for 1 should be

    1 4 2 1

    4 instead of collecting the results in a char variable, I print
    them as they're calculated

    Yep, that's a more usual approach.
    Another suggestion I have is to use a separate function to do part of
    the work. But it's not vital.
    Also, since the specification says that only positive numbers are to
    be accepted, it makes sense (to me, at least) to use an unsigned type
    for n.
    One more thing: using while(1){...break;} is a bit pointless. You can
    use do{...}while(1 != n) instead.
    Here's my solution, for what it's worth:

    #include <stdio.h>

    unsigned long weird(unsigned long n)
    {
    printf("%lu", n);

    if(n & 1)
    {
    /* Odd - multiply by 3 & add 1. */
    n = n * 3 + 1;
    }
    else
    {
    /* Even - divide by 2. */
    n /= 2;
    }
    return n;
    }

    int main(void)
    {
    unsigned long n;

    /* Get n from stdin. */
    scanf("%lu", &n);

    /* Now feed it to the algorithm. */
    do
    {
    n = weird(n);
    putchar(' ');
    } while(1 != n);

    printf("%lu\n", n);
    return 0;
    }


    --


    ----- Dig the NEW and IMPROVED news sig!! -----


    -------------- Shaggy was here! ---------------
    Ain't I'm a dawg!!

  • From David Brown@21:1/5 to Keith Thompson on Sat Mar 22 14:41:40 2025
    On 22/03/2025 04:50, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:

    This is a C program using one of the extensions from my old compiler:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
    time_t t = clock();
    printf("%v\n", t); # (used to be '?'; changed to 'v')
    }

    The compiler replaces the 'v' format with a conventional format code
    according to the type of the expression. For my 'time_t', it happens
    to be 'lld'.

    That's nice. Seriously, it's nice. If it were added to a future
    edition of the language, I'd likely use it (once I could count on it
    being supported, which would take a while).


    I am sure it could be done using variadic macros, _Generic, and newer
    C23 features like "typeof". But it would be difficult to handle
    non-constant format strings, and may be difficult to do efficiently.
    (C++ now has something like that in the modern "format" library. Of
    course, it has the added power and complexity of handling user-defined
    types too.)

    It is certainly quite possible (in C11) to put together a macro so that
    you can do :

    int x = ... ;
    const char *s = ...
    double d = ...;

print("The result of ", s, " is ", x, " and ", d, "\r\n");

    I suspect the biggest challenge to getting that into the standard
    library would not be technical implementation in C11 - it would be
    getting C programmers to agree that it is a good idea!


    It would certainly be nice to have a less cumbersome way to specify
    formats in C - especially for the size-specific integer types. But I
    think it is likely that to do it /well/, you need a lot more apparatus
    than C has at the moment, or is likely to gain in the future.


    It would be very nice if C had some kind of more generic I/O that
    doesn't require remembering arbitrary format strings and qualifiers
    for each type, and that doesn't provide format strings for a lot of
    types in the standard library, and certainly not for types defined
    in user code. And I'd *love* it if a future C standard removed
    the undefined behavior for numeric input using *scanf(). Other C
    programmers will have different priorities than yours or mine.
    If I want to print a time_t value in C++, I just write
    `std::cout << t` and the compiler figures out which overloaded
    function to call.

    That's amazing.

    Not particularly. C has programmer-defined operator (and function)

    (You meant "C++" here?)

    overloading as a language feature. (There are IMHO some serious
    flaws in C++'s use of overloaded "<<" for output, but I won't go
    into that here.)

    [...]


  • From Richard Heathfield@21:1/5 to Peter 'Shaggy' Haywood on Sat Mar 22 13:25:43 2025
    On 22/03/2025 08:07, Peter 'Shaggy' Haywood wrote:

    <snip>

    Here's my solution, for what it's worth:

    #include <stdio.h>

    unsigned long weird(unsigned long n)
    {
    printf("%lu", n);

    if(n & 1)
    {
    /* Odd - multiply by 3 & add 1. */
    n = n * 3 + 1;

    Or you can save yourself a multiplication

    n = (n << 2) - (n - 1);

    potentially shaving entire picoseconds off the runtime.

    }
    else
    {
    /* Even - divide by 2. */
    n /= 2;

    do
    {
    n >>= 1;
} while (!(n & 1));

    ...and there goes another attosecond.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

  • From Tim Rentsch@21:1/5 to bart on Sat Mar 22 07:05:40 2025
    bart <bc@freeuk.com> writes:

    On 21/03/2025 01:47, Keith Thompson wrote:

    I just
    did a quick test comparing complation times for an empty program
    with no #include directives and an empty program with #include
    directives for <stdint.h> and <inttypes.h>. The difference was
    about 3 milliseconds. I quite literally could not care care less.

    I'm sorry but that's a really poor attitude, with bad
    consequences.

    I'm with Keith on this one, and I expect most other C
    developers are also.

    You're saying it doesn't matter how complex a header or
    set of headers is, even when out of proportion to the task.

    This paraphrasing doesn't match what Keith said. Whether
    deliberately or not, your interpretation is off base.

  • From Waldek Hebisch@21:1/5 to bart on Sat Mar 22 13:50:24 2025
    bart <bc@freeuk.com> wrote:
    On 22/03/2025 02:37, Waldek Hebisch wrote:
    bart <bc@freeuk.com> wrote:

    Sorry, but there's something wrong if you have to write all that to get
    a handful of fixed-width types. (And if this is for a multitude of
    targets, then there should be a dedicated header per target).

GCC's stdint.h is 200 lines. Mine is 75 lines. If these types were part
    of the core language, it would be zero lines.

You need a few (say 3) lines of compiler code to define a type.
    AFAICS there are 24 types in intXX... and uintXX... family.

    There's about 8, unless you include FAST/LEAST which are of interest to
    a minority of the minority who are even aware of them.

    Well, FAST/LEAST types are mandatory for standard compliance.

So about 72 lines in the compiler. For compatibility with older
    code you probably should define types under internal names
    and have 24 lines in stdint.h (+3 lines of include guard).

    It's a lot more than 72 lines in a compiler to support numeric types.

    But this is about exposing fixed-size type aliases to existing types,
    which can be done fairly tidily in a compiler,

    Yes, there is complex machinery to support types and compiler
    data structires in general. But once machinery is in place
    you need to create type node and insert it into symbol table.
    In gcc creation of type node may look like:

    sizetype = make_node (INTEGER_TYPE);
    TYPE_NAME (sizetype) = get_identifier ("sizetype");
    TYPE_PRECISION (sizetype) = precision;
    TYPE_UNSIGNED (sizetype) = 1;
    scalar_int_mode mode = smallest_int_mode_for_size (precision);
    SET_TYPE_MODE (sizetype, mode);
    SET_TYPE_ALIGN (sizetype, GET_MODE_ALIGNMENT (TYPE_MODE (sizetype)));
    TYPE_SIZE (sizetype) = bitsize_int (precision);
    TYPE_SIZE_UNIT (sizetype) = size_int (GET_MODE_SIZE (mode));
    set_min_and_max_values_for_integral_type (sizetype, precision, UNSIGNED);

    But such things are repeated for several types, so one can have
    function like 'create_integer_type_node' which is doing the above,
    but for type name and precision which are arguments to the function.

Of course, if you need to do something special to a type, then
    you may need a lot of code. But integer types should be handled
    anyway, so there is really no special code beyond what is already
    there.

    but not as tidily as when
    all the built-in types can be expressed as one token; some need multiple tokens.

    Unlike classic integer types, types in stdint.h have names which
    are single token, so all you need to do is to insert entry in the
    symbol table.

    Also C requires those aliases to be hidden unless a particular header is used.

Yes, so either one needs some mechanism to hide/expose builtin
identifiers, or (what is typically done) one needs to use
reserved names for builtin identifiers and use stdint.h to
define the standard name as an alias.

    Still, I was remarking on those Q8 headers requiring 1300 lines to add
    these aliases.

    I am not sure if Q8 really needed all those lines. But you
should take into account that it provided the types without
cooperation from the compiler.

    stdint.h defines quite a bit more than just types, so actual
    saving from having them built into compiler would be small,
    in particular since preprocessor is fast. On my machine
    I see 91 code lines after preprocessing of stdint.h. And
actually, several of the definitions like '__pid_t' go beyond C
and are needed by other headers. So, really, it is not a big
deal.

    BTW: I just tried

    /tmp$ time gcc -E foo.c | wc
    1000006 2000022 12889013

    real 0m0.359s
    user 0m0.351s
    sys 0m0.085s

    So gcc preprocessor is able to handle almost 3 million lines
per second. The lines were short, but gcc goes through
piles of header files reasonably fast, probably much faster
    than you think.

    Testing -E is tricky, since the output is textual, and usually
    interspersed with # lines giving source line number info.

    What was in foo.c?

    Just 1000000 lines of declarations (no includes etc.). The
    point was that skipping input is easier than copying things to
the output, so that should give a reasonable estimate of the time
    spent on parts that disappear after preprocessing.

    In any case, I know that gcc can process headers reasonably fast
    (otherwise it would take 10 seconds to plough through windows.h at the
    speed of compiling code).

    But it's the sheer size and scale of some headers that is the problem.
    Why do you think precompiled headers were invented?

Some headers are pretty big. But the fact that at source level
stdint.h has some hundreds of lines is not a big problem; it
is still rather small. Note that it is convenient to have
common headers for variants of an architecture; on my Linux
    I can run 3 kinds of programs: classic 32-bit ones, 64-bit ones
    and x32 (which uses 32-bit addresses, but uses 64-bit
    instructions). Each needs slightly different definitions
    in header files. This is handled by conditionals in
    header files. And in fact large part of headers are shared
with different architectures (and even different OS-es using
    the same libc).

    Compiling this program:

    #define SDL_MAIN_HANDLED
    #include "SDL2/SDL.h"
    int main(){}

    took gcc 0.85 seconds on my machine (however hello.c takes 0.2 seconds)

    (SDL2 headers comprise 75 .h files and 50K lines; so about 75Kloc throughput.)

    Compiling this program:

    #include <windows.h>
    int main(){}

took 1.36 seconds. windows.h might comprise 100 or 165 unique headers,
    with 100-200K unique lines of code; I forget.

    These figures are for each module that uses those headers. (My bcc took
    0.07 seconds for the SDL test. The windows.h test can't be compared as
    my version of that header is much smaller.)


I also tried to compile a file containing 100000 declarations
    ^^^^^^
    Oops, should be 1000000.

    like:

    extern int a0(void);
    ....
    extern int a999999(void);

    Compilation of such file takes 1.737s, so about 575000 lines
    per second. So a lot of function declarations in header
    files should not slow gcc too much.

    I get these results (the test file has 'int main(){}' at the end):

    c:\c>tim gcc fred.c
    Time: 4.832

    c:\c>tim tcc fred.c
    Time: 4.620

    c:\c>tim bcc fred.c
    Compiling fred.c to fred.exe
    Time: 1.324

(Bear in mind that both gcc/tcc are presumably optimised code; bcc isn't.)

    On my machine tcc preprocesses probably about 20% faster than gcc.
    For timing compilation I used 'gcc -c' to generate linkable object
file, so no need to have 'main'. When actually compiling on my
    machine tcc is significantly faster, on the same file 'tcc -c'
    takes 0.418s, so more than 4 times faster than 'gcc -c'.

    I do not know why 'tcc' is that much faster. One possibility
    is that 'gcc' contains various extra info in compiler data
    structures that help when optimizing, but require extra effort
even when not optimizing.

    --
    Waldek Hebisch

  • From Waldek Hebisch@21:1/5 to Kaz Kylheku on Sat Mar 22 14:07:43 2025
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:
    On 2025-03-22, bart <bc@freeuk.com> wrote:

    The point is that there was usually a size exactly double the width of
'int', but it became less necessary when 'int' reached 64 bits.

    Yes, because other than for masks of 128 bits, there isn't a whole
    lot of stuff you can *count* for which you need such large integers.

    Money?

    In an accounting system, if you use signed 64 bits for pennies, you can
    go to 9.2 x 10^16 dollars.

Actually, to do fast division of an N-bit number by a fixed N-bit number
one needs 2N-bit multiplication. Such divisions appear in base
conversions (to decimal) and when doing "decimal" rounding.
Also, converting between currencies needs extra accuracy. So
_fast_ financial arithmetic may need a rather large number of digits;
the current tendency is to allow up to 37 digits in intermediate quantities.

    Note that 64-bit _result_ type means that 32-bit integers are
    largest that can be multiplied exactly, that is very limiting.

    Large integers are needed for crypto and such, but then 128 isn't enough anyway.

Double word arithmetic is crucial if you want an efficient high-level
implementation of multiple precision arithmetic. So, 64-bit is
enough on 32-bit machines, 128-bit is needed on 64-bit machines,
and hypothetical 128-bit machines would need a 256-bit integer
    type as a building block for efficient multiple precision
    arithmetic.

    --
    Waldek Hebisch

  • From DFS@21:1/5 to Peter 'Shaggy' Haywood on Sat Mar 22 10:29:13 2025
    On 3/22/2025 4:07 AM, Peter 'Shaggy' Haywood wrote:
    Groovy hepcat DFS was jivin' in comp.lang.c on Wed, 19 Mar 2025 03:42
    pm. It's a cool scene! Dig it.

    On 3/18/2025 11:26 PM, Keith Thompson wrote:
    DFS <nospam@dfs.com> writes:

    There's your problem.

    https://cses.fi/problemset/text/2433

    "In all problems you should read input from standard input and write
    output to standard output."

    ha! It usually helps to read the instructions first.

    The autotester expects your program to read arguments from stdin, not
    from command line arguments.

    It probably passes no arguments to your program, so argv[1] is a null
    pointer. It's likely your program compiles (assuming the NBSP
    characters were added during posting) and crashes at runtime,
    producing no output.

    I KNEW clc would come through!

    Pretty easy fixes:

    1 use scanf()

    Normally I'd say take care with scanf(). But in this case, since the program is intended to be executed in an automated environment, it
    should be fine.
    The reason scanf() can be a bit iffy is that you can't control what a
    user will enter. If you search Google or Duck Duck Go for "comp.lang.c
    faq" you can find more information on this and other issues. (The FAQ
    is still out there, people..., somewhere...)

    https://c-faq.com/

    I still see links to that document from time to time, like on university websites.


    2 update int to long
    3 handle special case of n = 1

    The problem definition doesn't mention any special case. You should, I think, treat 1 like any other number. So the output for 1 should be

    1 4 2 1


    It's a 'special case' because n is already 1.

    Your code passed all CSES tests but this one.



    4 instead of collecting the results in a char variable, I print
    them as they're calculated

    Yep, that's a more usual approach.
    Another suggestion I have is to use a separate function to do part of
    the work. But it's not vital.
    Also, since the specification says that only positive numbers are to
    be accepted, it makes sense (to me, at least) to use an unsigned type
    for n.
    One more thing: using while(1){...break;} is a bit pointless. You can
    use do{...}while(1 != n) instead.
    Here's my solution, for what it's worth:

    #include <stdio.h>

    unsigned long weird(unsigned long n)
    {
    printf("%lu", n);

    if(n & 1)
    {
    /* Odd - multiply by 3 & add 1. */
    n = n * 3 + 1;
    }
    else
    {
    /* Even - divide by 2. */
    n /= 2;
    }
    return n;
    }

    int main(void)
    {
    unsigned long n;

    /* Get n from stdin. */
    scanf("%lu", &n);

    /* Now feed it to the algorithm. */
    do
    {
    n = weird(n);
    putchar(' ');
    } while(1 != n);

    printf("%lu\n", n);
    return 0;
    }


    Cool.


I tweaked my original and got it down to:
--------------------------------------------------------
    #include <stdio.h>

    int main(void)
    {
    long n = 0;
    scanf("%ld", &n);
    while(n > 1) {
    printf("%ld ",n);
    n = (n % 2) ? (n * 3 + 1) : (n / 2);
    }
    printf("1\n");
    return 0;
    }
    --------------------------------------------------------

    I also liked the Number Spiral, Coin Piles and Palindrome Reorder problems.


    Thanks for the input!

  • From Scott Lurndal@21:1/5 to Waldek Hebisch on Sat Mar 22 14:22:54 2025
    antispam@fricas.org (Waldek Hebisch) writes:
    bart <bc@freeuk.com> wrote:
    On 21/03/2025 17:51, Waldek Hebisch wrote:
    bart <bc@freeuk.com> wrote:

    (Probably intptr_t or ssize_t is better for
    that purpose, and will be portable between Windows and Linux.)

    Those did not exist in 1991 and would be needed only for small
    machines. Except for Microsoft which decided to push its own,
    different way.

ssize_t existed before 1990 (in SVR4). intptr_t came later.

    $ grep ssize_t common/head/*
    common/head/aio.h: ssize_t aio__return; /* operation result value */
common/head/unistd.h:extern ssize_t read(int, void *, size_t);
common/head/unistd.h:extern ssize_t write(int, const void *, size_t);


    One 'con' for Linux' approach is when someone assumes 'long' is i32;
    when they run code on 64 bits, it will either be wasteful, or it could
    go badly wrong.

    One 'con' of any assumption is that somebody can make a different
    assumption.

    Programmers shouldn't be making 'assumptions' in the first place.

    The architectural ABI describes fully the capabilities of the
    native types. If the programmer isn't aware of that, they
    shouldn't be programming.

  • From Richard Heathfield@21:1/5 to Scott Lurndal on Sat Mar 22 14:32:01 2025
    On 22/03/2025 14:22, Scott Lurndal wrote:
    Programmers shouldn't be making 'assumptions' in the first place.

    The architectural ABI describes fully the capabilities of the
    native types. If the programmer isn't aware of that, they
    shouldn't be programming.

    Did programmer just assume that all programmers are told on which
    platforms their code will be run? Not all C programmers are so
    pampered.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

  • From Scott Lurndal@21:1/5 to Peter 'Shaggy' Haywood on Sat Mar 22 14:30:13 2025
    Peter 'Shaggy' Haywood <phaywood@alphalink.com.au> writes:
    Groovy hepcat DFS was jivin' in comp.lang.c on Wed, 19 Mar 2025 03:42
    pm. It's a cool scene! Dig it.

    On 3/18/2025 11:26 PM, Keith Thompson wrote:
    DFS <nospam@dfs.com> writes:

    There's your problem.

    https://cses.fi/problemset/text/2433

    "In all problems you should read input from standard input and write
    output to standard output."

    ha! It usually helps to read the instructions first.

    The autotester expects your program to read arguments from stdin, not
    from command line arguments.

    It probably passes no arguments to your program, so argv[1] is a null
    pointer. It's likely your program compiles (assuming the NBSP
    characters were added during posting) and crashes at runtime,
    producing no output.

    I KNEW clc would come through!

    Pretty easy fixes:

    1 use scanf()

    Normally I'd say take care with scanf(). But in this case, since the
    program is intended to be executed in an automated environment, it
    should be fine.
    The reason scanf() can be a bit iffy is that you can't control what a
    user will enter. If you search Google or Duck Duck Go for "comp.lang.c
    faq" you can find more information on this and other issues. (The FAQ
    is still out there, people..., somewhere...)

    2 update int to long
    3 handle special case of n = 1

    The problem definition doesn't mention any special case. You should, I
    think, treat 1 like any other number. So the output for 1 should be

    1 4 2 1

    4 instead of collecting the results in a char variable, I print
    them as they're calculated

    Yep, that's a more usual approach.
    Another suggestion I have is to use a separate function to do part of
    the work. But it's not vital.
    Also, since the specification says that only positive numbers are to
    be accepted, it makes sense (to me, at least) to use an unsigned type
    for n.
    One more thing: using while(1){...break;} is a bit pointless. You can
    use do{...}while(1 != n) instead.
    Here's my solution, for what it's worth:

    And here's mine.

#include <stdio.h>

int
    main(int argc, const char **argv)
    {
    unsigned int n;
    scanf("%u", &n);
    printf("%u ", n);
    do {
    n = (n & 1u) ? n * 3u + 1u : n >> 1u;
    printf("%u ", n);
    } while (n != 1u);
    putchar('\n');
    return 0;
    }

  • From DFS@21:1/5 to Scott Lurndal on Sat Mar 22 11:31:48 2025
    On 3/22/2025 10:30 AM, Scott Lurndal wrote:

    And here's mine.

    int
    main(int argc, const char **argv)
    {
    unsigned int n;
    scanf("%u", &n);
    printf("%u ", n);
    do {
    n = (n & 1u) ? n * 3u + 1u : n >> 1u;
    printf("%u ", n);
    } while (n != 1u);
    putchar('\n');
    return 0;
    }


    Wrong answer on 5 of 14 CSES tests.

  • From bart@21:1/5 to Waldek Hebisch on Sat Mar 22 15:47:24 2025
    On 22/03/2025 13:50, Waldek Hebisch wrote:
    bart <bc@freeuk.com> wrote:
    On 22/03/2025 02:37, Waldek Hebisch wrote:
    bart <bc@freeuk.com> wrote:

Sorry, but there's something wrong if you have to write all that to get
a handful of fixed-width types. (And if this is for a multitude of
    targets, then there should be a dedicated header per target).

GCC's stdint.h is 200 lines. Mine is 75 lines. If these types were part
of the core language, it would be zero lines.

    You need few (say 3) lines of compiler code to define a type.
    AFAICS there are 24 types in intXX... and uintXX... family.

    There's about 8, unless you include FAST/LEAST which are of interest to
    a minority of the minority who are even aware of them.

    Well, FAST/LEAST types are mandatory for standard compliance.

So about 72 lines in the compiler. For compatibility with older
    code you probably should define types under internal names
    and have 24 lines in stdint.h (+3 lines of include guard).

    It's a lot more than 72 lines in a compiler to support numeric types.

    But this is about exposing fixed-size type aliases to existing types,
    which can be done fairly tidily in a compiler,

    Yes, there is complex machinery to support types and compiler
data structures in general. But once the machinery is in place
    you need to create type node and insert it into symbol table.
    In gcc creation of type node may look like:

    sizetype = make_node (INTEGER_TYPE);
    TYPE_NAME (sizetype) = get_identifier ("sizetype");
    TYPE_PRECISION (sizetype) = precision;
    TYPE_UNSIGNED (sizetype) = 1;
    scalar_int_mode mode = smallest_int_mode_for_size (precision);
    SET_TYPE_MODE (sizetype, mode);
    SET_TYPE_ALIGN (sizetype, GET_MODE_ALIGNMENT (TYPE_MODE (sizetype)));
    TYPE_SIZE (sizetype) = bitsize_int (precision);
    TYPE_SIZE_UNIT (sizetype) = size_int (GET_MODE_SIZE (mode));
    set_min_and_max_values_for_integral_type (sizetype, precision, UNSIGNED);

    But such things are repeated for several types, so one can have
    function like 'create_integer_type_node' which is doing the above,
    but for type name and precision which are arguments to the function.

Of course, if you need to do something special to a type, then
    you may need a lot of code. But integer types should be handled
    anyway, so there is really no special code beyond what is already
    there.

    but not as tidily as when
    all the built-in types can be expressed as one token; some need multiple
    tokens.

    Unlike classic integer types, types in stdint.h have names which
    are single token, so all you need to do is to insert entry in the
    symbol table.

    I've done an experiment to add such types to my C compiler. It's
    slightly tricky because "int" etc are not independent types; they are
    special tokens that work together to form a full type spec.

    Anyway it involved these additional lines. First in the symbol table
    (what is used to initialise the main ST):

    ("int8", kstdtypesym, ti8),
    ("int16", kstdtypesym, ti16),
    ("int32", kstdtypesym, ti32),
    ("int64", kstdtypesym, ti64),

    ("uint8", kstdtypesym, tu8),
    ("uint16", kstdtypesym, tu16),
    ("uint32", kstdtypesym, tu32),
    ("uint64", kstdtypesym, tu64),

    (I didn't add _t to avoid a clash; this is still a working compiler.)

    In the set of tokens:

    (kstdtypesym, $, "k", 0),

    In the parser, for the code that deals with a typespec:

    when kstdtypesym then
    d.typeno := lx.subcode
    lex()

    Then a further 3 lines were modified to add 'kstdtypesym' to a list of type-starter tokens.

    So in all, 12 new lines were added, and 3 lines modified. The extra 16
    types of stdint.h would need 16 extra lines in the symbol table. The
    types there are hard-coded: the compiler knows its target!

    (Some compilers may work with multiple targets, then it would be a
    little more elaborate.)

    The size of the compiler increased by 38 code bytes, and 238 data bytes
    (or 254 if "_t" was used!).

    The corresponding portion of stdint.h is 240 bytes, so actually there
    isn't much in it. It's just a more professional way of doing this stuff,
    and now the types really are part of the core language (support for
    suffixes and format codes is still needed).

    However adding a feature such as MAXOF(T) definitely would be more
    efficient than those dozens of macros, and replaces much of limits.h
    too. Unfortunately you can't just add features like this.

    I also tried to compile file contaning 100000 declarations
    ^^^^^^
    Oops, should be 1000000.

    Yeah, I got that!

    like:

    extern int a0(void);
    ....
    extern int a999999(void);

    Compilation of such file takes 1.737s, so about 575000 lines
    per second. So a lot of function declarations in header
    files should not slow gcc too much.

    I get these results (the test file has 'int main(){}' at the end):

    c:\c>tim gcc fred.c
    Time: 4.832

    c:\c>tim tcc fred.c
    Time: 4.620

    c:\c>tim bcc fred.c
    Compiling fred.c to fred.exe
    Time: 1.324

    Bear in mind that both gcc/tcc are presumably optimised code; bcc isn't)

    On my machine tcc preprocesses probably about 20% faster than gcc.

    Given that now tcc has a quite conformant preprocessor (unlike 0.9.26
    which was very buggy), I had a theory that many compilers are just
    sharing the same one working implementation of it!

    A difference of 20% is not significant when gcc is involved; the latter
    could be doing all sorts of extra things.

    For timing compilation I used 'gcc -c' to generate linkable object
    file, so no need to have 'main'.

    I created EXEs since on bcc there was a tiny bug when generating OBJ
    without 'main', which meant I couldn't time it. (I'll deal with that later.)

    However if I leave the main() in, then all can generate OBJ files
    instead, and the timings are pretty much the same.

    When actually compiling on my
    machine tcc is significantly faster, on the same file 'tcc -c'
    takes 0.418s, so more than 4 times faster than 'gcc -c'.

    I do not know why 'tcc' is that much faster.

    On mine it's roughly the same as gcc, even a locally built tcc. That was surprising. I thought it might be a poor hash function, as the 1M names
    are very similar. I tried a version with 1M random function names, but
    tcc was even slower (7.x seconds); gcc was the same, and bcc was 1.6
    seconds.

    If this was my product, I would investigate what was slowing it down.

    One possibility
    is that 'gcc' contains various extra info in compiler data
    structures that help when optimizing, but require extra effort
    even when not optimizing.

    My test was with gcc 14.1. gcc 10.3 was about 4 seconds; faster than
    tcc! (Yeah, something 'off' there.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Scott Lurndal on Sat Mar 22 16:25:33 2025
    On 22/03/2025 14:22, Scott Lurndal wrote:
    antispam@fricas.org (Waldek Hebisch) writes:
    bart <bc@freeuk.com> wrote:
    On 21/03/2025 17:51, Waldek Hebisch wrote:
    bart <bc@freeuk.com> wrote:

    (Probably intptr_t or ssize_t is better for
    that purpose, and will be portable between Windows and Linux.)

    Those did not exist in 1991 and would be needed only for small
    machines. Except for Microsoft which decided to push its own,
    different way.

    ssize_t existed before 1990 (in SVR4). intptr_t came later.

    $ grep ssize_t common/head/*
    common/head/aio.h: ssize_t aio__return; /* operation result value */
    common/head/unistd.h:extern ssize_t read(int, void *, size_t);
    common/head/unistd.h:extern ssize_t write(int, const void *, size_t);


    One 'con' for Linux' approach is when someone assumes 'long' is i32;
    when they run code on 64 bits, it will either be wasteful, or it could
    go badly wrong.

    One 'con' of any assumption is that somebody can make a different
    assumption.

    Programmers shouldn't be making 'assumptions' in the first place.

    The architectural ABI describes fully the capabilities of the
    native types. If the programmer isn't aware of that, they
    shouldn't be programming.


    This is 'making assumptions' when the code is subsequently run on a
    different platform.

    And those assumptions made by a million programmers in myriad codebases
    are behind decisions on whether to increase 'int' and/or 'long' when implementing the language on a new architecture with wider types.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Keith Thompson on Sat Mar 22 17:00:26 2025
    On 20/03/2025 23:18, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 20/03/2025 19:10, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    stdint.h et al are just ungainly bolt-ons, not fully supported by the
    language.
    No, they're fully supported by the language. They've been in the ISO
    standard since 1999.

    I don't think so. They are add-ons that could have been created in
    user-code even prior to C99 (user-defined typedefs for 64 bits would
    'need long long').

    Sure, they could; see Doug Gwyn's q8, for example.

    All that's happened is that 'stdint.h' has been blessed.

    I.e., it was made part of the language, specifically the standard
    library that's part of the language standard. Which is what I said,
    but for some reason you disagreed.

    Yes, the format specifiers are a bit awkward. Boo hoo.

    There's a further problem here:

    -------------------------------------
    #include <stdio.h>
    #include <stdint.h>

    #define strtype(x) _Generic((x),\
    default: "other",\
    char: "char",\
    signed char: "signed char",\
    short: "short",\
    int: "int",\
    long: "long",\
    long long: "long long",\
    unsigned char: "unsigned char",\
    unsigned short: "unsigned short",\
    unsigned int: "unsigned int",\
    unsigned long: "unsigned long",\
    unsigned long long: "unsigned long long",\
    int8_t: "int8",\
    int16_t: "int16",\
    int32_t: "int32",\
    int64_t: "int64",\
    uint8_t: "uint8",\
    uint16_t: "uint16",\
    uint32_t: "uint32",\
    uint64_t: "uint64"\
    )

    int main(void) {
    uint64_t x;

    puts(strtype(x));
    }
    -------------------------------------

    Many of the types are aliases of each other, so some of these _Generic
    associations are duplicates and the selection won't even compile.
    Which ones are aliases can vary by platform.

    This is another reason that this was a poor solution.

    (My language also has some aliases, but they are defined as such.

    The built-in types are the set 'i8 ... u64', with aliases defined on top
    such as 'byte' for 'u8', and 'int' for 'i64' on a particular implementation.

    This is the opposite of how it works in stdint.h, where specific-width
    types are defined on top of non-specific-width ones! It's just backwards.

    It also doesn't need _Generic for this purpose; this will work:)

    u64 x
    puts(x.typestr)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Richard Heathfield on Sat Mar 22 19:12:13 2025
    On Sat, 22 Mar 2025 13:25:43 +0000
    Richard Heathfield <rjh@cpax.org.uk> wrote:

    On 22/03/2025 08:07, Peter 'Shaggy' Haywood wrote:

    <snip>

    Here's my solution, for what it's worth:

    #include <stdio.h>

    unsigned long weird(unsigned long n)
    {
    printf("%lu", n);

    if(n & 1)
    {
    /* Odd - multiply by 3 & add 1. */
    n = n * 3 + 1;

    Or you can save yourself a multiplication

    n = (n << 2) - (n - 1);

    potentially shaving entire picoseconds off the runtime.

    }
    else
    {
    /* Even - divide by 2. */
    n /= 2;

    do
    {
    n >>= 1;
    } while(!(n & 1));  /* '!' rather than '~': ~(n & 1) is always nonzero */

    ...and there goes another attosecond.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Scott Lurndal on Sat Mar 22 19:19:40 2025
    On Sat, 22 Mar 2025 14:30:13 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    Peter 'Shaggy' Haywood <phaywood@alphalink.com.au> writes:
    Groovy hepcat DFS was jivin' in comp.lang.c on Wed, 19 Mar 2025 03:42
    pm. It's a cool scene! Dig it.

    On 3/18/2025 11:26 PM, Keith Thompson wrote:
    DFS <nospam@dfs.com> writes:

    There's your problem.

    https://cses.fi/problemset/text/2433

    "In all problems you should read input from standard input and
    write output to standard output."

    ha! It usually helps to read the instructions first.

    The autotester expects your program to read arguments from stdin,
    not from command line arguments.

    It probably passes no arguments to your program, so argv[1] is a
    null pointer. It's likely your program compiles (assuming the
    NBSP characters were added during posting) and crashes at runtime,
    producing no output.

    I KNEW clc would come through!

    Pretty easy fixes:

    1 use scanf()

    Normally I'd say take care with scanf(). But in this case, since the
    program is intended to be executed in an automated environment, it
    should be fine.

    The reason scanf() can be a bit iffy is that you can't control what a
    user will enter. If you search Google or Duck Duck Go for
    "comp.lang.c faq" you can find more information on this and other
    issues. (The FAQ is still out there, people..., somewhere...)

    2 update int to long
    3 handle special case of n = 1

    The problem definition doesn't mention any special case. You should,
    I think, treat 1 like any other number. So the output for 1 should be

    1 4 2 1

    4 instead of collecting the results in a char variable, I print
    them as they're calculated

    Yep, that's a more usual approach.

    Another suggestion I have is to use a separate function to do part of
    the work. But it's not vital.

    Also, since the specification says that only positive numbers are to
    be accepted, it makes sense (to me, at least) to use an unsigned type
    for n.

    One more thing: using while(1){...break;} is a bit pointless. You can
    use do{...}while(1 != n) instead.
    Here's my solution, for what it's worth:

    And here's mine.

    int
    main(int argc, const char **argv)
    {
    unsigned int n;
    scanf("%u", &n);
    printf("%u ", n);
    do {
    n = (n & 1u) ? n * 3u + 1u : n >> 1u;
    printf("%u ", n);
    } while (n != 1u);
    putchar('\n');
    return 0;
    }


    It looks like you didn't follow the thread.
    Your code is buggy because type 'unsigned int' is too narrow.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Richard Heathfield on Sat Mar 22 19:17:31 2025
    On Sat, 22 Mar 2025 13:25:43 +0000
    Richard Heathfield <rjh@cpax.org.uk> wrote:


    Or you can save yourself a multiplication

    n = (n << 2) - (n - 1);

    potentially shaving entire picoseconds off the runtime.


    Unlikely.
    More likely, your transformation will confuse the compiler into
    generating suboptimal code.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to Michael S on Sat Mar 22 17:22:53 2025
    On 22/03/2025 17:17, Michael S wrote:
    On Sat, 22 Mar 2025 13:25:43 +0000
    Richard Heathfield <rjh@cpax.org.uk> wrote:


    Or you can save yourself a multiplication

    n = (n << 2) - (n - 1);

    potentially shaving entire picoseconds off the runtime.


    Unlikely.
    More likely, your transformation will confuse the compiler into
    generating suboptimal code.

    Quite possibly. But does this face look bothered?

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Tim Rentsch on Sat Mar 22 16:46:28 2025
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:

    Richard Heathfield <rjh@cpax.org.uk> writes:

    On 20/03/2025 13:06, Tim Rentsch wrote:
    [...]

    The C99 standard has a list of 54 what it calls "major changes",
    although IMO many or most of those are fairly minor. There are
    also other differences relative to C90, but most of them are
    simply clarifications or slight changes in wording.

    Those I largely recall from discussions at the time, but I dare
    to conclude that your lack of a reference to C11, C17, and C23
    means that they had a lesser effect on the language than I'd
    feared.

    [...]

    As it turns out, the C11 standard lists only 15 "major changes" (if
    my quick counting is correct), so your conclusion that later
    versions have had a lesser effect appears to be correct, at least as
    far as C11 goes. If I have time I may post again on this topic,
    doing for the C11 standard what I did for the C99 standard.

    Here is my summary of the corresponding list in the C11 standard
    (descriptive headings represent my own views on each area):


    Changes every user of C11 will or should want to know about (they
    may choose not to use any particular items, but every one is
    important to at least know about)

    REMOVED: the gets() library function
    SUPPORT MADE CONDITIONAL: complex numbers, VLAs and VMTs

    unicode characters and strings

    a new form of expression, _Generic, for type-generic expressions

    anonymous structures and unions (which are members of an
    enclosing struct or union that do not themselves have a
    member name)

    static assertions
    [but there are well-known techniques for providing SA in C90/C99]

    no-return functions (guaranteed not to return to their caller)

    support for opening files for exclusive access


    Significant new functionality that some but not all people care about

    support for multiple threads of execution

    querying and specifying object alignment (also aligned_alloc library
    function)

    facilities for "fast path" program exit (quick_exit, at_quick_exit)


    Minor additions probably of interest to relatively few people

    macros defining additional characteristics of floating point (in
    float.h)

    macros to create complex numbers


    Conditionally supported Annexes - perhaps important to some, but
    mostly not essential

    bounds-checking variants of several standard library functions

    language functionality extensions meant to help with analyzability
    of source code

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Keith Thompson on Sun Mar 23 01:34:54 2025
    On 22/03/2025 21:52, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 22/03/2025 03:50, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 21/03/2025 19:04, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    [...]
    I think your response clarifies matters. Nobody cares, even as
    compilers grind to a halt under all preprocessing.
    If compilers ground to a halt, I would certainly care. They don't.

    50 modules each including GTK.h say, which was 0.33Mloc across 500
    headers (so reading 16Mloc and 25,000 headers in total when all are
    compiled) would not impact your builds at all? OK.
    First you talked about compilers grinding to a halt, then you talked
    about headers not impacting builds at all. Those goalposts of yours
    move *fast*.

    You missed the "?".

    No, I didn't. You insinuated that I had implied that large headers
    would not impact builds. You phrased it as if you were questioning
    something I had said. I don't care to debate this specific issue
    further, but please be more careful.

    For the record, as you can see above, I did not say that builds would
    not be impacted. Do not put words into my mouth again.

    Let me ask it again: so ploughing through a third of a million lines
    of code across hundreds of #includes, even at the faster throughput
    compared with compiling code, for a module of a few hundred lines,
    will have little impact?

    How about a project with 50 or 100 modules, each using that big
    header, that needs to be built from scratch?

    I don't know. I haven't taken the time to measure the impact,
    and if I did there wouldn't be much I could do about it. It hasn't particularly inconvenienced me.

    Most of the builds I do from source are straightforward. The general
    pattern is roughly "configure;make;sudo make install". I have
    a wrapper script that does that, figures out what arguments to
    pass, adds some extra options, and so forth. I'm aware configure
    scripts can be very large (the one for coreutils is close to 95,000
    lines), but I really don't care or see why I should. It works.
    Building coreutils and installing 9.6 on my system took about 5
    minutes, during which I was able to go off and do other things.

    It's strange: in one part of the computing world, the speed of building software is a very big deal. All sorts of efforts are going on to deal
    with it. Compilation speed for developers is always an issue. There is a general movement away from LLVM-based backends /because/ it is so slow.

    And yet in another part (namely comp.lang.c) it appears to be a total non-issue!

    Maybe people are just so inured to lengthy build-times that they think
    it is quite normal. Or the slowness is hidden behind clever makefiles
    that do their utmost to avoid compiling as much as possible. Or they
    just throw extra resources (fast machines and lots of cores) to mitigate it.

    Could the whole thing be streamlined? Could a new version of the
    autotools be developed that doesn't cater as much to obsolete systems
    and generates smaller configure scripts? Would it substantially
    improve build times? Quite possibly, but I'm not likely to be the
    one to work out the details (unless someone wants to hire me to
    look into it).

    Build times are one part; ease of deployment is another. This is the
    API for SDL2 as translated from C headers into my language:

    C:\sdl>dir sdl.m
    8/07/2024 13:15 158,055 sdl.m


    Below is the API in its original C form. The above file is much easier
    to work with, copy, bundle, browse etc. I ask you, why would anybody
    care about all those separate headers? What purpose they do serve?

    (This SDL library is small compared with some! At least all the files
    are in one folder.)


    [...]

    So let me ask /this/ again: if such a library consisted of ONE compact
    header file, would it make the process simpler? Would it make
    compilation of multiple modules faster?)

    I don't know.

    But it seems likely that it would. It's hardly going to make it slower!

    When I have time, I will look into it myself. There is already a feature
    of my C compiler that can reduce a complex set of headers into one file [demonstrated above], but in the syntax of one of my two languages.

    That could be modified to generate C instead. (This is not just
    preprocessing; it needs to build a symbol table then work from that,
    taking care to preserve macro definitions, and suppressing system header symbols.)


    ------------------
    c:\sdl>dir sdl2
    14/06/2023 19:02 5,512 begin_code.h
    14/06/2023 19:02 1,480 close_code.h
    14/06/2023 19:02 8,084 SDL.h
    14/06/2023 19:02 12,455 SDL_assert.h
    14/06/2023 19:02 14,796 SDL_atomic.h
    14/06/2023 19:02 59,694 SDL_audio.h
    14/06/2023 19:02 3,205 SDL_bits.h
    14/06/2023 19:02 9,047 SDL_blendmode.h
    14/06/2023 19:02 4,307 SDL_clipboard.h
    14/06/2023 19:02 8,894 SDL_config.h
    14/06/2023 19:02 17,458 SDL_cpuinfo.h
    14/06/2023 19:02 108,871 SDL_egl.h
    14/06/2023 19:02 9,802 SDL_endian.h
    14/06/2023 19:02 5,177 SDL_error.h
    14/06/2023 19:02 47,284 SDL_events.h
    14/06/2023 19:02 5,533 SDL_filesystem.h
    14/06/2023 19:02 40,039 SDL_gamecontroller.h
    14/06/2023 19:02 3,418 SDL_gesture.h
    14/06/2023 19:02 3,146 SDL_guid.h
    14/06/2023 19:02 43,268 SDL_haptic.h
    14/06/2023 19:02 17,842 SDL_hidapi.h
    14/06/2023 19:02 110,398 SDL_hints.h
    14/06/2023 19:02 39,015 SDL_joystick.h
    14/06/2023 19:02 11,044 SDL_keyboard.h
    14/06/2023 19:02 15,629 SDL_keycode.h
    14/06/2023 19:02 3,908 SDL_loadso.h
    14/06/2023 19:02 3,812 SDL_locale.h
    14/06/2023 19:02 11,684 SDL_log.h
    14/06/2023 19:02 8,809 SDL_main.h
    14/06/2023 19:02 6,693 SDL_messagebox.h
    14/06/2023 19:02 3,380 SDL_metal.h
    14/06/2023 19:02 2,845 SDL_misc.h
    14/06/2023 19:02 17,087 SDL_mouse.h
    14/06/2023 19:02 14,286 SDL_mutex.h
    14/06/2023 19:02 1,155 SDL_name.h
    14/06/2023 19:02 81,091 SDL_opengl.h
    14/06/2023 19:02 1,254 SDL_opengles.h
    14/06/2023 19:02 1,606 SDL_opengles2.h
    14/06/2023 19:02 42,938 SDL_opengles2_gl2.h
    14/06/2023 19:02 241,221 SDL_opengles2_gl2ext.h
    14/06/2023 19:02 646 SDL_opengles2_gl2platform.h
    14/06/2023 19:02 11,131 SDL_opengles2_khrplatform.h
    14/06/2023 19:02 863,870 SDL_opengl_glext.h
    14/06/2023 19:02 24,522 SDL_pixels.h
    28/07/2024 12:43 6,746 SDL_platform.h
    14/06/2023 19:02 3,199 SDL_power.h
    14/06/2023 19:02 2,106 SDL_quit.h
    14/06/2023 19:02 12,860 SDL_rect.h
    14/06/2023 19:02 73,762 SDL_render.h
    14/06/2023 19:02 243 SDL_revision.h
    14/06/2023 19:02 28,101 SDL_rwops.h
    14/06/2023 19:02 16,929 SDL_scancode.h
    14/06/2023 19:02 10,265 SDL_sensor.h
    14/06/2023 19:02 5,904 SDL_shape.h
    14/06/2023 19:02 29,523 SDL_stdinc.h
    14/06/2023 19:02 36,798 SDL_surface.h
    14/06/2023 19:02 20,772 SDL_system.h
    14/06/2023 19:02 11,506 SDL_syswm.h
    14/06/2023 19:02 2,000 SDL_test.h
    14/06/2023 19:02 3,235 SDL_test_assert.h
    14/06/2023 19:02 6,872 SDL_test_common.h
    14/06/2023 19:02 2,163 SDL_test_compare.h
    14/06/2023 19:02 3,385 SDL_test_crc32.h
    14/06/2023 19:02 5,432 SDL_test_font.h
    14/06/2023 19:02 13,203 SDL_test_fuzzer.h
    14/06/2023 19:02 4,634 SDL_test_harness.h
    14/06/2023 19:02 2,215 SDL_test_images.h
    14/06/2023 19:02 1,954 SDL_test_log.h
    14/06/2023 19:02 4,630 SDL_test_md5.h
    14/06/2023 19:02 1,787 SDL_test_memory.h
    14/06/2023 19:02 3,156 SDL_test_random.h
    14/06/2023 19:02 17,273 SDL_thread.h
    14/06/2023 19:02 7,290 SDL_timer.h
    14/06/2023 19:02 4,506 SDL_touch.h
    14/06/2023 19:02 1,031 SDL_types.h
    14/06/2023 19:02 6,870 SDL_version.h
    14/06/2023 19:02 79,917 SDL_video.h
    14/06/2023 19:02 8,540 SDL_vulkan.h
    78 File(s) 4,774,738 bytes

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to bart on Sun Mar 23 10:50:43 2025
    On Sun, 23 Mar 2025 01:34:54 +0000
    bart <bc@freeuk.com> wrote:


    It's strange: in one part of the computing world, the speed of
    building software is a very big deal. All sorts of efforts are going
    on to deal with it. Compilation speed for developers is always an
    issue. There is a general movement away from LLVM-based backends
    /because/ it is so slow.


    What "general movement" are you talking about?
    I can't recollect any new* language for general-purpose computers that
    is used by more than a dozen* persons which is not based on an LLVM
    back end, despite its undeniable slowness.
    Of course, apart from those that are based on one of 3 other major infrastructures - ART, .Net and JVM.

    new = 15 years or less

    dozen - figurative.
    For example, 9-10 years ago I defined an interface definition language
    with accompanying tools that since then was used by ~20-25 people. It
    still counts as less than "figurative dozen", because all users were /
    are employees of the same company.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to Tim Rentsch on Sun Mar 23 08:25:59 2025
    On 22/03/2025 23:46, Tim Rentsch wrote:
    Here is my summary of the corresponding list in the C11 standard
    (descriptive headings represent my own views on each area):

    Thank you, Tim. What I'm taking away from this is that I'm not
    personally affected by the changes. The getsectomy is a welcome
    surprise, and the rest I can safely ignore.

    No doubt others will appreciate your summary for other reasons,
    so I've scribbled them down locally. Thanks again.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Muttley@DastardlyHQ.org on Sun Mar 23 11:01:16 2025
    On Thu, 20 Mar 2025 15:55:21 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:

    On Thu, 20 Mar 2025 17:29:22 +0200
    Michael S <already5chosen@yahoo.com> wibbled:



    Then how exactly do you printf a value of type int64_t in code that is
    expected to pass [gcc] compilation with no warnings on two platforms,
    one of which is 64-bit Unix/Linux and the other just about anything
    else?

    Just use %llu everywhere. Warnings only matter if they're important
    ones.


    Unimportant warnings matter a lot, because they make seeing
    important warnings so much harder.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Richard Heathfield on Sun Mar 23 12:06:55 2025
    On Sun, 23 Mar 2025 08:25:59 +0000
    Richard Heathfield <rjh@cpax.org.uk> wrote:

    On 22/03/2025 23:46, Tim Rentsch wrote:
    Here is my summary of the corresponding list in the C11 standard (descriptive headings represent my own views on each area):

    Thank you, Tim. What I'm taking away from this is that I'm not
    personally affected by the changes. The getsectomy is a welcome
    surprise, and the rest I can safely ignore.

    No doubt others will appreciate your summary for other reasons,
    so I've scribbled them down locally. Thanks again.


    Apart from removal of gets() and not going into numerics for sake of
    brevity, C11 has several useful additions to "old" library stuff, some
    long overdue.
    The #1 on my list is qsort_s().
    The #2 is timespec_get().

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Keith Thompson on Sun Mar 23 11:41:18 2025
    On Fri, 21 Mar 2025 20:50:51 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    bart <bc@freeuk.com> writes:

    This is a C program using one of the extensions from my old
    compiler:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
    time_t t = clock();
    printf("%v\n", t);    # (used to be '?'; changed to 'v')
    }

    The compiler replaces the 'v' format with a conventional format code according to the type of the expression. For my 'time_t', it happens
    to be 'lld'.

    That's nice. Seriously, it's nice. If it were added to a future
    edition of the language, I'd likely use it (once I could count on it
    being supported, which would take a while).

    The Go language has something like that.

    You can add extensions like that to your own compiler easily
    enough. Adding them to the C standard (which requires getting all implementers to support them) is a lot harder. Does it work for
    both output (printf) and input (scanf)?


    That's the easiest question. And the right answer is "No, it does not."

    What if the format string
    isn't a string literal; does the compiler generate code to adjust
    it, allocating space for the translated string and deallocating it
    after the call? Does it work with printf("%n", &var)? What about qualifiers, specifying width, base, and so forth. How do I print
    an integer of arbitrary type in hexadecimal in a right-justified
    8-digit zero-padded field? The feature implies an ability for
    generic code that works with different types; can programmers use
    that feature for things other than format strings? How much more
    complicated would the C language (as distinct from the library)
    have to be to support this?

    If you have answers to all those questions, and to all the other
    questions that haven't occurred to me, I wouldn't mind seeing
    something like that in a future version of C. I haven't looked
    closely at Go, but it's a compiled language with a feature similar
    to what you describe; it could probably be mined for ideas.

    Or maybe we should be thinking in terms of something other than format strings. The idea that "%v", which is a 3-byte chunk of data, has compile-time implications in certain contexts is a bit unnerving.

    Ada chose the short name "Put" for its output routines.
    It's overloaded, so you can write `Put(this); Put(that);
    Put(the_other);` Maybe that a little too verbose, but how about
    a new built-in operator that takes an argument of any of various
    types and yields something that can be printed? '$' is available.
    I haven't thought this through.



    In theory, a printf extension that is a little less nice than Bart's,
    but close, could be developed in C23 with no additional core language
    features.

    printf("%v\n", _TYIX(t));

    Where _TYIX defined as
    #define _TYIX(x) typeof_unqual((x)), (x)

    In practice, it seems that the C23 Standard does not say enough about
    the result of typeof_unqual to make it feasible. Or maybe my source
    of information (en.cppreference.com/w/c/language/typeof) is not up to
    date.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Richard Heathfield@21:1/5 to Michael S on Sun Mar 23 10:15:32 2025
    On 23/03/2025 10:06, Michael S wrote:
    Apart from removal of gets() and not going into numerics for sake of
    brevity, C11 has several useful additions to "old" library stuff, some
    long overdue.

    I won't dispute it, but I've coped without them for a very, very
    long[1] time without ever once mourning their absence.

    [1] A missed opportunity: long long really, /really/ should have
    been very long.

    --
    Richard Heathfield
    Email: rjh at cpax dot org dot uk
    "Usenet is a strange place" - dmr 29 July 1999
    Sig line 4 vacant - apply within

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to bart on Sun Mar 23 14:12:37 2025
    On Sun, 23 Mar 2025 11:25:14 +0000
    bart <bc@freeuk.com> wrote:

    On 23/03/2025 08:50, Michael S wrote:
    On Sun, 23 Mar 2025 01:34:54 +0000
    bart <bc@freeuk.com> wrote:


    It's strange: in one part of the computing world, the speed of
    building software is a very big deal. All sorts of efforts are
    going on to deal with it. Compilation speed for developers is
    always an issue. There is a general movement away from LLVM-based
    backends /because/ it is so slow.


    What "general movement" are you talking about?
    I can't recollect any new* language for general-purpose computers
    that is used by more than dozen* persons which is not based on LLVM
    back end. Despite its undeniable slowness.

    There's Rust + Cranelift:

    "The goal of this project is to create an alternative codegen backend
    for the rust compiler based on Cranelift. This has the potential to
    improve compilation times in debug mode."


    It looks like a schism. Since Rust is sort of religious movement,
    schisms are inevitable part of it.

    There's Go which was never based on LLVM:

    "At the beginning of the project we considered using LLVM for gc but
    decided it was too large and slow to meet our performance goals."

    ('gc' is 'Go Compiler'. Maybe Go is older than 15 years?

    Yes, it is. 17+.

    Still, LLVM
    seems to have been around

    In 2007 LLVM was formally 7 y.o. but until 2005 it was a tiny project
    with very little (or no?) paid workforce. I can't say it with
    certainty, but it seems that LLVM didn't really become usable until
    2008-2009.

    and was thought to be slow then.)


    gollvm certainly exists and works.
    Users that want to use go on something other than very few platforms
    supported by Google's "self hosted" implementation appear to have two
    main choices: gccgo and gollvm. I don't know which is chosen more often.

    And there's Zig:


    Isn't current distribution of Zig based on LLVM?
    Just wondering, the chance that Zig will fly by now approaches zero.

    https://news.ycombinator.com/item?id=39154513

    There are others' comments:

    "LLVM used to be hailed as a great thing, but with language projects
    such as Rust, Zig and others complaining it's bad and slow and
    they're moving away from it – how bad is LLVM really?"

    Here's is a random quote from Reddit:

    "2 minutes is really good for a full build. 2 minutes is pretty bad
    for a one line change.

    I also quit my job recently because of their terrible infrastructure.
    All home-grown of course. A horrible mess of Python, C++ and Make.

    So demotivating. And nobody except me cared."

    TBH, for me 2 minutes would be really terrible even for a full build.
    So would 2 seconds! (How big was this executable?)


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Michael S on Sun Mar 23 11:25:14 2025
    On 23/03/2025 08:50, Michael S wrote:
    On Sun, 23 Mar 2025 01:34:54 +0000
    bart <bc@freeuk.com> wrote:


    It's strange: in one part of the computing world, the speed of
    building software is a very big deal. All sorts of efforts are going
    on to deal with it. Compilation speed for developers is always an
    issue. There is a general movement away from LLVM-based backends
    /because/ it is so slow.


    What "general movement" are you talking about?
    I can't recollect any new* language for general-purpose computers that
    is used by more than a dozen* persons which is not based on an LLVM back end. Despite its undeniable slowness.

    There's Rust + Cranelift:

    "The goal of this project is to create an alternative codegen backend
    for the rust compiler based on Cranelift. This has the potential to
    improve compilation times in debug mode."

    There's Go which was never based on LLVM:

    "At the beginning of the project we considered using LLVM for gc but
    decided it was too large and slow to meet our performance goals."

    ('gc' is 'Go Compiler'. Maybe Go is older than 15 years? Still, LLVM
    seems to have been around and was thought to be slow then.)

    And there's Zig:

    https://news.ycombinator.com/item?id=39154513

    There are others' comments:

    "LLVM used to be hailed as a great thing, but with language projects
    such as Rust, Zig and others complaining it's bad and slow and they're
    moving away from it – how bad is LLVM really?"

    Here's a random quote from Reddit:

    "2 minutes is really good for a full build. 2 minutes is pretty bad for
    a one line change.

    I also quit my job recently because of their terrible infrastructure.
    All home-grown of course. A horrible mess of Python, C++ and Make.

    So demotivating. And nobody except me cared."

    TBH, for me 2 minutes would be really terrible even for a full build. So
    would 2 seconds! (How big was this executable?)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@dastardlyhq.com@21:1/5 to All on Sun Mar 23 16:22:20 2025
    On Sun, 23 Mar 2025 11:05:29 +1100
    Alexis <flexibeast@gmail.com> gabbled:
    Muttley@DastardlyHQ.org writes:
    In fact, i would suggest that it's increasingly difficult to find
    non-recent mathematics that _hasn't_ found direct or non-direct 'real
    world' applications.

    The one under discussion for a start.

    The history of developments in mathematics suggests that your "99.99%"
    claim is significantly incorrect. (If you'd said, say, "9.99%", my
    internal reaction would have been "Hm, i guess that might be the case.")

    I'm guessing you don't know the difference between pure maths and applied maths. The former is chock full of curiosities that one day may be useful
    but so far their only use is keeping maths profs entertained.

    All that said, i've lurked long enough in this group to know that trying
    to have a good-faith conversation with you is pointless; i was mainly

    Oh spare me. I'm tired of people who can't get a consensus around their particular holy view throwing all their toys out of their prams and claiming trolling or something similar.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From DFS@21:1/5 to Lynn McGuire on Sun Mar 23 12:29:49 2025
    On 3/22/2025 2:32 PM, Lynn McGuire wrote:
    On 3/18/2025 8:38 PM, DFS wrote:

    I'm doing these algorithm problems at
    https://cses.fi/problemset/list/

    For instance: Weird Algorithm
    https://cses.fi/problemset/task/1068

    My code works fine locally (prints the correct solution to the
    console), but when I submit the .c file the auto-tester flags it with
    'runtime error' and says the output is empty.

    ------------------------------------------------------------
    // If n is even, divide it by two.
    // If n is odd, multiply it by three and add one.
    // Repeat until n is one.
    // n = 3: output is 3 10 5 16 8 4 2 1


    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
         int n = atoi(argv[1]);
         int len = 0;
         char result[10000] = "";
         sprintf(result, "%d ", n);

         while(1) {
             if((n % 2) == 0)
                 {n /= 2;}
             else
                 {n = (n * 3) + 1;}

             if(n != 1)
                 {
                     len = strlen(result);
                     sprintf(result + len, "%d ", n);
                 }
             else
                 break;
         }

         len = strlen(result);
         sprintf(result + len, "1 ");
         printf("%s\n",result);

         return 0;
    }
    ------------------------------------------------------------

    Any ideas?
    Thanks

    strdup.
       https://www.geeksforgeeks.org/strdup-strdndup-functions-c/

    Lynn



    The main issue, quickly spotted by Keith Thompson, was the program was
    supposed to read input from stdin. I had it reading from the command
    line (so no output), and using an int where long was needed (returned
    some wrong answers).


    Thanks.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Michael S on Sun Mar 23 12:56:55 2025
    Michael S <already5chosen@yahoo.com> writes:

    On Thu, 20 Mar 2025 15:55:21 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:

    On Thu, 20 Mar 2025 17:29:22 +0200
    Michael S <already5chosen@yahoo.com> wibbled:



    Then how exactly do you printf a value of type int64_t in code that is
    expected to pass [gcc] compilation with no warnings on two platforms,
    one of which is 64-bit Unix/Linux and another is just about anything
    else?

    Just use %llu everywhere. Warnings only matter if they're important
    ones.

    Unimportant warnings matter a lot, because they make seeing
    important warnings so much harder.

    Absolutely.
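
    For the record, the portable warning-free answer to Michael S's question
    is the <inttypes.h> format macros rather than a hard-coded %llu; a small
    sketch (format_i64 is just an illustrative wrapper name):

```c
#include <inttypes.h>
#include <stdio.h>

/* PRId64 expands to the right length modifier for int64_t on each
   platform ("ld" on 64-bit Linux, "lld" on most others), so the same
   source passes gcc -Wall -Wformat cleanly on both. */
int format_i64(char *buf, size_t cap, int64_t v)
{
    return snprintf(buf, cap, "%" PRId64, v);
}
```

    The same idiom works directly in printf: printf("%" PRId64 "\n", x);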

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Richard Heathfield on Sun Mar 23 12:35:39 2025
    Richard Heathfield <rjh@cpax.org.uk> writes:

    On 22/03/2025 23:46, Tim Rentsch wrote:

    Here is my summary of the corresponding list in the C11 standard
    (descriptive headings represent my own views on each area):

    Thank you, Tim. What I'm taking away from this is that I'm not
    personally affected by the changes.

    I had a similar reaction when C99 came out. In fact it was several
    years before I started looking at C99 as a suitable alternative to
    using just C90. Gradually I began to see that the new features of
    C99 offered substantial advantages over what C90 offers. My
    impression is the C community at large followed a similar timeline.

    For C11 I am somewhere in the middle. Most of the time I find C99
    adequate, and don't need the capabilities added in C11; so C99 is
    still my default choice, with C11 being the exception. However,
    for building software libraries rather than just programs, C11
    allows some significant benefits, so I am turning more and more
    to C11 as I add functionality to the library projects I'm working
    on. (I should add that I try to write code that can benefit from
    C11 but still can be called from C99 or C90, perhaps with reduced functionality.)

    No doubt others will appreciate your summary for other reasons, so
    I've scribbled them down locally. Thanks again.

    Thank you, that is nice to hear.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Keith Thompson on Sun Mar 23 23:19:19 2025
    On Sun, 23 Mar 2025 14:13:46 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    Michael S <already5chosen@yahoo.com> writes:
    On Fri, 21 Mar 2025 20:50:51 -0700
    Keith Thompson <Keith.S.Thompson+u@gmail.com> wrote:

    bart <bc@freeuk.com> writes:

    This is a C program using one of the extensions from my old
    compiler:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
    time_t t = clock();
    printf("%v\n", t); # (used to be '?'; changed
    to 'v') }

    The compiler replaces the 'v' format with a conventional format
    code according to the type of the expression. For my 'time_t',
    it happens to be 'lld'.

    That's nice. Seriously, it's nice. If it were added to a future
    edition of the language, I'd likely use it (once I could count on
    it being supported, which would take a while).

    The Go language has something like that.

    You can add extensions like that to your own compiler easily
    enough. Adding them to the C standard (which requires getting all
    implementers to support them) is a lot harder. Does it work for
    both output (printf) and input (scanf)?


    That's the easiest question. And the right answer is "No, it does
    not."

    I was asking about bart's language extension. Do you have inside
    information about that?

    [...]

    In theory, printf extension that is a little less nice than Bart's,
    but close, can be developed in C23 with no additional core language features.

    printf("%v\n", _TYIX(t));

    Where _TYIX defined as
    #define _TYIX(x) typeof_unqual((x)), (x)

    In practice, it seems that C23 Standard does not say enough about
    return value of typeof_unqual to make it feasible. Or, may be, my
    source of information (en.cppreference.com/w/c/language/typeof)
    is not up to date.

    I don't see how that would work. typeof_unqual doesn't have a return
    value; it yields a type, and can be used as a type name in a
    declaration. Unless I've missed something big, C23 doesn't let you
    pass a type name as a function argument.


    Thank you. I completely misunderstood typeof.
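
    For what it's worth, the effect Michael S was after can be approximated
    in standard C11 without typeof at all, using _Generic to select the
    format string at compile time. This is a sketch, not Bart's actual %v
    extension, and it must be applied one argument at a time:

```c
#include <stdio.h>

/* FMT(x) selects a printf format string from the static type of x at
   compile time; PRINT(x) then prints x with the matching specifier. */
#define FMT(x) _Generic((x),          \
    int:                "%d\n",       \
    long:               "%ld\n",      \
    long long:          "%lld\n",     \
    unsigned long long: "%llu\n",     \
    double:             "%f\n",       \
    char *:             "%s\n")

#define PRINT(x) printf(FMT(x), (x))
```

    Bart's example then becomes PRINT((long long)clock()); with the cast
    making the implementation-defined time_t type explicit.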

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Tim Rentsch on Mon Mar 24 13:09:00 2025
    On 23/03/2025 20:35, Tim Rentsch wrote:
    Richard Heathfield <rjh@cpax.org.uk> writes:

    On 22/03/2025 23:46, Tim Rentsch wrote:

    Here is my summary of the corresponding list in the C11 standard
    (descriptive headings represent my own views on each area):

    Thank you, Tim. What I'm taking away from this is that I'm not
    personally affected by the changes.

    I had a similar reaction when C99 came out. In fact it was several
    years before I started looking at C99 as a suitable alternative to
    using just C90. Gradually I began to see that the new features of
    C99 offered substantial advantages over what C90 offers. My
    impression is the C community at large followed a similar timeline.

    For C11 I am somewhere in the middle. Most of the time I find C99
    adequate, and don't need the capabilities added in C11; so C99 is
    still my default choice, with C11 being the exception. However,
    for building software libraries rather than just programs, C11
    allows some significant benefits, so I am turning more and more
    to C11 as I add functionality to the library projects I'm working
    on. (I should add that I try to write code that can benefit from
    C11 but still can be called from C99 or C90, perhaps with reduced functionality.)


    I can somewhat agree with those sentiments.

    In my field, C99 support was not an option for many compilers for
    several years - some toolchains have never had good support for it. (I remember seeing a toolchain in the 2010s advertised as "now with
    partial C99 support" as a feature.) So it took time before I could use
    it properly.

    However, once I got used to it, the difference to the way I code was significant. I greatly dislike the occasions when I have to go back to
    the middle ages coding in C90 - though not as much as the dark ages of large-scale assembly programming. I have not the slightest doubt that
    with C99, my code is clearer, better structured, has a lower risk of
    errors and is easier to maintain than with C90. I won't claim it is
    /hugely/ better, but it is definitely significant. C99 is when C grew up.

    The changes in C11 are much less, but there are some that are useful to
    me. C17 is just a bug-fix (and slightly nicer typography). C23 has a
    number of useful features, but again is not like the step from C90 to
    C99. (And of course newer standards always have features that I dislike
    too.)

    One thing I generally like with newer standards is that it can mean
    fewer implementation-specific features in my code, or fewer ugly
    workarounds. I expect to see at least some of my gcc __attribute__'s
    replaced by [[C23 attributes]]. C11 came with a real _Static_assert,
    instead of having to use ugly macros for the task. These sorts of
    things don't make a big impact in my code (and are often tidied away in
    macros anyway), but are nice to see.

    I don't expect to be relying on C23 features for a while yet, however -
    I expect most of my new C code to be compatible with C99 (or at least, "gnu99"). And of course I won't change existing projects to newer
    standards.
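
    The _Static_assert upgrade mentioned above is easy to illustrate; before
    C11 the usual macro trick was an array whose size goes negative when the
    condition fails (macro names here are illustrative):

```c
/* Pre-C11 workaround: a typedef whose array size becomes -1 (a
   compile error) when the condition is false. The diagnostic is
   cryptic and points at the macro, not the condition. */
#define C90_STATIC_ASSERT(cond, name) \
    typedef char static_assert_##name[(cond) ? 1 : -1]

C90_STATIC_ASSERT(sizeof(int) >= 2, int_at_least_16_bits);

/* C11: a first-class construct with a readable message. */
_Static_assert(sizeof(long long) >= 8, "long long must be at least 64 bits");
```
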

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Mon Mar 24 12:51:40 2025
    On 23/03/2025 02:34, bart wrote:

    It's strange: in one part of the computing world, the speed of building software is a very big deal. All sorts of efforts are going on to deal
    with it. Compilation speed for developers is always an issue. There is a general movement away from LLVM-based backends /because/ it is so slow.

    And yet in another part (namely comp.lang.c) it appears to be a total non-issue!


    You find it strange that different parts of the computing world (or,
    more appropriately, software development world) have different
    priorities, needs and focuses for their tools? I find it very strange
    that anyone would find that strange!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to David Brown on Mon Mar 24 14:07:20 2025
    On 24/03/2025 11:51, David Brown wrote:
    On 23/03/2025 02:34, bart wrote:

    It's strange: in one part of the computing world, the speed of
    building software is a very big deal. All sorts of efforts are going
    on to deal with it. Compilation speed for developers is always an
    issue. There is a general movement away from LLVM-based backends /
    because/ it is so slow.

    And yet in another part (namely comp.lang.c) it appears to be a total
    non-issue!


    You find it strange that different parts of the computing world (or,
    more appropriately, software development world) have different
    priorities, needs and focuses for their tools?  I find it very strange
    that anyone would find that strange!



    What was strange was that that one view was shared by pretty much
    everyone in comp.lang.c.

    Even though one or two (like Scott Lurndal) had reported significant
    build times (even you had remarked on it and made suggestions), but that
    was brushed off.

    I don't know; maybe with fast builds, people would have to do more work
    instead of taking a coffee break!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Mon Mar 24 15:32:32 2025
    On 24/03/2025 15:07, bart wrote:
    On 24/03/2025 11:51, David Brown wrote:
    On 23/03/2025 02:34, bart wrote:

    It's strange: in one part of the computing world, the speed of
    building software is a very big deal. All sorts of efforts are going
    on to deal with it. Compilation speed for developers is always an
    issue. There is a general movement away from LLVM-based backends /
    because/ it is so slow.

    And yet in another part (namely comp.lang.c) it appears to be a total
    non-issue!


    You find it strange that different parts of the computing world (or,
    more appropriately, software development world) have different
    priorities, needs and focuses for their tools?  I find it very strange
    that anyone would find that strange!



    What was strange was that that one view was shared by pretty much
    everyone in comp.lang.c.

    Do you know what the people in comp.lang.c have in common?

    We program in C.

    Do you know /why/ people program in C?

    There can be many reasons, but a very common one is that they want fast resulting binaries. Most serious programmers are familiar with more
    than one language, and pretty much all other languages are higher level, easier, and have higher developer productivity than C - but the
    resulting binaries are almost always slower.

    C programmers are typically not bothered about build times because a)
    their build times are rarely high (Scott's projects are C++), and b),
    they are willing to sacrifice high build times if it means more
    efficient run times.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to David Brown on Mon Mar 24 17:10:41 2025
    On Mon, 24 Mar 2025 15:32:32 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 24/03/2025 15:07, bart wrote:
    On 24/03/2025 11:51, David Brown wrote:
    On 23/03/2025 02:34, bart wrote:

    It's strange: in one part of the computing world, the speed of
    building software is a very big deal. All sorts of efforts are
    going on to deal with it. Compilation speed for developers is
    always an issue. There is a general movement away from LLVM-based
    backends / because/ it is so slow.

    And yet in another part (namely comp.lang.c) it appears to be a
    total non-issue!


    You find it strange that different parts of the computing world
    (or, more appropriately, software development world) have
    different priorities, needs and focuses for their tools?  I find
    it very strange that anyone would find that strange!



    What was strange was that that one view was shared by pretty much
    everyone in comp.lang.c.

    Do you know what the people in comp.lang.c have in common?

    We program in C.

    Do you know /why/ people program in C?

    There can be many reasons, but a very common one is that they want
    fast resulting binaries. Most serious programmers are familiar with
    more than one language, and pretty much all other languages are
    higher level, easier, and have higher developer productivity than C -
    but the resulting binaries are almost always slower.


    I disagree.
    35-36 years ago I forgot about Pascal after a few months of exposure to C.
    It didn't happen because of speed of resulting binaries, but due to my
    own improved productivity.
    Ada and Rust are two other examples of languages that lag behind C in productivity.
    And in all three cases above I still did not leave the realm of
    relatively good languages. Unfortunately, I have to use at least one
    bad language as well - tcl.

    Besides, you forgot the main reason people program in C - often they
    have no other choice.

    C programmers are typically not bothered about build times because a)
    their build times are rarely high (Scott's projects are C++), and b),
    they are willing to sacrifice high build times if it means more
    efficient run times.


    A more important reason is similar to the one mentioned above - they have
    no other choice, neither of language nor of compiler.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Mon Mar 24 15:00:17 2025
    On Mon, 24 Mar 2025 15:32:32 +0100
    David Brown <david.brown@hesbynett.no> wibbled:
    On 24/03/2025 15:07, bart wrote:
    What was strange was that that one view was shared by pretty much
    everyone in comp.lang.c.

    Do you know what the people in comp.lang.c have in common?

    We program in C.

    Do you know /why/ people program in C?

    There can be many reasons, but a very common one is that they want fast
    resulting binaries. Most serious programmers are familiar with more
    than one language, and pretty much all other languages are higher level,
    easier, and have higher developer productivity than C - but the
    resulting binaries are almost always slower.

    C programmers are typically not bothered about build times because a)
    their build times are rarely high (Scott's projects are C++), and b),
    they are willing to sacrifice high build times if it means more
    efficient run times.

    I'm not sure what kind of build time he's looking for either. On this Mac
    ARM laptop I can compile a 7600 line utility I wrote in C in 0.8 seconds
    real time, and that is using a makefile with 18 separate source files and
    a header and includes link time. So unless he's rebuilding the linux kernel every day I don't see what the problem is.

    loki$ ls *.c *.h | wc -l
    19
    loki$ wc -l *.c *.h
    :
    :
    691 globals.h
    7602 total
    loki$ time make
    :
    :
    real 0m0.815s
    user 0m0.516s
    sys 0m0.252s

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Muttley@DastardlyHQ.org on Mon Mar 24 17:22:36 2025
    On Mon, 24 Mar 2025 15:00:17 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:

    On Mon, 24 Mar 2025 15:32:32 +0100
    David Brown <david.brown@hesbynett.no> wibbled:
    On 24/03/2025 15:07, bart wrote:
    What was strange was that that one view was shared by pretty much
    everyone in comp.lang.c.

    Do you know what the people in comp.lang.c have in common?

    We program in C.

    Do you know /why/ people program in C?

    There can be many reasons, but a very common one is that they want
    fast resulting binaries. Most serious programmers are familiar with
    more than one language, and pretty much all other languages are
    higher level, easier, and have higher developer productivity than C
    - but the resulting binaries are almost always slower.

    C programmers are typically not bothered about build times because
    a) their build times are rarely high (Scott's projects are C++), and
    b), they are willing to sacrifice high build times if it means more
    efficient run times.

    I'm not sure what kind of build time he's looking for either. On this
    Mac ARM laptop I can compile a 7600 line utility I wrote in C in 0.8
    seconds real time, and that is using a makefile with 18 separate
    source files and a header and includes link time. So unless he's
    rebuilding the linux kernel every day I don't see what the problem is.

    loki$ ls *.c *.h | wc -l
    19
    loki$ wc -l *.c *.h
    :
    :
    691 globals.h
    7602 total
    loki$ time make
    :
    :
    real 0m0.815s
    user 0m0.516s
    sys 0m0.252s



    You would illustrate your point better if you ran 'time make -B'.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to David Brown on Mon Mar 24 15:44:10 2025
    David Brown <david.brown@hesbynett.no> writes:
    On 24/03/2025 15:07, bart wrote:
    On 24/03/2025 11:51, David Brown wrote:
    On 23/03/2025 02:34, bart wrote:


    C programmers are typically not bothered about build times because a)
    their build times are rarely high (Scott's projects are C++), and b),
    they are willing to sacrifice high build times if it means more
    efficient run times.

    C++ and C aren't that far apart. I work with two codebases,
    one in C++ (several million SLOC) and one in C (linux kernel).

    I'd like to see Bart (try to) compile linux with his C compiler.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to bart on Mon Mar 24 15:42:11 2025
    bart <bc@freeuk.com> writes:
    On 24/03/2025 11:51, David Brown wrote:
    On 23/03/2025 02:34, bart wrote:

    It's strange: in one part of the computing world, the speed of
    building software is a very big deal. All sorts of efforts are going
    on to deal with it. Compilation speed for developers is always an
    issue. There is a general movement away from LLVM-based backends /
    because/ it is so slow.

    And yet in another part (namely comp.lang.c) it appears to be a total
    non-issue!


    You find it strange that different parts of the computing world (or,
    more appropriately, software development world) have different
    priorities, needs and focuses for their tools?  I find it very strange
    that anyone would find that strange!



    What was strange was that that one view was shared by pretty much
    everyone in comp.lang.c.

    Even though one or two (like Scott Lurndal) had reported significant
    build times (even you had remarked on it and made suggestions), but that
    was brushed off.

    You're taking my statements out of context. Yes, a build for a
    very large project (which you've never seen, much less worked on),
    can take time. That's just a fact. Your tools could never build
    such a project.


    I don't know; maybe with fast builds, people would have to do more work
    instead of taking a coffee break!

    Or they use tools like make(1) to increase productivity by only
    rebuilding the portion of the project that needs to be rebuilt;
    which takes a few seconds. Thus, they already have fast builds
    and aren't wasting cycles rebuilding code that hasn't changed.
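
    A minimal sketch of that workflow (file names hypothetical): each object
    depends only on its own sources, so touching one .c file recompiles one
    translation unit and relinks, in seconds.

```make
# Hypothetical three-file project: only out-of-date objects rebuild.
CC      = gcc
CFLAGS  = -O2 -Wall
OBJS    = main.o parse.o eval.o

prog: $(OBJS)
	$(CC) $(OBJS) -o prog

%.o: %.c globals.h
	$(CC) $(CFLAGS) -c $< -o $@

clean:
	rm -f prog $(OBJS)
```

    After an edit to parse.c alone, 'make' recompiles parse.o and relinks;
    'make -B' forces the full rebuild that a fair timing comparison needs.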

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Mon Mar 24 16:12:46 2025
    On Mon, 24 Mar 2025 17:22:36 +0200
    Michael S <already5chosen@yahoo.com> wibbled:
    On Mon, 24 Mar 2025 15:00:17 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:
    real 0m0.815s
    user 0m0.516s
    sys 0m0.252s



    You would illustrate your point better if you ran 'time make -B'.

    The point is illustrated quite nicely. It was quite clear that was a full
    build given what I wrote.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Muttley@DastardlyHQ.org on Mon Mar 24 16:02:57 2025
    On 24/03/2025 15:00, Muttley@DastardlyHQ.org wrote:
    On Mon, 24 Mar 2025 15:32:32 +0100
    David Brown <david.brown@hesbynett.no> wibbled:
    On 24/03/2025 15:07, bart wrote:
    What was strange was that that one view was shared by pretty much
    everyone in comp.lang.c.

    Do you know what the people in comp.lang.c have in common?

    We program in C.

    Do you know /why/ people program in C?

    There can be many reasons, but a very common one is that they want fast
    resulting binaries. Most serious programmers are familiar with more
    than one language, and pretty much all other languages are higher level,
    easier, and have higher developer productivity than C - but the
    resulting binaries are almost always slower.

    C programmers are typically not bothered about build times because a)
    their build times are rarely high (Scott's projects are C++), and b),
    they are willing to sacrifice high build times if it means more
    efficient run times.

    I'm not sure what kind of build time he's looking for either. On this Mac
    ARM laptop I can compile a 7600 line utility I wrote in C in 0.8 seconds
    real time, and that is using a makefile with 18 seperate source files and
    a header and includes link time. So unless he's rebuilding the linux kernel every day I don't see what the problem is.

    loki$ ls *.c *.h | wc -l
    19
    loki$ wc -l *.c *.h
    :
    :
    691 globals.h
    7602 total
    loki$ time make
    :
    :
    real 0m0.815s
    user 0m0.516s
    sys 0m0.252s


    So, your throughput is a whopping 9.5K lines/second?

    Here's a project I'm working at the minute (not in C):

    c:\bx>tim mm bb
    Compiling bb.m to bb.exe
    Time: 0.066

    Build time is 1/15th of a second, for about 30K lines (so 450Klps). But
    I also want to compile this via C, to take advantage of gcc's superior optimiser. So first I transpile to C (that's also under 70ms):

    c:\bx>mc -c bb
    Compiling bb.m to bb.c

    Now I can invoke gcc:

    c:\bx>tim gcc -O3 -s bb.c -o dd
    Time: 12.316

    The generated C file is 38Kloc, and takes 12 seconds. It takes nearly
    TWO HUNDRED TIMES longer to build.

    I can tolerate that from time to time.

    Using gcc-O0 takes only two seconds, but since it's generating slower
    code than mine, there is no point:

    Build time Run time

    bcc bb 70 ms 628 ms
    gcc -O0 2000 ms 1517 ms
    gcc -O3 12300 ms 579 ms

    (On this test, gcc-O3 was 8% faster, but it depends on the input to this interpreter project. On average it is 25% faster.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Mon Mar 24 16:56:03 2025
    On Mon, 24 Mar 2025 16:49:35 +0000
    bart <bc@freeuk.com> wibbled:
    On 24/03/2025 16:17, Muttley@DastardlyHQ.org wrote:
    *GASP*! 12 whole seconds! How can you cope with your day being interrupted like that for so long!

    So applying your 100:1 ratio, you'd spend 20 whole minutes pondering
    your next move before compiling again?

    No, because unlike you I understand the concept of splitting into modules
    and having a makefile that just rebuilds what's changed, not the entire codebase.

    When you have near-instant build times then it's completely different
    way of working.

    No it isn't. That example I gave you built in 0.8 seconds. It would make
    zero difference to me if it took 8 or 80.

    Do you think people who work with scripting languages (or even writing
    HTML) would tolerate an exasperating 12-second delay between hitting Run,
    and their test-run starting?

    In the case of Python they tolerate hopeless performance so who knows.

    In the case of this project, development is incremental: run a test,
    there's an opcode not yet done, add the lines for it, test it again.

    Or do a timing test, measure, tweak a line or two, time it again to see
    if it's any better.

    Or there might be a bunch of configuration and debug settings that don't
    line and rebuilding. Why not? It only takes an instant!

    If I had to wait 10+ seconds each time then it would both take all
    fucking day AND drive me around the bend.

    See above for modules.

    You really haven't got a clue.

    Says the guy who rebuilds everything from scratch each time. Must be irony
    week again.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Muttley@DastardlyHQ.org on Mon Mar 24 16:49:35 2025
    On 24/03/2025 16:17, Muttley@DastardlyHQ.org wrote:
    On Mon, 24 Mar 2025 16:02:57 +0000
    bart <bc@freeuk.com> wibbled:
    On 24/03/2025 15:00, Muttley@DastardlyHQ.org wrote:
    real 0m0.815s
    user 0m0.516s
    sys 0m0.252s


    So, your throughput is a whopping 9.5K lines/second?

    On a tired laptop with multiple files it seems pretty good to me.

    Build time is 1/15th of a second, for about 30K lines (so 450Klps). But
    I also want to compile this via C, to take advantage of gcc's superior
    optimiser. So first I transpile to C (that's also under 70ms):

    I'll paraphrase someone else's point - who fucking cares? If you need lightning
    fast compilation because you're constantly rebuilding your shit code to see if
    it'll even compile never mind work then that says a lot about you as a dev.

    The thinking/writing to compilation time ratio I imagine with most devs
    would probably be a minimum of 100 - 1, possibly much larger.



    The generated C file is 38Kloc, and takes 12 seconds. It takes nearly
    TWO HUNDRED TIMES longer to build.

    *GASP*! 12 whole seconds! How can you cope with your day being interrupted like that for so long!

    So applying your 100:1 ratio, you'd spend 20 whole minutes pondering
    your next move before compiling again?

    When you have near-instant build times then it's a completely different
    way of working.

    Do you think people who work with scripting languages (or even writing
    HTML) would tolerate an exasperating 12-second delay between hitting Run,
    and their test-run starting?

    In the case of this project, development is incremental: run a test,
    there's an opcode not yet done, add the lines for it, test it again.

    Or do a timing test, measure, tweak a line or two, time it again to see
    if it's any better.

    Or there might be a bunch of configuration and debug settings that don't
    warrant dedicated CLI options, so to change a setting means changing one
    line and rebuilding. Why not? It only takes an instant!

    If I had to wait 10+ seconds each time then it would both take all
    fucking day AND drive me around the bend.

    You really haven't got a clue.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Mon Mar 24 16:17:41 2025
    On Mon, 24 Mar 2025 16:02:57 +0000
    bart <bc@freeuk.com> wibbled:
    On 24/03/2025 15:00, Muttley@DastardlyHQ.org wrote:
    real 0m0.815s
    user 0m0.516s
    sys 0m0.252s


    So, your throughput is a whopping 9.5K lines/second?

    On a tired laptop with multiple files it seems pretty good to me.

    Build time is 1/15th of a second, for about 30K lines (so 450Klps). But
    I also want to compile this via C, to take advantage of gcc's superior
    optimiser. So first I transpile to C (that's also under 70ms):

    I'll paraphrase someone else's point - who fucking cares? If you need
    lightning fast compilation because you're constantly rebuilding your shit
    code to see if it'll even compile never mind work then that says a lot
    about you as a dev.
    The thinking/writing to compilation time ratio I imagine with most devs would probably be a minimum of 100 - 1, possibly much larger.

    The generated C file is 38Kloc, and takes 12 seconds. It takes nearly
    TWO HUNDRED TIMES longer to build.

    *GASP*! 12 whole seconds! How can you cope with your day being interrupted
    like that for so long!

    I can tolerate that from time to time,

    Get a life.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to bart on Mon Mar 24 17:15:00 2025
    On 2025-03-24, bart <bc@freeuk.com> wrote:
    When you have near-instant build times then it's a completely different
    way of working.

    Indeed, this is not just old man shaking fist at the cloud.

    It turns out that there are subcultures in contemporary development that
    value quasi-instant build times.

    There exists a language called Dart for front-end work, which has
    a framework for that called Flutter. I've never used it and don't
    advocate it.

    In early 2024, the Dart project announced that they were adding macros.

    A year later, they announced that they are scrapping and removing
    the feature:

    One of the reasons?

    "Semantic introspection, unfortunately, turned out to introduce large
    compile-time costs which made it difficult to keep stateful hot reload
    hot."

    Source: https://medium.com/dartlang/an-update-on-dart-macros-data-serialization-06d3037d4f12

    Turns out what that refers to is that they want update times measured in *milliseconds*; that is, from the time a developer makes a change, to
    that change being compiled, loaded in the live application and running.

    I tried discussing this on HackerNews. In the quote below, A is me, and
    B is a response from an actual Dart/Flutter maintainer:

    https://news.ycombinator.com/item?id=42872693

    A> It takes seconds to minutes to make the code change, but when you hit
    A> the hot-key to deploy it to the target, it's gotta compile and upload
    A> in milliseconds?

    B> Yup! Those seconds to minutes are meaningful time well spent by the user
    B> thinking about their program and the problem. Those milliseconds are
    B> just them sitting on their thumb getting mad at the machine.

    Not being entirely convinced, I retorted:

    A> You literally cannot get your thumb under your ass in milliseconds
    A> to sit on it, unless you're the Olympic record holder for that
    A> sporting event.

    :)

    But, the guy claims it was based on years of developer surveys,
    gathering opt-in metrics, and doing various UX research.

    Do you think people who work with scripting languages (or even writing
    HTML) would tolerate an exasperating 12-second delay between hitting Run,
    and their test-run starting?

    Right; see above. People doing front-end work with Flutter are
    in this category.

    I'm not convinced that they couldn't spare a second or two though.

    According to the above source though, when developers have to wait milliseconds, they are just sitting on their thumbs and getting
    mad at the machine.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael S on Mon Mar 24 19:07:44 2025
    On 24/03/2025 16:10, Michael S wrote:
    On Mon, 24 Mar 2025 15:32:32 +0100
    David Brown <david.brown@hesbynett.no> wrote:

    On 24/03/2025 15:07, bart wrote:
    On 24/03/2025 11:51, David Brown wrote:
    On 23/03/2025 02:34, bart wrote:

    It's strange: in one part of the computing world, the speed of
    building software is a very big deal. All sorts of efforts are
    going on to deal with it. Compilation speed for developers is
    always an issue. There is a general movement away from LLVM-based
    backends / because/ it is so slow.

    And yet in another part (namely comp.lang.c) it appears to be a
    total non-issue!


    You find it strange that different parts of the computing world
    (or, more appropriately, software development world) have
    different priorities, needs and focuses for their tools?  I find
    it very strange that anyone would find that strange!



    What was strange was that that one view was shared by pretty much
    everyone in comp.lang.c.

    Do you know what the people in comp.lang.c have in common?

    We program in C.

    Do you know /why/ people program in C?

    There can be many reasons, but a very common one is that they want
    fast resulting binaries. Most serious programmers are familiar with
    more than one language, and pretty much all other languages are
    higher level, easier, and have higher developer productivity than C -
    but the resulting binaries are almost always slower.


    I disagree.
    35-36 years ago I forgot about Pascal after a few months of exposure to C.
    It didn't happen because of speed of resulting binaries, but due to my
    own improved productivity.
    Ada and Rust are two other examples of languages that lag behind C in productivity.
    And in all three cases above I still did not leave the realm of
    relatively good languages. Unfortunately, I have to use at least one
    bad language as well - tcl.


    I said "most", not "all" :-)

    And I meant this all in a general sense - people want different things
    from their languages and their tools, and will have different
    trade-offs. When I want fast, efficient code, I use C - when I want
    quick and easy development, I use Python. It does not surprise me that
    most people in comp.lang.c are not bothered about compile times for C -
    but it /would/ surprise me if they were not bothered about the run times
    for compiled C. When they /do/ have irritating compile times, they make
    sure they use good build tools and parallel builds to reduce the time. Similarly, people in comp.lang.python would no doubt complain if the .py
    to .pyc byte-compiling took any noticeable time at all, but understand
    fine that the run-time of Python is inefficient - and use "tricks" such as
    numpy or pypy when that becomes an issue.

    Other things people might be happy to trade for build times include
    static analysis for error checking, or compact binaries. And equally
    they might be happy to trade run-time for dynamic error checking.

    Only a fool thinks that /one/ single aspect of software development
    should be overriding, especially when that aspect is rarely an issue in practice and easily solvable by using a computer that is not decades old.

    (I am sure that Rust and Ada developers consider their languages to be a
    lot more productive than C - especially when viewed as the time to
    produce fully debugged and working code. But I am not going to start an argument about that!)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Muttley@DastardlyHQ.org on Mon Mar 24 18:20:02 2025
    On 24/03/2025 16:56, Muttley@DastardlyHQ.org wrote:
    On Mon, 24 Mar 2025 16:49:35 +0000
    bart <bc@freeuk.com> wibbled:
    On 24/03/2025 16:17, Muttley@DastardlyHQ.org wrote:
    *GASP*! 12 whole seconds! How can you cope with your day being interrupted
    like that for so long!

    So applying your 100:1 ratio, you'd spend 20 whole minutes pondering
    your next move before compiling again?

    No, because unlike you I understand the concept of splitting into modules
    and having a makefile that just rebuilds what's changed, not the entire codebase.

    I said the project used 30 modules. The generated C version is a single
    module. In that form, it allows gcc to perform whole-program
    optimisations, for an extra bit of speed, among other benefits like ease
    of distribution and deployment.

    When you have near-instant build times then it's a completely different
    way of working.

    No it isn't. That example I gave you built in 0.8 seconds. It would make
    zero difference to me if it took 8 or 80.

    Really? It could take 80 seconds extra and you would just sit there and
    take it? Boy you must really like stopping at red lights then.


    Do you think people who work with scripting languages (or even writing
    HTML) would tolerate an exasperating 12-second delay between hitting Run,
    and their test-run starting?

    In the case of Python they tolerate hopeless performance so who knows.

    Perhaps they're sensible enough to use Python where that is not
    relevant. However when they make a change, they want to see the results
    NOW. They wouldn't even understand why there need be any delay.

    I used to sell an app where half the program and all the GUI was
    interpreted code. The user wouldn't be aware what they were running.

    You really haven't got a clue.

    Says the guy who rebuilds everything from scratch each time.

    You're seriously suggesting I should use a makefile so that I can save
    0.065 seconds by only compiling one module?

    For that matter, why do YOU use a makefile when your full build is only
    0.8 seconds?

    Fast compilers and also whole-program compilers open up lots of new possibilities. But clearly you're quite happy being stuck in the stone age.

    You also seem blissfully content to use slow tools without ever
    questioning whether they need be that slow. Be interesting to know how
    slow they can get before even you start complaining.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Chris M. Thomasson on Mon Mar 24 20:13:27 2025
    On 24/03/2025 18:27, Chris M. Thomasson wrote:
    On 3/24/2025 8:44 AM, Scott Lurndal wrote:
    David Brown <david.brown@hesbynett.no> writes:
    On 24/03/2025 15:07, bart wrote:
    On 24/03/2025 11:51, David Brown wrote:
    On 23/03/2025 02:34, bart wrote:


    C programmers are typically not bothered about build times because a)
    their build times are rarely high (Scott's projects are C++), and b),
    they are willing to sacrifice high build times if it means more
    efficient run times.

    C++ and C aren't that far apart.   I work with two codebases,
    one in C++ (several million SLOC) and one in C (linux kernel).

    I'd like to see Bart (try to) compile linux with his C compiler.

    That would be a good test of his tool base.

    It would be a good test of anyone's. Maybe even of MSVC - could that
    build Linux?

    This is after all a project which expects to be built with gcc, and for
    which gcc has doubtless been tweaked to be able to build. Any other tool
    would have to be compatible, and even then, extra support may be needed:

    "The Linux kernel has always traditionally been compiled with GNU
    toolchains such as GCC and binutils. Ongoing work has allowed for Clang
    and LLVM utilities to be used as viable substitutes"

    Anyway I can tell you now that, even if I had the faintest clue how to
    go about the job even with gcc, my compiler wouldn't work, for a long
    list of reasons, not least that it produces Windows executables that run
    under Windows.

    What's happening here is that SL is in a job where he has to haul huge
    amounts of stuff using a fleet of trucks, and he's asking how well the
    upstart with the self-built sports car could cope with the same task.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Scott Lurndal on Mon Mar 24 23:01:16 2025
    On Mon, 24 Mar 2025 15:44:10 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    David Brown <david.brown@hesbynett.no> writes:
    On 24/03/2025 15:07, bart wrote:
    On 24/03/2025 11:51, David Brown wrote:
    On 23/03/2025 02:34, bart wrote:


    C programmers are typically not bothered about build times because
    a) their build times are rarely high (Scott's projects are C++), and
    b), they are willing to sacrifice high build times if it means more
    efficient run times.

    C++ and C aren't that far apart. I work with two codebases,
    one in C++ (several million SLOC) and one in C (linux kernel).


    With the computer resources you use for compilation of your simulator,
    compilation of the Linux kernel should finish rather quickly.
    At least, quickly by Muttley's standard. He said that 80 seconds are
    o.k. I suppose that it would take less than that.

    I'd like to see Bart (try to) compile linux with his C compiler.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Muttley@DastardlyHQ.org on Mon Mar 24 21:16:17 2025
    On 24/03/2025 16:17, Muttley@DastardlyHQ.org wrote:
    On Mon, 24 Mar 2025 16:02:57 +0000
    bart <bc@freeuk.com> wibbled:
    On 24/03/2025 15:00, Muttley@DastardlyHQ.org wrote:
    real 0m0.815s
    user 0m0.516s
    sys 0m0.252s


    So, your throughput is a whopping 9.5K lines/second?

    On a tired laptop with multiple files it seems pretty good to me.

    Build time is 1/15th of a second, for about 30K lines (so 450Klps). But
    I also want to compile this via C, to take advantage of gcc's superior
    optimiser. So first I transpile to C (that's also under 70ms):

    I'll paraphrase someone elses point - who fucking cares? If you need lightning
    fast compilation because you're constantly rebuilding your shit code to see if
    it'll even compile never mind work then that says a lot about you as a dev.
    The thinking/writing to compilation time ratio I imagine with most devs
    would probably be a minimum of 100 - 1, possibly much larger.

    If a build takes 0.1 seconds, then 100 times that is 10 seconds. I don't
    build that often on average, so I must do more thinking between
    compilations than you guys!

    And more work: even if I compile 1000 times a day, that is less than two minutes.

    It's funny actually that you are in favour of the incremental builds
    that makefiles help with. All about breaking things down into smaller
    steps ...

    ... EXCEPT when it comes to edit-run cycles. There you're in favour of
    bunching lots of changes together, and doing one big build followed by
    testing a dozen different things on that new version. (As though this
    was the 1960s and you only got one batch compilation per night.)

    I guess doing lots of small, incremental edit-run-test cycles, that test
    one detail at a time, goes out the window?

    What exactly is wrong with it?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From James Kuyper@21:1/5 to bart on Mon Mar 24 19:25:48 2025
    On 23/03/2025 02:34, bart wrote:

    It's strange: in one part of the computing world, the speed of building software is a very big deal. All sorts of efforts are going on to deal
    with it. Compilation speed for developers is always an issue. There is a general movement away from LLVM-based backends /because/ it is so slow.

    And yet in another part (namely comp.lang.c) it appears to be a total non-issue!

    Throughout my career, far less time was spent compiling my programs than
    was spent executing them. The overwhelming majority of the code I wrote
    was executed 24/7 on at least one machine, and usually hundreds, for
    several months at a time, for each version delivered. A single delivery
    might involve development and testing that might require a few dozen compilations. Thanks to effective use of makefiles, most compilations
    were of only a few modules at a time, but even a full compilation would
    take only a few minutes.

    And you find it odd that I consider the speed of the executable to be
    far more important than the speed of compilation? Why would you expect
    me to be so irrational as to have the opposite preference?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to James Kuyper on Tue Mar 25 00:53:56 2025
    On 24/03/2025 23:25, James Kuyper wrote:
    On 23/03/2025 02:34, bart wrote:

    It's strange: in one part of the computing world, the speed of building
    software is a very big deal. All sorts of efforts are going on to deal
    with it. Compilation speed for developers is always an issue. There is a
    general movement away from LLVM-based backends /because/ it is so slow.

    And yet in another part (namely comp.lang.c) it appears to be a total
    non-issue!

    Throughout my career, far less time was spent compiling my programs than
    was spent executing them.

    Whose time, yours, or the people that ran your programs?

    (I had 100s of customers some of whom were running my programs all day.
    One of my programs was run daily for 23 years.)

    The overwhelming majority of the code I wrote
    was executed 24/7 on at least one machine, and usually hundreds, for
    several months at a time, for each version delivered.

    I don't think I mentioned execution time. My remarks are about the
    developer experience. Yes, if you're going to make a production version
    or a long-running program, then it is worthwhile optimising it to the hilt.

    I just find compile times of even seconds annoying: imagine if you
    clicked on something (after clicking 100 buttons with instant response)
    and nothing happens ... maybe it turns out to be 7 seconds, or 17, but
    you don't know that while waiting, as no progress bar is shown.

    It's a very frustrating delay that breaks your concentration and
    destroys fluency.

    (There used to be a bug in Thunderbird where it would hang for seconds
    at a time while you were typing, and you had to pause until it caught
    up. Don't tell me you wouldn't find it annoying because it's only a 'few seconds'.

    You don't expect just 'typing' to take a lot of computation, and I don't
    expect a simple translation which I know can be done in T time, to take
    one to two magnitudes longer.)


    A single delivery
    might involve development and testing that might require a few dozen compilations. Thanks to effective use of makefiles, most compilations
    were of only a few modules at a time,

    I used to do that without makefiles! If you've been working on a project
    for a year, then you know exactly what the dependencies are. And when I
    did have to compile everything, it [my IDE that invoked the compiler]
    would show what it was up to. Not that it took that long anyway, as it
    zoomed through the displayed list of files.

    I considered it part of my job to get a workable edit-build-run cycle on
    any project.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to bart on Mon Mar 24 21:50:46 2025
    bart <bc@freeuk.com> writes:

    I just find compile times of even seconds annoying: [...]

    What is important for you to understand is that other
    people do not.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Janis Papanagnou@21:1/5 to bart on Tue Mar 25 08:19:14 2025
    On 25.03.2025 01:53, bart wrote:

    I used to do that without makefiles! If you've been working on a project
    for a year, then you know exactly what the dependencies are. [...]

    This explains a lot (about you, about your projects). It's not what
    non-trivial real life projects look like, obviously. - I wonder why
    you insist (and re-iterate) that this mindset would be relevant for professional software projects. It may "work" (sort of) for simple
    things that you do, but to infer that your simple approach here is
    of any relevance, as a sophisticated software development principle,
    and ignoring the fundamental divide and conquer principle, which is
    scalable and applicable for all project sizes, and especially for
    the non-trivial ones, is beyond me. If you're working just on small
    and the same (small) projects all the time, and also if you have no
    co-workers, or both, then you have a very special case; suboptimal
    private development habits may work for you but that's all. There's
    just no reason to not use makefiles (or other dependency managing
    tools) and let them decide what needs to be compiled [in no time].

    Janis

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Tue Mar 25 08:40:34 2025
    On Mon, 24 Mar 2025 18:20:02 +0000
    bart <bc@freeuk.com> wibbled:
    On 24/03/2025 16:56, Muttley@DastardlyHQ.org wrote:
    No, because unlike you I understand the concept of splitting into modules
    and having a makefile that just rebuilds what's changed, not the entire
    codebase.

    I said the project used 30 modules. The generated C version is a single
    module. In that form, it allows gcc to perform whole-program

    And there's your problem. And FWIW you don't need a single source file for whole program optimisation you muppet.

    No it isn't. THat example I gave you built in 0.8 seconds. It would make
    zero difference to me if it took 8 or 80.

    Really? It could take 80 seconds extra and you would just sit there and
    take it? Boy you must really like stopping at red lights then.

    So you never stop, you're coding 100% of the time without a break? Actually
    I can believe that, you sound a bit obsessive. Still, in that 80 secs I'd
    spend some of it replying to you. Then in the other 70 secs I'd read the news.

    In the case of Python they tolerate hopeless performance so who knows.

    Perhaps they're sensible enough to use Python where that is not
    relevant. However when they make a change, they want to see the results
    NOW. They wouldn't even understand why there need be any delay.

    Umm, no. And FWIW any program that's not some toy plaything that you write
    will probably do things like connecting to a DB/server etc. which will take
    its own time anyway, so very little is immediate.

    Says the guy who rebuilds everything from scratch each time.

    You're seriously suggesting I should use a makefile so that I can save
    0.065 seconds by only compiling one module?

    Who knows what difference it would make with your toy language. I really
    don't care either.

    For that matter, why do YOU use a makefile when your full build is only
    0.8 seconds?

    Modularising code is far more than just about compilation speed which you'd know if you had anything approaching a clue.

    Fast compilers and also whole-program compilers open up lots of new
    possibilities. But clearly you're quite happy being stuck in the stone age.

    The stone age is where they used one huge source file for a program.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Tue Mar 25 08:41:43 2025
    On Mon, 24 Mar 2025 21:16:17 +0000
    bart <bc@freeuk.com> wibbled:
    ... EXCEPT when it comes to edit-run cycles. There you're in favour of
    bunching lots of changes together, and doing one big build followed by
    testing a dozen different things on that new version. (As though this
    was the 1960s and you only got one batch compilation per night.)

    I'm sure that made some sense to you when you wrote it. I have no idea wtf you're talking about however.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Muttley@DastardlyHQ.org on Tue Mar 25 11:04:12 2025
    On 25/03/2025 08:41, Muttley@DastardlyHQ.org wrote:
    On Mon, 24 Mar 2025 21:16:17 +0000
    bart <bc@freeuk.com> wibbled:
    ... EXCEPT when it comes to edit-run cycles. There you're in favour of
    bunching lots of changes together, and doing one big build followed by
    testing a dozen different things on that new version. (As though this
    was the 1960s and you only got one batch compilation per night.)

    I'm sure that made some sense to you when you wrote it. I have no idea wtf you're talking about however.

    Then I have no idea WTF you're on about when you say that programmers
    should spend 100 times as long thinking as they do compiling. (Try
    suggesting that for any other kind of application!)

    Or WTF it is that you're 'on', assuming you're not just here to wind
    people up.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Muttley@DastardlyHQ.org on Tue Mar 25 11:09:21 2025
    On 25/03/2025 08:40, Muttley@DastardlyHQ.org wrote:


    For that matter, why do YOU use a makefile when your full build is only
    0.8 seconds?

    Modularising code is far more than just about compilation speed which you'd know if you had anything approaching a clue.

    You can modularise code without also needing a makefile!


    Fast compilers and also whole-program compilers open up lots of new
    possibilities. But clearly you're quite happy being stuck in the stone age.

    The stone age is where they used one huge source file for a program.

    Maybe they also used one huge binary for a program.

    You're either fucking stupid or an incredibly successful troll.

    No matter how many facts you're given you ignore the ones you don't like
    and twist things around to make some smart-ass comment.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Muttley@DastardlyHQ.org on Tue Mar 25 13:29:12 2025
    On Tue, 25 Mar 2025 08:40:34 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:

    On Mon, 24 Mar 2025 18:20:02 +0000
    bart <bc@freeuk.com> wibbled:

    Fast compilers and also whole-program compilers open up lots of new
    possibilities. But clearly you're quite happy being stuck in the
    stone age.

    The stone age is where they used one huge source file for a program.


    Separate compilation and modularization (the number of source files)
    are not directly related.
    When speed (or size, or correctness) is paramount, separate
    compilation is inevitably inferior to whole-program compilation.
    If people still use separate compilation, it is not because it is a
    good thing, but because compilers are too slow. Or, if we're
    talking about the Unix world, out of inertia.
    In the Windows world, release builds of the majority of C/C++ software
    are done with LTCG. Most developers do not even think about it; it's
    the default. Even in the Linux world, projects that care about the
    experience of their users, like for example Firefox, use LTCG as a
    minimum, sometimes even going for profile-guided whole-program
    compilations.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Michael S on Tue Mar 25 11:17:30 2025
    On 24/03/2025 21:01, Michael S wrote:
    On Mon, 24 Mar 2025 15:44:10 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    David Brown <david.brown@hesbynett.no> writes:
    On 24/03/2025 15:07, bart wrote:
    On 24/03/2025 11:51, David Brown wrote:
    On 23/03/2025 02:34, bart wrote:


    C programmers are typically not bothered about build times because
    a) their build times are rarely high (Scott's projects are C++), and
    b), they are willing to sacrifice high build times if it means more
    efficient run times.

    C++ and C aren't that far apart. I work with two codebases,
    one in C++ (several million SLOC) and one in C (linux kernel).


    With the computer resources you use for compilation of your simulator,
    compilation of the Linux kernel should finish rather quickly.
    At least, quickly by Muttley's standard. He said that 80 seconds are
    o.k. I suppose that it would take less than that.

    He (I assume) actually said that 80 seconds would be OK even for his
    7.5Kloc project:

    "No it isn't. THat example I gave you built in 0.8 seconds. It would
    make zero difference to me if it took 8 or 80."

    But I think by this point he's saying anything just to be contrary.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Peter 'Shaggy' Haywood@21:1/5 to special case in the spec. I on Tue Mar 25 21:41:04 2025
    Groovy hepcat DFS was jivin' in comp.lang.c on Sun, 23 Mar 2025 01:29
    am. It's a cool scene! Dig it.

    On 3/22/2025 4:07 AM, Peter 'Shaggy' Haywood wrote:
    Groovy hepcat DFS was jivin' in comp.lang.c on Wed, 19 Mar 2025 03:42
    pm. It's a cool scene! Dig it.

    On 3/18/2025 11:26 PM, Keith Thompson wrote:
    DFS <nospam@dfs.com> writes:

    There's your problem.

    https://cses.fi/problemset/text/2433

    "In all problems you should read input from standard input and
    write output to standard output."

    ha! It usually helps to read the instructions first.

    The autotester expects your program to read arguments from stdin,
    not from command line arguments.

    It probably passes no arguments to your program, so argv[1] is a null
    pointer. It's likely your program compiles (assuming the NBSP
    characters were added during posting) and crashes at runtime,
    producing no output.

    I KNEW clc would come through!

    Pretty easy fixes:

    1 use scanf()

    Normally I'd say take care with scanf(). But in this case, since the
    program is intended to be executed in an automated environment, it
    should be fine.
    The reason scanf() can be a bit iffy is that you can't control what a
    user will enter. If you search Google or Duck Duck Go for
    "comp.lang.c faq" you can find more information on this and other
    issues. (The FAQ is still out there, people..., somewhere...)

    https://c-faq.com/

    There we go! :)

    I still see links to that document from time to time, like on
    university websites.


    2 update int to long
    3 handle special case of n = 1

    The problem definition doesn't mention any special case. You
    should, I
    think, treat 1 like any other number. So the output for 1 should be

    1 4 2 1


    It's a 'special case' because n is already 1.

    No, there was no special case mentioned in the specification.
    Therefore 1 is not a special case, and you still run the algorithm on
    that. At least, that's how it should be. If it's not how it is, then
    the spec differs from the actual requirement.

    Your code passed all CSES tests but this one.

    If the test failed due to this, then they should have mentioned this
    special case in the spec. I wrote my code based on what was there, not
    what they left out.

    4 instead of collecting the results in a char variable, I print
    them as they're calculated

    Yep, that's a more usual approach.
    Another suggestion I have is to use a separate function to do part
    of
    the work. But it's not vital.
    Also, since the specification says that only positive numbers are
    to
    be accepted, it makes sense (to me, at least) to use an unsigned type
    for n.
    One more thing: using while(1){...break;} is a bit pointless. You
    can
    use do{...}while(1 != n) instead.
    Here's my solution, for what it's worth:

    #include <stdio.h>

    unsigned long weird(unsigned long n)
    {
        printf("%lu", n);

        if(n & 1)
        {
            /* Odd - multiply by 3 & add 1. */
            n = n * 3 + 1;
        }
        else
        {
            /* Even - divide by 2. */
            n /= 2;
        }
        return n;
    }

    int main(void)
    {
        unsigned long n;

        /* Get n from stdin. */
        scanf("%lu", &n);

        /* Now feed it to the algorithm. */
        do
        {
            n = weird(n);
            putchar(' ');
        } while(1 != n);

        printf("%lu\n", n);
        return 0;
    }

    Cool.

    I tweaked my original and got it down to:
    --------------------------------------------------------
    #include <stdio.h>

    int main(void)
    {
        long n = 0;
        scanf("%ld", &n);
        while(n > 1) {
            printf("%ld ", n);
            n = (n % 2) ? (n * 3 + 1) : (n / 2);
        }
        printf("1\n");
        return 0;
    }
    --------------------------------------------------------

    I also liked the Number Spiral, Coin Piles and Palindrome Reorder
    problems.

    Thanks for the input!

    No wuckin' forries, pal! :)

    --


    ----- Dig the NEW and IMPROVED news sig!! -----


    -------------- Shaggy was here! ---------------
    Ain't I'm a dawg!!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to bart on Tue Mar 25 13:51:59 2025
    bart <bc@freeuk.com> writes:
    On 24/03/2025 16:17, Muttley@DastardlyHQ.org wrote:

    It's funny actually that you are in favour of the incremental builds
    that makefiles help with. All about breaking things down into smaller
    steps ...

    ... EXCEPT when it comes to edit-run cycles. There you're in favour of bunching lots of changes together, and doing one big build followed by testing a dozen different things on that new version. (As though this
    was the 1960s and you only got one batch compilation per night.)

    EXCEPT nobody has claimed to be in favor of that. You're making shit
    up again.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Tue Mar 25 14:43:09 2025
    On Tue, 25 Mar 2025 11:04:12 +0000
    bart <bc@freeuk.com> wibbled:
    On 25/03/2025 08:41, Muttley@DastardlyHQ.org wrote:
    On Mon, 24 Mar 2025 21:16:17 +0000
    bart <bc@freeuk.com> wibbled:
    .... EXCEPT when it comes to edit-run cycles. There you're in favour of
    bunching lots of changes together, and doing one big build followed by
    testing a dozen different things on that new version. (As though this
    was the 1960s and you only got one batch compilation per night.)

    I'm sure that made some sense to you when you wrote it. I have no idea wtf you're talking about however.

    Then I have no idea WTF you're on about when you say that programmers
    should spend 100 times as long thinking as they do compiling. (Try

    I think that says everything we need to know about you. Hope I never have to use any of the trash code you write.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Scott Lurndal on Tue Mar 25 14:22:45 2025
    On 25/03/2025 13:51, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 24/03/2025 16:17, Muttley@DastardlyHQ.org wrote:

    It's funny actually that you are in favour of the incremental builds
    that makefiles help with. All about breaking things down into smaller
    steps ...

    ... EXCEPT when it comes to edit-run cycles. There you're in favour of
    bunching lots of changes together, and doing one big build followed by
    testing a dozen different things on that new version. (As though this
    was the 1960s and you only got one batch compilation per night.)

    EXCEPT nobody has claimed to be in favor of that. You're making shit
    up again.

    This guy was:

    On 24/03/2025 16:17, Muttley@DastardlyHQ.org wrote:
    I'll paraphrase someone elses point - who fucking cares? If you need
    lightning
    fast compilation because you're constantly rebuilding your shit code
    to see if
    it'll even compile never mind work then that says a lot about you as
    a dev.
    The thinking/writing to compilation time ratio I imagine with most
    devs would
    probably be a minimum of 100 - 1, possibly much larg

    Yet, you have to get on /my/ back?

    Please stop that.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Tue Mar 25 14:46:22 2025
    On Tue, 25 Mar 2025 11:09:21 +0000
    bart <bc@freeuk.com> wibbled:
    On 25/03/2025 08:40, Muttley@DastardlyHQ.org wrote:


    For that matter, why do YOU use a makefile when your full build is only
    0.8 seconds?

    Modularising code is far more than just about compilation speed which you'd >> know if you had anything approaching a clue.

    You can modularise code without also needing a makefile!

    IYYM you can build modularised code without it. Sure, sometimes, so long as the modules don't have varying compilation dependencies. But then you end up rebuilding everything.

    The stone age is where they used one huge source file for a program.

    Maybe they also used one huge binary for a program.

    Listen sonny, in large projects in companies - ie not the toy code you work
    on in your bedroom - different people will have checked out separate modules and be working on them at any one time. That's a lot simpler than having one huge source file that then has a boatload of merge issues when a dozen people all try to check their changes back in.

    You're either fucking stupid or an incredibly successful troll.

    No matter how many facts you're given you ignore the ones you don't like
    and twist things around to make some smart-ass comment.

    When you get a dev job in the real world get back to me.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Muttley@DastardlyHQ.org on Tue Mar 25 15:04:56 2025
    On 25/03/2025 14:46, Muttley@DastardlyHQ.org wrote:
    On Tue, 25 Mar 2025 11:09:21 +0000
    bart <bc@freeuk.com> wibbled:
    On 25/03/2025 08:40, Muttley@DastardlyHQ.org wrote:


    For that matter, why do YOU use a makefile when your full build is only >>>> 0.8 seconds?

    Modularising code is far more than just about compilation speed which you'd >>> know if you had anything approaching a clue.

    You can modularise code without also needing a makefile!

    IYYM you can build modularised code without it. Sure , sometimes, so long as the modules don't have varying compilation dependencies. But then you end up rebuilding everything.

    And? I thought you said it didn't matter how long it took! Perhaps
    compilation time does matter after all...

    Listen sonny, in large projects in companies - ie not the toy code you work on in your bedroom - different people will have checked out seperate modules and be working on them at any one time. Thats a lot simpler than having one huge source file that then has a boatload of merge issues when a dozen people all try to check their changes back in.

    Fucking hell, you still don't get it. That single source file is MACHINE-GENERATED. Nobody's going to be even looking inside let alone
    trying to maintain it.

    You might as well complain that a single EXE file is difficult to maintain!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to bart on Tue Mar 25 17:14:51 2025
    On Tue, 25 Mar 2025 14:58:28 +0000
    bart <bc@freeuk.com> wrote:


    So, what exactly is released to end-users?

    In the case of Firefox? An installer. After the installer finishes, I
    suppose it ends up as a directory with an exe file + a few DLLs.
    In the case of in-house software, either the same or just an exe, possibly
    with a few accompanying files, like default options etc...

    Where is the final linking
    done? If on the user's machine, will the necessary tools also be
    bundled?


    It seems you are thinking about source distribution rather than normal (== binary) distribution.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Tue Mar 25 15:09:17 2025
    On Tue, 25 Mar 2025 15:04:56 +0000
    bart <bc@freeuk.com> wibbled:
    On 25/03/2025 14:46, Muttley@DastardlyHQ.org wrote:
    On Tue, 25 Mar 2025 11:09:21 +0000
    bart <bc@freeuk.com> wibbled:
    On 25/03/2025 08:40, Muttley@DastardlyHQ.org wrote:


    For that matter, why do YOU use a makefile when your full build is only >>>>> 0.8 seconds?

    Modularising code is far more than just about compilation speed which you'd

    know if you had anything approaching a clue.

    You can modularise code without also needing a makefile!

    IYYM you can build modularised code without it. Sure, sometimes, so long as the modules don't have varying compilation dependencies. But then you end up rebuilding everything.

    And? I thought you said it didn't matter how long it took! Perhaps compilation time does matter after all...

    It doesn't matter all that much unless it's something huge like the Linux kernel.
    But CPU usage on a busy machine does matter, and modularisation is a good thing for the other reasons I've given.

    Listen sonny, in large projects in companies - ie not the toy code you work >> on in your bedroom - different people will have checked out seperate modules >> and be working on them at any one time. Thats a lot simpler than having one >> huge source file that then has a boatload of merge issues when a dozen people

    all try to check their changes back in.

    Fucking hell, you still don't get it. That single source file is MACHINE-GENERATED. Nobody's going to be even looking inside let alone
    trying to maintain it.

    You're the one extolling the virtues of a single source file, not me.

    Here's an idea - instead of outputting C why don't you make it output
    machine code instead. Might be more useful.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Michael S on Tue Mar 25 14:58:28 2025
    On 25/03/2025 11:29, Michael S wrote:
    On Tue, 25 Mar 2025 08:40:34 -0000 (UTC)
    Muttley@DastardlyHQ.org wrote:

    On Mon, 24 Mar 2025 18:20:02 +0000
    bart <bc@freeuk.com> wibbled:

    Fast compilers and also whole-program compilers open up lots of new
    possibilities. But clearly you're quite happy being stuck in the
    stone age.

    The stone age is where they used one huge source file for a program.


    Separate compilation process and modularization/# of source files are
    not directly related.
    When the speed (or size, or correctness) is paramount, separate
    compilation is inevitably inferior to whole-program compilation.
    If people still use separate compilation, it does not happen because
    it is a good thing, but because compilers are too slow. Or, if we're
    talking about the Unix world, out of inertia.
    In the Windows world, release builds of the majority of C/C++ software
    are done with LTCG. Most developers do not even think about it; it's the
    default. Even in the Linux world, projects that care about the experience
    of their users, like for example Firefox, use LTCG as a minimum,
    sometimes even going for profile-guided whole-program compilations.

    So, what exactly is released to end-users? Where is the final linking
    done? If on the user's machine, will the necessary tools also be bundled?

    The way I do it is illustrated below. M/MA/C/O represent individual
    source or object files. Groups like (C, C, C) represent the multiple
    files, perhaps in assorted locations, of the original source code.

    A typical C project is built like this, with -> representing compiling,
    linking etc:

    (C, C, C) -> (O, O, O) -> (EXE)

    For a project to be locally built (compiled, optimised and linked) at a
    user site, (C, C, C) must be distributed (plus the usual junk in
    addition to the compiler and linker).

    My language works like this on my home machine:

    (M, M, M) -> (EXE)

    But if I wanted people to build from source on their own machine (and
    assuming they had my compiler), the process would be:

    (M, M, M) -> (MA) -> (EXE)

    MA is a one-file source amalgamation, which is what is provided. (So the
    user needs two files: that, and the compiler.)

    However nobody has my compiler, as such binaries are not trusted, so the process is often this instead:

    (M, M, M) -> (C) -> (O) -> (EXE)

    I provide the one-file C rendering. The process from that point on
    depends on their C compiler, but this is typical, even if O files are
    normally hidden, if a discrete linker is used.

    So whole-program optimisation is a useful by-product.

    (But it includes only the code I write; not libraries. Presumably LTCG
    works with part-compiled libraries that are to be statically linked? I
    only work with DLLs. However I don't see LTCG being practical with giant libraries like GTK.)

    Anyway it is this intermediate single C file that is causing many people
    here to have kittens and to accuse me of not caring about modularisation.
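    For what it's worth, the (C, C, C) -> (O, O, O) -> (EXE) pipeline above
    is exactly what a minimal makefile encodes; the file names here are
    invented for illustration, and recipe lines must begin with a tab:

```make
CC = gcc
CFLAGS = -O2

# Hypothetical module list standing in for (C, C, C).
OBJS = main.o parse.o gen.o

prog: $(OBJS)
	$(CC) $(CFLAGS) -o $@ $(OBJS)   # (O, O, O) -> (EXE)

%.o: %.c
	$(CC) $(CFLAGS) -c $<           # (C) -> (O)

clean:
	rm -f prog $(OBJS)
```

    Adding -flto to both the compile and link flags buys the LTCG-style
    whole-program optimisation discussed earlier while keeping separate
    compilation of the sources.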

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Muttley@DastardlyHQ.org on Tue Mar 25 16:16:23 2025
    On 2025-03-25, Muttley@DastardlyHQ.org <Muttley@DastardlyHQ.org> wrote:
    Listen sonny, in large projects in companies - ie not the toy code you work on in your bedroom - different people will have checked out seperate modules and be working on them at any one time.

    Splitting into files does make a big difference with centralized version control that requires you to maintain a lock on an object that you are modifying.

    It makes little difference with distributed versioning systems like git.

    You can catenate 10 .c files into a single one, and it would be just
    as easy to work on in parallel.

    However, git still has the concept of a file being one object.
    You can "git log" a single file, and that is very useful.
    If the file is a combination of what should be ten files, then
    it becomes less useful.

    I'm not suggesting people should combine files; it's a bad idea for
    multiple reasons. Just that you *can* pretty easily work concurrently on
    large files unless you're using a very outdated version control system.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Michael S on Tue Mar 25 16:37:01 2025
    On 25/03/2025 15:14, Michael S wrote:
    On Tue, 25 Mar 2025 14:58:28 +0000
    bart <bc@freeuk.com> wrote:


    So, what exactly is released to end-users?

    I case of Firefox? An installer. After installer finished, I suppose it
    ends up as directory with exe file + few DLLs.
    In case of in-house software, either the same or just an exe. possibly
    with few accompanying file, like default options etc...

    Where is the final linking
    done? If on the user's machine, will the necessary tools also be
    bundled?


    It seems, you are thinking about source distribution rather than normal (==binary) distribution.


    No, I want to distribute programs, but binaries are problematic, so I
    have to take one step back. Instead of one convenient binary file, I
    supply one convenient file in another format.

    If that format is HLL code then it needs a finalising process.

    I also have a private binary format, but while that may appear as data
    to AV, humans might still not trust it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Muttley@DastardlyHQ.org on Tue Mar 25 16:40:38 2025
    On 25/03/2025 15:09, Muttley@DastardlyHQ.org wrote:

    You're the one extolling the virtues of a single source file, not me.

    Here's an idea - instead of outputting C why don't you make it output
    machine code instead. Might be more useful.


    What makes you think I don't? The C code is mainly for people who can't
    or won't run Windows binaries. It also makes it incredibly easy to build
    from source (gcc prog.c).

    In my case I want to apply gcc-level optimisations that my compiler
    doesn't do. So it has to be C code if I want that final 25% extra speed.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to bart on Tue Mar 25 19:00:47 2025
    On Tue, 25 Mar 2025 16:37:01 +0000
    bart <bc@freeuk.com> wrote:

    On 25/03/2025 15:14, Michael S wrote:
    On Tue, 25 Mar 2025 14:58:28 +0000
    bart <bc@freeuk.com> wrote:


    So, what exactly is released to end-users?

    I case of Firefox? An installer. After installer finished, I
    suppose it ends up as directory with exe file + few DLLs.
    In case of in-house software, either the same or just an exe.
    possibly with few accompanying file, like default options etc...

    Where is the final linking
    done? If on the user's machine, will the necessary tools also be
    bundled?


    It seems, you are thinking about source distribution rather than
    normal (==binary) distribution.


    No, I want to distribute programs, but binaries are problematic, so I
    have to take one step back. Instead of one convenient binary file, I
    supply one convenient file in another format.

    If that format is HLL code then it needs a finalising process.

    I also have a private binary format, but while that may appear as
    data to AV, humans might still not trust it.



    Most people trust Mozilla binary distributions.
    The majority of those who don't wouldn't trust their source code
    distributions either.

    For in-house software, an employee that does not trust a binary created
    by his colleagues and approved by the company's QA is unlikely to keep
    his job for long.

    For your case, I can believe that there exist people (World is a big
    place after all) that would not trust your binary, but would trust
    amalgamated source code, but I personally consider them naive to
    extreme. Ken Thompson's famous "Reflections on Trusting Trust" come to
    mind.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Muttley@DastardlyHQ.org on Wed Mar 26 10:07:05 2025
    On 26/03/2025 09:20, Muttley@DastardlyHQ.org wrote:
    On Tue, 25 Mar 2025 16:40:38 +0000
    bart <bc@freeuk.com> wibbled:
    On 25/03/2025 15:09, Muttley@DastardlyHQ.org wrote:

    You're the one extolling the virtues of a single source file, not me.

    Here's an idea - instead of outputting C why don't you make it output
    machine code instead. Might be more useful.


    What makes you think I don't? The C code is mainly for people who can't
    or won't run Windows binaries. It also makes it incredibly easy to build from source (gcc prog.c).

    In my case I want to apply gcc-level optimisations that my compiler
    doesn't do. So it has to be C code if I want that final 25% extra speed.

    Whats this? You mean your amazing zippy fast compiler can't optimise for shit?

    Well, it is compiled with itself and it manages that in 70ms despite
    lacking the optimiser. Otherwise it might do it in 60ms. It is amazingly
    fast either way.

    Maybe gcc isn't so bad after all eh?

    I'm not complaining about the quality of its code.

    However, it is 200 times bigger than my product, and can take 100 times
    longer to compile code, which in the current project might yield 10-25%
    extra speed.

    That is only needed here because I'm compiling benchmark results to
    compare with other products that will also be using the best possible optimisation.

    For everyday use however, that small boost is not relevant, and not
    worth the extra hassle.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Wed Mar 26 09:20:17 2025
    On Tue, 25 Mar 2025 16:40:38 +0000
    bart <bc@freeuk.com> wibbled:
    On 25/03/2025 15:09, Muttley@DastardlyHQ.org wrote:

    You're the one extolling the virtues of a single source file, not me.

    Here's an idea - instead of outputting C why don't you make it output
    machine code instead. Might be more useful.


    What makes you think I don't? The C code is mainly for people who can't
    or won't run Windows binaries. It also makes it incredibly easy to build
    from source (gcc prog.c).

    In my case I want to apply gcc-level optimisations that my compiler
    doesn't do. So it has to be C code if I want that final 25% extra speed.

    What's this? You mean your amazing zippy fast compiler can't optimise for shit? Maybe gcc isn't so bad after all, eh?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Muttley@DastardlyHQ.org on Wed Mar 26 18:06:09 2025
    On 2025-03-26, Muttley@DastardlyHQ.org <Muttley@DastardlyHQ.org> wrote:
    Whats this? You mean your amazing zippy fast compiler can't optimise for shit?
    Maybe gcc isn't so bad after all eh?

    Ah, but:


    - gcc doesn't produce better code than it did 25 years ago, when it
    was at least an order of magnitude smaller and two orders faster.
    At least not for tightly written programs where the C programmer
    has done optimizing at the source level, so that the compiler has
    little more to think about beyond good instruction selection and
    cleaning up the pessimizations it has itself introduced.

    - gcc is still pretty slow when you have no optimization enabled (-O0).


    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to Kaz Kylheku on Thu Mar 27 00:22:44 2025
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:
    On 2025-03-26, Muttley@DastardlyHQ.org <Muttley@DastardlyHQ.org> wrote:
    Whats this? You mean your amazing zippy fast compiler can't optimise for shit?
    Maybe gcc isn't so bad after all eh?

    Ah, but:


    - gcc doesn't produce better code than it did 25 years go, when it
    was at least an order of magnitude smaller and two orders faster.
    At least not for tightly written programs where the C programmer
    has done optimizing at the source level, so that the compiler has
    little more to think about beyond good instruction selection and
    cleaning up the pessimizations it has itself introduced.

    No, gcc produces better code. Both 25 years ago and now one
    can find situations where gcc output could be improved by
    rather simple transformations, but it is less frequent now.
    Concerning "optimizing at the source level", modern machines
    have instructions that cannot be directly expressed at the
    source level, but which can speed up the resulting programs.
    In particular vector instructions. Modern gcc is smart enough
    to realize that code using separate scalar variables performs
    some (not all!) operations in parallel, and to use vector
    instructions.

    How much improvement? Probably in the 5-15% range on average
    for hand-optimized programs.

    Concerning size, you are right, there is a significant increase in
    size. Concerning speed, that is debatable. On the same machine
    the same somewhat silly file containing just declarations needs
    9.894s using gcc-12.2 and 10.668s using gcc-3.4.6. A different
    silly example containing trivial code needs 27.947s using gcc-12.2
    and 12.627s using gcc-3.4.6. Both were at default settings
    (no optimization). As you see, depending on the content of the
    file, gcc-12.2 can be slightly faster or a few times slower than
    gcc-3.4.6 when doing non-optimizing compilation.

    Modern gcc can do whole-program optimization, which can take
    a lot more time than function-by-function optimization done
    by gcc-3.4.6. But IME on realistic programs (in particular
    split into moderately sized files) optimization increases
    compile time, but only by a factor of a few, both for gcc-12.2 and
    for older versions. In other words, typically optimization
    does not lead to a catastrophic increase in compile time,
    both for modern and old gcc.

    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Waldek Hebisch on Thu Mar 27 14:22:40 2025
    antispam@fricas.org (Waldek Hebisch) writes:
    Kaz Kylheku <643-408-1753@kylheku.com> wrote:
    On 2025-03-26, Muttley@DastardlyHQ.org <Muttley@DastardlyHQ.org> wrote:
    Whats this? You mean your amazing zippy fast compiler can't optimise for shit?
    Maybe gcc isn't so bad after all eh?

    Ah, but:


    - gcc doesn't produce better code than it did 25 years go, when it
    was at least an order of magnitude smaller and two orders faster.
    At least not for tightly written programs where the C programmer
    has done optimizing at the source level, so that the compiler has
    little more to think about beyond good instruction selection and
    cleaning up the pessimizations it has itself introduced.

    No, gcc produces better code. Both 25 years ago and now one
    can find situations where gcc output could be improved by
    rather simple transformations, but it is less frequent now.
    <snip>

    In other words, typically optimization
    does not lead to catastrophic increase in compile time,
    both for modern and old gcc.

    There are some pathological cases; I have one
    source file that takes almost 7 minutes to compile
    when using -O3 on a very-high-end xeon box. Mostly
    buried in the overall compile time when using parallel
    make.

    The code could be restructured to compile in a few seconds;
    but that would require substantial changes to the rest
    of the codebase. Compiling with -O0 for development
    testing reduces the compile time to a few seconds.

    Since the run time of the project is far, far, far more
    important than the compile time, we apply our resources
    to improving functionality and performance for the end
    user of the product rather than worry about a few minutes
    of compile time for the developers.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Scott Lurndal on Thu Mar 27 10:54:16 2025
    scott@slp53.sl.home (Scott Lurndal) writes:

    [...] I have one
    source file that takes almost 7 minutes to compile
    when using -O3 on a very-high-end xeon box. Mostly
    buried in the overall compile time when using parallel
    make.

    The code could be restructured to compile in a few seconds;
    but that would require substantial changes to the rest
    of the codebase. Compiling with -O0 for development
    testing reduces the compile time to a few seconds.

    How long does it take compiling with -O1?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Tim Rentsch on Fri Mar 28 16:13:29 2025
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    scott@slp53.sl.home (Scott Lurndal) writes:

    [...] I have one
    source file that takes almost 7 minutes to compile
    when using -O3 on a very-high-end xeon box. Mostly
    buried in the overall compile time when using parallel
    make.

    The code could be restructured to compile in a few seconds;
    but that would require substantial changes to the rest
    of the codebase. Compiling with -O0 for development
    testing reduces the compile time to a few seconds.

    How long does it take compiling with -O1?

    Using -O1 saves 14 seconds on the long-pole.


    $ time mr -s -j96
    COMPILE g.cpp
    BUILD lib/lib_g.so
    BUILDSO libsim.so.1.0
    BUILD TARGET sim

    real 14m0.76s
    user 13m52.28s
    sys 0m20.13s

    $ time md -s -j96
    COMPILE g.cpp
    BUILD lib_g.so
    BUILDSO libsim.so.1.0
    BUILD TARGET sim

    real 13m46.49s
    user 13m42.17s
    sys 0m16.66s

    To be clear, we know that this is ridiculous, the generated
    header file totals 1.25 million lines, including a single
    function with over 200,000 SLOC. Feature creep, antique
    algorithms, screwed up third-party ip-xact collateral and
    tight development schedules.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Scott Lurndal on Fri Mar 28 16:40:52 2025
    On 28/03/2025 16:13, Scott Lurndal wrote:
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    scott@slp53.sl.home (Scott Lurndal) writes:

    [...] I have one
    source file that takes almost 7 minutes to compile
    when using -O3 on a very-high-end xeon box. Mostly
    buried in the overall compile time when using parallel
    make.

    The code could be restructured to compile in a few seconds;
    but that would require substantial changes to the rest
    of the codebase. Compiling with -O0 for development
    testing reduces the compile time to a few seconds.

    How long does it take compiling with -O1?

    Using -O1 saves 14 seconds on the long-pole.


    $ time mr -s -j96
    COMPILE g.cpp
    BUILD lib/lib_g.so
    BUILDSO libsim.so.1.0
    BUILD TARGET sim

    real 14m0.76s
    user 13m52.28s
    sys 0m20.13s

    $ time md -s -j96
    COMPILE g.cpp
    BUILD lib_g.so
    BUILDSO libsim.so.1.0
    BUILD TARGET sim

    real 13m46.49s
    user 13m42.17s
    sys 0m16.66s

    To be clear, we know that this is ridiculous, the generated
    header file totals 1.25 million lines, including a single
    function with over 200,000 SLOC. Feature creep, antique
    algorithms, screwed up third-party ip-xact collateral and
    tight development schedules.

    So, 13:40 minutes for 1.25M lines? (I assume that header contains code
    not just declarations.)

    That would make it 1.5Kloc/second, but that is also apparently over 96
    cores (or threads). That comes to 16 lines per second per thread.

    Yeah, ridiculous is almost an understatement.

    That 200K lines in one function looks suspicious. Modern compilers like
    to use SSA in functions, but that can yield huge numbers of temporaries.

    Compiling with -O0 for development
    testing reduces the compile time to a few seconds.

    That would need 100x speedup with -O0 (for, say, 8 seconds), but I've
    never seen that. However, it would give 150Klps over 96 threads, which
    is plausible.
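    The throughput arithmetic above can be checked mechanically; the
    figures come straight from the quoted build log (1.25M lines, 13m40s,
    and the 96 from the -j96 flag):

    ```c
    /* Check of the throughput figures quoted above: 1.25M lines in
       13m40s of wall time, with up to 96 parallel job slots. */
    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        int lines      = 1250000;
        int seconds    = 13 * 60 + 40;     /* 13m40s -> 820 s */
        int per_second = lines / seconds;  /* ~1.5 Kloc/s overall */
        int per_slot   = per_second / 96;  /* ~16 lines/s per job slot */

        printf("%d lines/s overall, %d lines/s per slot\n",
               per_second, per_slot);

        assert(per_second == 1524);
        assert(per_slot == 15);            /* i.e. about 16 when rounded */
        return 0;
    }
    ```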

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Scott Lurndal on Fri Mar 28 10:57:42 2025
    scott@slp53.sl.home (Scott Lurndal) writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:

    scott@slp53.sl.home (Scott Lurndal) writes:

    [...] I have one
    source file that takes almost 7 minutes to compile
    when using -O3 on a very-high-end xeon box. Mostly
    buried in the overall compile time when using parallel
    make.

    The code could be restructured to compile in a few seconds;
    but that would require substantial changes to the rest
    of the codebase. Compiling with -O0 for development
    testing reduces the compile time to a few seconds.

    How long does it take compiling with -O1?

    Using -O1 saves 14 seconds on the long-pole.


    $ time mr -s -j96
    COMPILE g.cpp
    BUILD lib/lib_g.so
    BUILDSO libsim.so.1.0
    BUILD TARGET sim

    real 14m0.76s
    user 13m52.28s
    sys 0m20.13s

    $ time md -s -j96
    COMPILE g.cpp
    BUILD lib_g.so
    BUILDSO libsim.so.1.0
    BUILD TARGET sim

    real 13m46.49s
    user 13m42.17s
    sys 0m16.66s

    To be clear, we know that this is ridiculous, the generated
    header file totals 1.25 million lines, including a single
    function with over 200,000 SLOC. Feature creep, antique
    algorithms, screwed up third-party ip-xact collateral and
    tight development schedules.

    Thank you. I was just curious.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to bart on Fri Mar 28 20:41:45 2025
    bart <bc@freeuk.com> writes:
    On 28/03/2025 16:13, Scott Lurndal wrote:
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    scott@slp53.sl.home (Scott Lurndal) writes:

    [...] I have one
    source file that takes almost 7 minutes to compile
    when using -O3 on a very-high-end xeon box. Mostly
    buried in the overall compile time when using parallel
    make.

    The code could be restructured to compile in a few seconds;
    but that would require substantial changes to the rest
    of the codebase. Compiling with -O0 for development
    testing reduces the compile time to a few seconds.

    How long does it take compiling with -O1?

    Using -O1 saves 14 seconds on the long-pole.


    $ time mr -s -j96
    COMPILE g.cpp
    BUILD lib/lib_g.so
    BUILDSO libsim.so.1.0
    BUILD TARGET sim

    real 14m0.76s
    user 13m52.28s
    sys 0m20.13s

    $ time md -s -j96
    COMPILE g.cpp
    BUILD lib_g.so
    BUILDSO libsim.so.1.0
    BUILD TARGET sim

    real 13m46.49s
    user 13m42.17s
    sys 0m16.66s

    To be clear, we know that this is ridiculous, the generated
    header file totals 1.25 million lines, including a single
    function with over 200,000 SLOC. Feature creep, antique
    algorithms, screwed up third-party ip-xact collateral and
    tight development schedules.

    So, 13:40 minutes for 1.25M lines? (I assume that header contains code
    not just declarations.)

    That would make it 1.5Kloc/second, but it also apparently over 96 cores
    (or threads)? That comes to 16 lines per second per thread.

    The gnu compiler is not multithreaded. The single thread was
    compute bound for 13 minutes and 46 seconds.



    That 200K lines in one function looks suspicious

    Why?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Scott Lurndal on Fri Mar 28 22:18:48 2025
    On 28/03/2025 20:41, Scott Lurndal wrote:
    bart <bc@freeuk.com> writes:
    On 28/03/2025 16:13, Scott Lurndal wrote:
    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    scott@slp53.sl.home (Scott Lurndal) writes:

    [...] I have one
    source file that takes almost 7 minutes to compile
    when using -O3 on a very-high-end xeon box. Mostly
    buried in the overall compile time when using parallel
    make.

    The code could be restructured to compile in a few seconds;
    but that would require substantial changes to the rest
    of the codebase. Compiling with -O0 for development
    testing reduces the compile time to a few seconds.

    How long does it take compiling with -O1?

    Using -O1 saves 14 seconds on the long-pole.


    $ time mr -s -j96
    COMPILE g.cpp
    BUILD lib/lib_g.so
    BUILDSO libsim.so.1.0
    BUILD TARGET sim

    real 14m0.76s
    user 13m52.28s
    sys 0m20.13s

    $ time md -s -j96
    COMPILE g.cpp
    BUILD lib_g.so
    BUILDSO libsim.so.1.0
    BUILD TARGET sim

    real 13m46.49s
    user 13m42.17s
    sys 0m16.66s

    To be clear, we know that this is ridiculous, the generated
    header file totals 1.25 million lines, including a single
    function with over 200,000 SLOC. Feature creep, antique
    algorithms, screwed up third-party ip-xact collateral and
    tight development schedules.

    So, 13:40 minutes for 1.25M lines? (I assume that header contains code
    not just declarations.)

    That would make it 1.5Kloc/second, but it also apparently over 96 cores
    (or threads)? That comes to 16 lines per second per thread.

    The gnu compiler is not multithreaded. The single thread was
    compute bound for 13 minutes and 46 seconds.

    So what was that -j96 about?



    That 200K lines in one function looks suspicious

    Why?

    I thought I explained. Compile-time for a long function can increase non-linearly in a complex compiler. It could also use up more memory
    than the same line count would if split into separate functions.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Keith Thompson on Fri Mar 28 22:48:11 2025
    On 28/03/2025 22:33, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/03/2025 20:41, Scott Lurndal wrote:
    [...]
    The gnu compiler is not multithreaded. The single thread was
    compute bound for 13 minutes and 46 seconds.

    So what was that -j96 about?

    "-j96" is an option to GNU make, not to the compiler. It might invoke
    gcc multiple times in parallel, but each invocation of gcc will still be single-threaded.


    So, is there just one instance of gcc at work during those 13 minutes,
    or multiple?

    In other words, would it take longer than 13:40 mins without it, or does
    it help? If -j96 makes no difference, then why specify it?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Keith Thompson on Sat Mar 29 00:32:35 2025
    On 28/03/2025 23:53, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/03/2025 22:33, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/03/2025 20:41, Scott Lurndal wrote:
    [...]
    The gnu compiler is not multithreaded. The single thread was
    compute bound for 13 minutes and 46 seconds.

    So what was that -j96 about?
    "-j96" is an option to GNU make, not to the compiler. It might
    invoke
    gcc multiple times in parallel, but each invocation of gcc will still be single-threaded.

    So, is there just one instance of gcc at work during those 13
    minutes, or multiple?

    In other words, would it take longer than 13:40 mins without it, or
    does it help? If -j96 makes no difference, then why specify it?

    I haven't done any measurements, but I don't know what's unclear.

    If a single thread was compute bound for 13:46, using "-j96"
    won't make that single thread run any faster, but it can enable
    "make" to do other things while that single thread is running.
    It's also common to use "-j" without an argument, to run as many
    jobs simultaneously as possible, or "-j$(nproc)" to run as many
    parallel jobs as the number of processing units available (if you
    have the "nproc" command; it's part of GNU coreutils).

    I can imagine "-j" causing problems if dependencies are expressed incorrectly, but I haven't run into such a problem myself.


    Are you saying that this job consists of a single C (or C++) source
    file, so it is not possible to parallelise the processes necessary to
    compile it? (I've no idea of gcc's capabilities there.)

    That would be funny given that I've had criticisms myself for attempting
    to compile monolithic C programs.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to bart on Sat Mar 29 13:37:09 2025
    On 29/03/2025 01:32, bart wrote:
    On 28/03/2025 23:53, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/03/2025 22:33, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/03/2025 20:41, Scott Lurndal wrote:
    [...]
    The gnu compiler is not multithreaded.  The single thread was
    compute bound for 13 minutes and 46 seconds.

    So what was that -j96 about?
    "-j96" is an option to GNU make, not to the compiler.  It might
    invoke
    gcc multiple times in parallel, but each invocation of gcc will
    still be
    single-threaded.

    So, is there just one instance of gcc at work during those 13
    minutes, or multiple?

    In other words, would it take longer than 13:40 mins without it, or
    does it help? If -j96 makes no difference, then why specify it?

    I haven't done any measurements, but I don't know what's unclear.

    If a single thread was compute bound for 13:46, using "-j96"
    won't make that single thread run any faster, but it can enable
    "make" to do other things while that single thread is running.
    It's also common to use "-j" without an argument, to run as many
    jobs simultaneously as possible, or "-j$(nproc)" to run as many
    parallel jobs as the number of processing units available (if you
    have the "nproc" command; it's part of GNU coreutils).

    I can imagine "-j" causing problems if dependencies are expressed
    incorrectly, but I haven't run into such a problem myself.


    Are you saying that this job consists of a single C (or C++) source
    file, so it is not possible to parallelise the processes necessary to
    compile it? (I've no idea of gcc's capabilities there.)

    That would be funny given that I've had criticisms myself for attempting
    to compile monolithic C programs.

    My guess - and only Scott can say for sure - is that his software
    contains a very large number of files, no doubt some C and some C++.

    Today's lesson comes in three parts.

    First, "make -j".

    When you use "make", the "make" program will coordinate all the programs
    needed to do the build - running gcc on source files, running
    pre-processing steps, post-processing steps, linkers, analysers,
    documentation programs, little utility programs - anything that needs to
    be done for the build. "make" does this in an order to match the
    dependencies - if action "A" depends on the output from action "B", then
    action "A" is not started until "B" is finished. And if the inputs
    needed for "B" have not changed since it's output was last generated,
    then action "B" doesn't need to be run at all. The tasks are collected together into a directed acyclic graph, using the partial ordering of dependencies.

    When you use "make -j", "make" will run all these tasks in parallel.
    The partial order of the DAG is preserved. So if you have a 96 core
    system, and you have hundreds of files that need compiled in this build,
    and you run "make -j 96", then "make" will coordinate 96 instances of
    the compiler (or other needed tasks) running at the same time. It won't
    run more than 96 of them - as compilations finish, they free up "job
    slots" in make's "job server", and other compilations or tasks are
    started. When there are not enough tasks that can be done (perhaps due
    to the dependencies), fewer tasks will run in parallel.

    As always with multi-tasking of any sort, if there is a long-running
    task, then it takes the time it takes - you can't speed it up, no matter
    how many cpu cores you have.

    So one of Scott's compiles takes 13 minutes. "make -j" won't speed that
    up. But it will mean that any other compilations can be done in
    parallel. Maybe he has 600 other files that each take 30 seconds to
    compile. With "make -j", the build takes the 13 minutes it has to for
    the one awkward file - all the rest are compiled while that is going on.
    With non-parallel "make", it would take 5 hours (if I've done my sums correctly).

    Thus "make -j" is a really good idea, even if you have a particularly long-running task (compilation or anything else).
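    The dependency-driven scheduling described above can be sketched as a
    minimal makefile (file names hypothetical; recipe lines must be
    tab-indented):

    ```makefile
    # Hypothetical fragment: the three objects do not depend on each
    # other, so "make -j3" compiles them concurrently; the link rule
    # depends on all of them and so starts only when every compile is done.
    CXX      := g++
    CXXFLAGS := -O3
    OBJS     := a.o b.o g.o

    sim: $(OBJS)
    	$(CXX) -o $@ $(OBJS)

    %.o: %.cpp
    	$(CXX) $(CXXFLAGS) -c -o $@ $<
    ```

    With "make -j3", if g.cpp were a 13-minute long pole, a.o and b.o
    would finish in parallel while g.o grinds on.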



    Second, why is gcc single-threaded when compiles can sometimes take a
    long time?

    The prime reason for that is that multi-threaded compilation is very
    difficult. There are some aspects of it that could be run in parallel,
    such as some of the analysis and optimisation could be split per
    function. But the overhead of multi-threading and keeping shared data
    and information safe and synchronous would be significant, and you would
    still typically run the big time-consuming part - the inter-procedural optimisations - as a single thread. In practice, most big pieces of
    software are built of many files, so parallelising at the build level
    (such as "make -j") is easier, safer, and more efficient.

    The bottleneck of many big builds is the link process. Traditionally,
    this needs to collect together all the object files and static
    libraries. In more modern systems, especially with C++, it also
    de-duplicates sections. Since linking is a task that usually can't
    begin until all the compilation is finished, and it is usually just one
    single task, it makes sense to focus on making linking multi-threaded.
    And this is what we see with modern linkers - a great deal of effort is
    put into multi-threading the linking process (especially when partitions
    from the link are passed back to the compiler for link-time optimisation
    and code generation).
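    One concrete knob on the gcc side: link-time optimisation can itself
    be run in parallel with -flto=N (a real gcc option; the toy source
    files here are invented for illustration):

    ```shell
    # Toy demo of parallel LTO (file contents invented).
    cat > twice.c <<'EOF'
    int twice(int x) { return 2 * x; }
    EOF
    cat > main.c <<'EOF'
    int twice(int);
    #include <stdio.h>
    int main(void) { printf("%d\n", twice(21)); return 0; }
    EOF

    gcc -O2 -flto -c twice.c               # objects carry IR for LTO
    gcc -O2 -flto -c main.c
    gcc -O2 -flto=4 -o app twice.o main.o  # up to 4 parallel LTO jobs
    ./app                                  # prints 42
    ```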


    The third point from this thread, is why is gcc so slow on a particular
    C file? As you have noted before, some aspects of compilation and
    optimisation - particularly inter-procedural optimisation - increase super-linearly with size, both the size of individual functions and the
    number of functions. I don't know what this particular file is, but
    given what I know of Scotts work and my own experience, I think this
    could be a generated file for hardware simulation. These typically lead
    to very large files and very large functions, with a great many
    variables that are used in simple expressions or statements (like "if (node_1234.enabled && clock.rising_edge) node_1234.next =
    node_1235.output"). Tracking all these variables and their lifetimes,
    and re-arranging code in an efficient manner, becomes a very time
    consuming problem for the compiler. But that effort can make a
    significant difference to the run-time of the simulation, which will
    normally be orders of magnitude longer than the compilation time. Thus
    it can be worth having code structured this way.

    It is not a typical use-case for compilation, and thus not a major focus
    for compiler development, but it is used in real systems.
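    The generated style being described looks roughly like this (a toy
    sketch with invented names; real generators emit hundreds of
    thousands of such statements into one function):

    ```c
    #include <assert.h>
    #include <stdbool.h>

    struct node { bool enabled; bool next; bool output; };
    struct clk  { bool rising_edge; };

    /* One explicit statement per net, not a loop: with 200K of these in
       a single function, the compiler must track every variable's
       lifetime at once - the source of the super-linear cost. */
    static void step(struct node *node_1234, const struct node *node_1235,
                     const struct clk *clock)
    {
        if (node_1234->enabled && clock->rising_edge)
            node_1234->next = node_1235->output;
        /* ... many thousands of similar generated statements ... */
    }

    int main(void)
    {
        struct node a = { .enabled = true };
        struct node b = { .output  = true };
        struct clk  c = { .rising_edge = true };

        step(&a, &b, &c);
        assert(a.next);    /* value propagated from node_1235.output */
        return 0;
    }
    ```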

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Keith Thompson on Sat Mar 29 16:24:27 2025
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:
    bart <bc@freeuk.com> writes:
    On 28/03/2025 23:53, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/03/2025 22:33, Keith Thompson wrote:
    bart <bc@freeuk.com> writes:
    On 28/03/2025 20:41, Scott Lurndal wrote:
    [...]
    The gnu compiler is not multithreaded. The single thread was
    compute bound for 13 minutes and 46 seconds.

    So what was that -j96 about?
    "-j96" is an option to GNU make, not to the compiler. It might
    invoke
    gcc multiple times in parallel, but each invocation of gcc will still be single-threaded.

    So, is there just one instance of gcc at work during those 13
    minutes, or multiple?

    In other words, would it take longer than 13:40 mins without it, or
    does it help? If -j96 makes no difference, then why specify it?
    I haven't done any measurements, but I don't know what's unclear.
    If a single thread was compute bound for 13:46, using "-j96"
    won't make that single thread run any faster, but it can enable
    "make" to do other things while that single thread is running.
    It's also common to use "-j" without an argument, to run as many
    jobs simultaneously as possible, or "-j$(nproc)" to run as many
    parallel jobs as the number of processing units available (if you
    have the "nproc" command; it's part of GNU coreutils).
    I can imagine "-j" causing problems if dependencies are expressed
    incorrectly, but I haven't run into such a problem myself.

    Are you saying that this job consists of a single C (or C++) source
    file, so it is not possible to parallelise the processes necessary to
    compile it? (I've no idea of gcc's capabilities there.)

    Huh??

    No, I didn't say that. I was merely trying to explain how "make -j"
    works, since you seemed to be confused about it. I'll assume you
    understand it now. If you're curious about the source code, ask
    Scott Lurndal; I don't know anything about it (and I don't know where
    you got the idea that I do).

    Bart needs to argue about something. Anything.

    It is a single source file, compiled with a single instance
    of g++. That -j was provided to make is irrelevant.

    And this actually is a degenerate example of what happens when
    one tries to include an entire program in a single source file,
    something that bart, rather senselessly, continues to advocate.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to David Brown on Sat Mar 29 16:33:46 2025
    David Brown <david.brown@hesbynett.no> writes:
    On 29/03/2025 01:32, bart wrote:

    <snip>

    So one of Scott's compiles takes 13 minutes. "make -j" won't speed that
    up. But it will mean that any other compilations can be done in
    parallel. Maybe he has 600 other files that each take 30 seconds to
    compile. With "make -j", the build takes the 13 minutes it has to for
    the one awkward file - all the rest are compiled while that is going on.
    With non-parallel "make", it would take 5 hours (if I've done my sums
    correctly).

    Thus "make -j" is a really good idea, even if you have a particularly long-running task (compilation or anything else).

    Indeed there are several hundred source files. The one under
    discussion in this thread happens to be the 'long pole' that
    dominates the overall build time.

    <snip>

    The bottleneck of many big builds is the link process. Traditionally,
    this needs to collect together all the object files and static
    libraries. In more modern systems, especially with C++, it also de-duplicates sections. Since linking is a task that usually can't
    begin until all the compilation is finished, and it is usually just one single task, it makes sense to focus on making linking multi-threaded.
    And this is what we see with modern linkers - a great deal of effort is
    put into multi-threading the linking process (especially when partitions
    from the link are passed back to the compiler for link-time optimisation
    and code generation).

    In this project, the main executable link time
    is inconsequential. The remainder of the project generates a few
    dozen unix shared objects (what bart calls DLLs) which are dynamically
    loaded at runtime based on run-time configuration of the application.

    $ size a.out
    text data bss dec hex filename
    6639477 85792 1861744 8587013 830705 a.out

    None of which take any significant amount of time to link.



    The third point from this thread, is why is gcc so slow on a particular
    C file? As you have noted before, some aspects of compilation and optimisation - particularly inter-procedural optimisation - increase super-linearly with size, both the size of individual functions and the number of functions. I don't know what this particular file is, but
    given what I know of Scotts work and my own experience, I think this
    could be a generated file for hardware simulation.

    An excellent guess.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Scott Lurndal on Sat Mar 29 17:23:40 2025
    scott@slp53.sl.home (Scott Lurndal) writes:
    David Brown <david.brown@hesbynett.no> writes:
    On 29/03/2025 01:32, bart wrote:

    <snip>

    So one of Scott's compiles takes 13 minutes. "make -j" won't speed that
    up. But it will mean that any other compilations can be done in
    parallel. Maybe he has 600 other files that each take 30 seconds to
    compile. With "make -j", the build takes the 13 minutes it has to for
    the one awkward file - all the rest are compiled while that is going on.
    With non-parallel "make", it would take 5 hours (if I've done my sums
    correctly).

    Thus "make -j" is a really good idea, even if you have a particularly
    long-running task (compilation or anything else).

    Indeed there are several hundred source files. The one under
    discussion in this thread happens to be the 'long pole' that
    dominates the overall build time.

    So this discussion prompted me to manually break up that
    automatically generated large function into seven smaller functions.

    The compile time with -O3 dropped by a factor of almost
    seven: 2 minutes 30 seconds.
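    The restructuring can be sketched like this (names and bodies
    invented; the point is only that the external interface stays the
    same while each body the optimiser must analyse gets ~7x smaller):

    ```c
    #include <assert.h>

    /* Hypothetical sketch: one huge generated step function split into
       phases.  Behaviour is unchanged; only the function boundaries
       move, capping the size of each unit the -O3 passes work on. */
    static void step_phase_1(int *state) { state[0] += 1; /* ~30K lines */ }
    static void step_phase_2(int *state) { state[1] += 2; /* ~30K lines */ }
    /* ... phases 3 through 7 ... */

    void step(int *state)           /* same external interface as before */
    {
        step_phase_1(state);
        step_phase_2(state);
        /* ... call the remaining phases in the original order ... */
    }

    int main(void)
    {
        int s[2] = { 0, 0 };
        step(s);
        assert(s[0] == 1 && s[1] == 2);  /* same result as the monolith */
        return 0;
    }
    ```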

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Scott Lurndal on Sat Mar 29 18:11:57 2025
    On 29/03/2025 17:23, Scott Lurndal wrote:
    scott@slp53.sl.home (Scott Lurndal) writes:
    David Brown <david.brown@hesbynett.no> writes:
    On 29/03/2025 01:32, bart wrote:

    <snip>

    So one of Scott's compiles takes 13 minutes. "make -j" won't speed that >>> up. But it will mean that any other compilations can be done in
    parallel. Maybe he has 600 other files that each take 30 seconds to
    compile. With "make -j", the build takes the 13 minutes it has to for
    the one awkward file - all the rest are compiled while that is going on. >>> With non-parallel "make", it would take 5 hours (if I've done my sums
    correctly).

    Thus "make -j" is a really good idea, even if you have a particularly
    long-running task (compilation or anything else).

    Indeed there are several hundred source files. The one under
    discussion in this thread happens to be the 'long pole' that
    dominates the overall build time.

    So this discussion prompted me to manually break up that
    automatically generated large function into seven smaller functions.

    The compile time with -O3 dropped by a factor of almost
    seven: 2 minutes 30 seconds.

    Was the final binary still usable? If so then that's a result of sorts;
    you just need to tweak the automatic generation to do the same.


    Bart needs to argue about something. Anything.

    What makes you think I was arguing? I was just trying to understand how
    the compilation time was spent.

    You however seem just to want to have a personal go at me, constantly.

    (Would you have been inclined to do that experiment if I hadn't said
    anything about it?)

    It is a single source file, compiled with a single instance
    of g++. That -j was provided to make is irrelevent.

    Yet here you say something different:

    Indeed there are several hundred source files. The one under
    discussion in this thread happens to be the 'long pole' that
    dominates the overall build time.

    So your presentation of it was confusing.

    And this actually is a degenerate example of what happens when
    one tries to include an entire program in a single source file,
    something that bart, rather senselessly, continues to advocate.

    I'm not advocating it. It's just what I happen to use for /generated/
    source files, and what some people use for tidily packaging the sources
    for their programs or libraries for distribution.

    For this purpose, it is (1) usually a one-off build, as all sources have
    to be processed anyway; (2) usually there are no 200Kloc functions; (3)
    you get the side benefit of whole-program compilation.

    So there it makes sense.
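    For what that packaging style looks like in practice, here is a toy
    amalgamated build (file contents invented): the distributed file
    #includes the real sources, so the compiler sees the whole program as
    one translation unit.

    ```shell
    # Toy "single source file" distribution (contents invented).
    cat > add.c <<'EOF'
    int add(int a, int b) { return a + b; }
    EOF
    cat > prog.c <<'EOF'
    int add(int, int);
    #include <stdio.h>
    int main(void) { printf("%d\n", add(2, 3)); return 0; }
    EOF
    cat > amalgamated.c <<'EOF'
    /* One translation unit: whole-program view for free,
       at the cost of always recompiling everything. */
    #include "add.c"
    #include "prog.c"
    EOF

    gcc -O2 -o demo amalgamated.c
    ./demo    # prints 5
    ```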

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)