• Re: Threads across programming languages

    From Paavo Helde@21:1/5 to Stefan Ram on Mon Apr 29 19:13:09 2024
    XPost: comp.lang.c++

    On 29.04.2024 18:19, Stefan Ram wrote:
    paavo512 <paavo@osa.pri.ee> wrote or quoted:
    |Anyway, multithreading performance is a non-issue for Python so far as
    |the Python interpreter runs in a single-threaded regime anyway, under a
    |global GIL lock. They are planning to get rid of GIL, but this work is
    |still in development AFAIK. I'm sure it will take years to stabilize the
    |whole Python zoo without GIL.

    The GIL only prevents multiple Python statements from being
    interpreted simultaneously, but if you're waiting on inputs (like
    sockets), it's not active, so that could be distributed across
    multiple cores.

    With asyncio, however, you can easily arrange for a single thread
    to "wait in parallel" on thousands of sockets, and there are fewer
    opportunities for errors than with multithreading.

    In C++, async io is provided e.g. by the asio library.

    Just for waiting on thousands of sockets I believe a single select()
    call would be sufficient, no threads or asio needed. But you probably
    meant something more.


    Additionally, there are libraries like numpy that use true
    multithreading internally to distribute computational tasks
    across multiple cores. By using such libraries, you can take
    advantage of that. (Not to mention the AI libraries that have their
    work done in highly parallel fashion by graphics cards.)

    If you want real threads, you could probably work with Cython
    sometimes.

    Huh, my goal is to avoid Python, not to work with it. Unfortunately this (avoiding Python) becomes harder all the time.


    Other languages like JavaScript seem to have an advantage there
    because they don't know a GIL, but with JavaScript, for example,
    it's because it always runs in a single thread overall. And in
    the languages where there are threads without a GIL, you quickly
    realize that programming correct non-trivial programs with
    parallel processing is error-prone.

    Been there, done that, worked through it ... 15 years ago. Nowadays
    non-trivial multi-threaded parallel processing in C++ seems pretty easy
    for me, one just needs to follow some principles and take care to get
    the details correct. I guess it's about the same as memory management in
    C, one can get it correct by taking some care. In C++ I can forget
    about memory management as it is largely automatic, but for
    multithreading I still need to take care.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Ram@21:1/5 to paavo@osa.pri.ee on Mon Apr 29 15:19:19 2024
    XPost: comp.lang.c++

    paavo512 <paavo@osa.pri.ee> wrote or quoted:
    |Anyway, multithreading performance is a non-issue for Python so far as
    |the Python interpreter runs in a single-threaded regime anyway, under a
    |global GIL lock. They are planning to get rid of GIL, but this work is
    |still in development AFAIK. I'm sure it will take years to stabilize the
    |whole Python zoo without GIL.

    The GIL only prevents multiple Python statements from being
    interpreted simultaneously, but if you're waiting on inputs (like
    sockets), it's not active, so that could be distributed across
    multiple cores.

    With asyncio, however, you can easily arrange for a single thread
    to "wait in parallel" on thousands of sockets, and there are fewer
    opportunities for errors than with multithreading.
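    A minimal sketch of that claim (the 0.01-second sleep stands in for a
    socket read; none of the names below are from the original post): one
    thread "waits in parallel" on a thousand pending operations.

```python
import asyncio
import threading

async def fake_socket_read(i):
    await asyncio.sleep(0.01)            # event loop switches to other tasks here
    return (i, threading.current_thread().name)

async def main():
    return await asyncio.gather(*(fake_socket_read(i) for i in range(1000)))

results = asyncio.run(main())
threads_used = {name for _, name in results}
print(len(results), len(threads_used))   # all 1000 "reads" finish on one thread
```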

    Additionally, there are libraries like numpy that use true
    multithreading internally to distribute computational tasks
    across multiple cores. By using such libraries, you can take
    advantage of that. (Not to mention the AI libraries that have their
    work done in highly parallel fashion by graphics cards.)

    If you want real threads, you could probably work with Cython
    sometimes.

    Other languages like JavaScript seem to have an advantage there
    because they don't know a GIL, but with JavaScript, for example,
    it's because it always runs in a single thread overall. And in
    the languages where there are threads without a GIL, you quickly
    realize that programming correct non-trivial programs with
    parallel processing is error-prone.

    Often in Python you can use "ThreadPoolExecutor" to start
    multiple threads. If the GIL then becomes a problem (which is
    not the case if you're waiting on I/O), you can easily swap it
    out for "ProcessPoolExecutor": Then processes are used instead
    of threads, and there is no GIL for those.

    If four cores are available, by dividing up compute-intensive tasks
    using "ProcessPoolExecutor", you can expect a speedup factor of two
    to eight.
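    A sketch of the swap described above. The task function here is
    invented for illustration; the point is that both executors share one
    interface, so moving from threads to processes is a one-name change.

```python
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def square(n):                 # placeholder task
    return n * n

def run_with(executor_class):
    with executor_class(max_workers=4) as pool:
        return list(pool.map(square, range(10)))

thread_results = run_with(ThreadPoolExecutor)      # fine for I/O-bound work
print(thread_results)
# For CPU-bound work, swap in processes (no GIL across processes):
# process_results = run_with(ProcessPoolExecutor)
```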

    With the Celery library, tasks can be distributed across multiple
    processes that can also run on different computers. See, for
    example, "Parallel Programming with Python" by Jan Palach.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Mon Apr 29 20:31:15 2024
    XPost: comp.lang.c++

    On Mon, 29 Apr 2024 18:18:57 +0200, Bonita Montero wrote:

    But you need multithreading to have maximum throughput since you often process the data while other data is available.

    In a lot of applications, the bottleneck is the network I/O, or a GUI
    waiting for the next user event, that kind of thing. In this situation, multithreading is more trouble than it’s worth. This is why coroutines (in the form of async/await) have made a comeback over the last decade or so.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Stefan Ram on Mon Apr 29 20:36:57 2024
    XPost: comp.lang.c++

    On 29 Apr 2024 15:19:19 GMT, Stefan Ram wrote:

    With asyncio, however, you can easily arrange for a single thread
    to "wait in parallel" on thousands of sockets, and there are fewer
    opportunities for errors than with multithreading.

    It makes event-loop programming much more convenient. I posted a
    simple example here from some years ago <https://github.com/HamPUG/meetings/tree/master/2017/2017-05-08/ldo-generators-coroutines-asyncio>:
    compare the version based on callbacks, with the one using asyncio:
    the former is about 30% bigger.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Paavo Helde on Mon Apr 29 20:29:42 2024
    XPost: comp.lang.c++

    On Mon, 29 Apr 2024 19:13:09 +0300, Paavo Helde wrote:

    Just for waiting on thousands of sockets I believe a single select()
    call would be sufficient ...

    We use poll(2) or epoll(2) nowadays. select(2) is antiquated.
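    A small sketch of the same idea from Python: the selectors module
    wraps these interfaces, and on Linux DefaultSelector typically
    resolves to the epoll-based implementation. The socketpair below
    stands in for real network sockets.

```python
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()
sel.register(a, selectors.EVENT_READ)

b.send(b"ping")                       # make `a` readable
events = sel.select(timeout=1)        # blocks until a registered fd is ready
ready = [key.fileobj for key, _ in events]
print(a in ready)

sel.unregister(a)
a.close()
b.close()
```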

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Mon Apr 29 22:41:23 2024
    XPost: comp.lang.c++

    On Mon, 29 Apr 2024 13:33:22 -0700, Chris M. Thomasson wrote:

    On 4/29/2024 1:29 PM, Lawrence D'Oliveiro wrote:

    On Mon, 29 Apr 2024 19:13:09 +0300, Paavo Helde wrote:

    Just for waiting on thousands of sockets I believe a single select()
    call would be sufficient ...

    We use poll(2) or epoll(2) nowadays. select(2) is antiquated.

    AIO on Linux, IOCP on windows.

    AIO is for block I/O. Try io_uring instead.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Tue Apr 30 00:11:43 2024
    XPost: comp.lang.c++

    On Mon, 29 Apr 2024 16:46:17 -0700, Chris M. Thomasson wrote:

    On 4/29/2024 3:41 PM, Lawrence D'Oliveiro wrote:

    On Mon, 29 Apr 2024 13:33:22 -0700, Chris M. Thomasson wrote:

    On 4/29/2024 1:29 PM, Lawrence D'Oliveiro wrote:

    On Mon, 29 Apr 2024 19:13:09 +0300, Paavo Helde wrote:

    Just for waiting on thousands of sockets I believe a single select()
    call would be sufficient ...

    We use poll(2) or epoll(2) nowadays. select(2) is antiquated.

    AIO on Linux, IOCP on windows.

    AIO is for block I/O. Try io_uring instead.

    Afaict, AIO is analogous to IOCP.

    So not really analogous to io_uring, then?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Tue Apr 30 04:09:00 2024
    XPost: comp.lang.c++

    On Tue, 30 Apr 2024 05:58:31 +0200, Bonita Montero wrote:

    Am 29.04.2024 um 22:31 schrieb Lawrence D'Oliveiro:

    On Mon, 29 Apr 2024 18:18:57 +0200, Bonita Montero wrote:

    But you need multithreading to have maximum throughput since you often
    process the data while other data is available.

    In a lot of applications, the bottleneck is the network I/O, or a GUI
    waiting for the next user event, that kind of thing. In this situation,
    multithreading is more trouble than it’s worth. This is why coroutines
    (in the form of async/await) have made a comeback over the last decade
    or so.

    Having a single thread and using state machines is more effort.

    It would indeed. That’s why coroutines (async/await) are so handy.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Tue Apr 30 06:42:58 2024
    XPost: comp.lang.c++

    On Tue, 30 Apr 2024 07:59:06 +0200, Bonita Montero wrote:

    Am 30.04.2024 um 06:09 schrieb Lawrence D'Oliveiro:

    On Tue, 30 Apr 2024 05:58:31 +0200, Bonita Montero wrote:

    Having a single thread and using state machines is more effort.

    It would indeed. That’s why coroutines (async/await) are so handy.

    Using a thread is even more handy.

    Do you know what a “heisenbug” is?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Stefan Ram@21:1/5 to Stefan Ram on Tue Apr 30 09:04:48 2024
    XPost: comp.lang.c++

    ram@zedat.fu-berlin.de (Stefan Ram) wrote or quoted:
    The GIL only prevents multiple Python statements from being
    interpreted simultaneously, but if you're waiting on inputs (like
    sockets), it's not active, so that could be distributed across
    multiple cores.

    Disclaimer: This is not on-topic here as it discusses Python,
    not C or C++.

    FWIW, here's some multithreaded Python code modeled after what
    I use in an application.

    I am using Python to prepare a press review for me, getting article
    headers from several newssites, removing all headers matching a list
    of regexps, and integrating everything into a single HTML resource.
    (I do not like to read about Lindsay Lohan, for example, so articles
    with the text "Lindsay Lohan" will not show up on my HTML review.)

    I'm usually downloading all pages at once using Python threads,
    which will make sure that a thread uses the CPU while another
    thread is waiting for TCP/IP data. This is the code, taken from
    my Python program and a bit simplified:

    from multiprocessing.dummy import Pool

    ...

    with Pool( 9 if fast_internet else 1 ) as pool:
        for i in range( 9 ):
            content[ i ] = pool.apply_async( fetch,[ uris[ i ] ])
        pool.close()
        pool.join()

    . I'm using my "fetch" function to fetch a single URI, and the
    loop starts nine threads within a thread pool to fetch the
    content of those nine URIs "in parallel". This is observably
    faster than corresponding sequential code.

    (However, sometimes I have a slow connection and have to download
    sequentially in order not to overload the slow connection, which
    would result in stalled downloads. To accomplish this, I just
    change the "9" to "1" in the first line above.)

    In case you wonder about the "dummy":

    |The multiprocessing.dummy module provides a wrapper
    |for the multiprocessing module, except implemented using
    |thread-based concurrency.
    |
    |It provides a drop-in replacement for multiprocessing,
    |allowing a program that uses the multiprocessing API to
    |switch to threads with a single change to import statements.

    . So, this is an area where multithreading the Python way is easy
    to use and enhances performance even in the presence of the GIL!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Tue Apr 30 09:09:32 2024
    XPost: comp.lang.c++

    On Tue, 30 Apr 2024 09:37:18 +0200, Bonita Montero wrote:

    Am 30.04.2024 um 08:42 schrieb Lawrence D'Oliveiro:
    On Tue, 30 Apr 2024 07:59:06 +0200, Bonita Montero wrote:

    Am 30.04.2024 um 06:09 schrieb Lawrence D'Oliveiro:

    On Tue, 30 Apr 2024 05:58:31 +0200, Bonita Montero wrote:

    Having a single thread and using state machines is more effort.

    It would indeed. That’s why coroutines (async/await) are so handy.

    Using a thread is even more handy.

    Do you know what a “heisenbug” is?

    [No]

    Do you know what a “race condition” is?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Paavo Helde on Tue Apr 30 16:48:51 2024
    XPost: comp.lang.c++

    Paavo Helde <eesnimi@osa.pri.ee> writes:
    On 29.04.2024 18:19, Stefan Ram wrote:
    paavo512 <paavo@osa.pri.ee> wrote or quoted:
    |Anyway, multithreading performance is a non-issue for Python so far as
    |the Python interpreter runs in a single-threaded regime anyway, under a
    |global GIL lock. They are planning to get rid of GIL, but this work is
    |still in development AFAIK. I'm sure it will take years to stabilize the
    |whole Python zoo without GIL.

    The GIL only prevents multiple Python statements from being
    interpreted simultaneously, but if you're waiting on inputs (like
    sockets), it's not active, so that could be distributed across
    multiple cores.

    With asyncio, however, you can easily arrange for a single thread
    to "wait in parallel" on thousands of sockets, and there are fewer
    opportunities for errors than with multithreading.

    In C++, async io is provided e.g. by the asio library.

    And the POSIX aio interfaces, on systems that support them.

    I used lio_listio heavily in Oracle's RDBMS.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Tue Apr 30 20:22:40 2024
    XPost: comp.lang.c++

    On Tue, 30 Apr 2024 11:25:13 +0200, Bonita Montero wrote:

    Am 30.04.2024 um 11:09 schrieb Lawrence D'Oliveiro:
    On Tue, 30 Apr 2024 09:37:18 +0200, Bonita Montero wrote:

    Am 30.04.2024 um 08:42 schrieb Lawrence D'Oliveiro:
    On Tue, 30 Apr 2024 07:59:06 +0200, Bonita Montero wrote:

    Am 30.04.2024 um 06:09 schrieb Lawrence D'Oliveiro:

    On Tue, 30 Apr 2024 05:58:31 +0200, Bonita Montero wrote:

    Having a single thread and using state machines is more effort.

    It would indeed. That’s why coroutines (async/await) are so handy.
    Using a thread is even more handy.

    Do you know what a “heisenbug” is?

    [No]

    Do you know what a “race condition” is?

    [No]

    You haven’t actually done much thread programming, have you?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Blue-Maned_Hawk@21:1/5 to Lawrence D'Oliveiro on Wed May 1 06:29:40 2024
    XPost: comp.lang.c++

    Lawrence D'Oliveiro wrote:

    On Tue, 30 Apr 2024 07:59:06 +0200, Bonita Montero wrote:

    Am 30.04.2024 um 06:09 schrieb Lawrence D'Oliveiro:

    On Tue, 30 Apr 2024 05:58:31 +0200, Bonita Montero wrote:

    Having a single thread and using state machines is more effort.

    It would indeed. That’s why coroutines (async/await) are so handy.

    Using a thread is even more handy.

    Do you know what a “heisenbug” is?

    You don't need threads to get heisenbugs.



    --
    Blue-Maned_Hawk│shortens to Hawk│/blu.mɛin.dʰak/│he/him/his/himself/Mr. blue-maned_hawk.srht.site
    1, 4, 2, 5, 3!

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Wed May 1 07:10:09 2024
    XPost: comp.lang.c++

    On Wed, 1 May 2024 06:54:13 +0200, Bonita Montero wrote:

    Boost.ASIO does that all for you with a convenient interface.
    If enabled it even uses io_uring or the Windows' pendant.

    How many languages does it support?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Wed May 1 07:09:31 2024
    XPost: comp.lang.c++

    On Wed, 1 May 2024 06:53:21 +0200, Bonita Montero wrote:

    Am 29.04.2024 um 22:29 schrieb Lawrence D'Oliveiro:

    We use poll(2) or epoll(2) nowadays. select(2) is antiquated.

    Use Boost.ASIO.

    And what does that use?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Wed May 1 07:09:09 2024
    XPost: comp.lang.c++

    On Wed, 1 May 2024 06:29:40 -0000 (UTC), Blue-Maned_Hawk wrote:

    You don't need threads to get heisenbugs.

    They are particularly prone to them.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Wed May 1 08:53:42 2024
    XPost: comp.lang.c++

    On Wed, 1 May 2024 10:11:14 +0200, Bonita Montero wrote:

    Am 01.05.2024 um 09:09 schrieb Lawrence D'Oliveiro:
    On Wed, 1 May 2024 06:53:21 +0200, Bonita Montero wrote:

    Am 29.04.2024 um 22:29 schrieb Lawrence D'Oliveiro:

    We use poll(2) or epoll(2) nowadays. select(2) is antiquated.

    Use Boost.ASIO.

    And what does that use?

    Boost.ASIO can even use io_uring if it is available.

    No async/await? Oh, they haven’t added that to C++--yet.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Wed May 1 20:34:25 2024
    XPost: comp.lang.c++

    On Wed, 1 May 2024 10:59:16 +0200, Bonita Montero wrote:

    Am 01.05.2024 um 10:53 schrieb Lawrence D'Oliveiro:

    On Wed, 1 May 2024 10:13:03 +0200, Bonita Montero wrote:

    Am 01.05.2024 um 09:10 schrieb Lawrence D'Oliveiro:

    On Wed, 1 May 2024 06:54:13 +0200, Bonita Montero wrote:

    Boost.ASIO does that all for you with a convenient interface.
    If enabled it even uses io_uring or the Windows' pendant.

    How many languages does it support?

    Just C++ ...

    Not much use, then.

    System-level programming is mostly done in C++.

    No, it is actually mostly C, with Rust making inroads these days.

    And you don’t have to be doing “system-level” programming to be needing event-driven paradigms.

    But functions and classes are not first-class objects in C++, ...

    Of course, since C++11.

    No they aren’t. You cannot easily define a C++ function that returns a general function or class as a result, just for example.

    You cannot define function factories and class factories, like you can
    in Python.

    Python is nothing for me since it is extremely slow.

    Remember, we’re talking about maximizing I/O throughput here, so CPU is
    not the bottleneck.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Lawrence D'Oliveiro on Wed May 1 21:00:19 2024
    XPost: comp.lang.c++

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Wed, 1 May 2024 11:00:04 +0200, Bonita Montero wrote:

    Am 01.05.2024 um 10:53 schrieb Lawrence D'Oliveiro:

    No async/await? Oh, they haven’t added that to C++--yet.

    No, Boost.ASIO is event driven with asynchronous callbacks in a foreign
    thread's context.

    Callbacks can be a clunky way of event handling, since they force you to
    break up your logic sequence into discontiguous pieces. This is why
    coroutines have become popular, since they keep the logic flow together.

    Callbacks work just fine, as the logic for submitting a request
    is quite different from the logic for completing a request; indeed,
    they more closely mirror the hardware interrupt that signals completion.

    I wouldn't call coroutines popular at all, outside of python generators.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Wed May 1 20:31:04 2024
    XPost: comp.lang.c++

    On Wed, 1 May 2024 11:00:04 +0200, Bonita Montero wrote:

    Am 01.05.2024 um 10:53 schrieb Lawrence D'Oliveiro:

    No async/await? Oh, they haven’t added that to C++--yet.

    No, Boost.ASIO is event driven with asynchronous callbacks in a foreign thread's context.

    Callbacks can be a clunky way of event handling, since they force you to
    break up your logic sequence into discontiguous pieces. This is why
    coroutines have become popular, since they keep the logic flow together.
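    The contrast can be sketched as follows (the two-step sequence is
    invented for illustration): callback style splits one logic flow
    across separate functions, while the coroutine reads top to bottom.

```python
import asyncio

# Callback style: the sequence is scattered over separate functions.
def step_one(loop, done):
    loop.call_soon(step_two, loop, done, "one")    # hand off to the next piece

def step_two(loop, done, previous):
    done.set_result(previous + ",two")

# Coroutine style: the same sequence in one place.
async def as_coroutine():
    first = "one"
    await asyncio.sleep(0)                         # stand-in for an awaited I/O step
    return first + ",two"

async def main():
    loop = asyncio.get_running_loop()
    done = loop.create_future()
    step_one(loop, done)
    callback_result = await done
    coroutine_result = await as_coroutine()
    print(callback_result == coroutine_result)
    return callback_result, coroutine_result

cb, co = asyncio.run(main())
```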

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Scott Lurndal on Thu May 2 00:05:24 2024
    XPost: comp.lang.c++

    On Wed, 01 May 2024 21:00:19 GMT
    scott@slp53.sl.home (Scott Lurndal) wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:
    On Wed, 1 May 2024 11:00:04 +0200, Bonita Montero wrote:

    Am 01.05.2024 um 10:53 schrieb Lawrence D'Oliveiro:

    No async/await? Oh, they haven’t added that to C++--yet.

    No, Boost.ASIO is event driven with asynchronous callbacks in a
    foreign thread's context.

    Callbacks can be a clunky way of event handling, since they force
    you to break up your logic sequence into discontiguous pieces. This
    is why coroutines have become popular, since they keep the logic
    flow together.

    Callbacks work just fine, as the logic for submitting a request
    is quite different from the logic for completing a request; indeed,
    they more closely mirror the hardware interrupt that signals
    completion.

    I wouldn't call coroutines popular at all, outside of python
    generators.

    My impression was that in the golang world co-routines are relatively
    popular. But I could be wrong about it.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Scott Lurndal on Wed May 1 23:05:50 2024
    XPost: comp.lang.c++

    On Wed, 01 May 2024 21:00:19 GMT, Scott Lurndal wrote:

    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    Callbacks can be a clunky way of event handling, since they force you to
    break up your logic sequence into discontiguous pieces. This is why
    coroutines have become popular, since they keep the logic flow together.

    Callbacks work just fine, as the logic for submitting a request
    is quite different from the logic for completing a request ...

    They are typically part of the same logic flow. Having to break it up
    into separate callback pieces can make it harder to appreciate the
    continuity, making the code harder to maintain. It can also require
    more code.

    Have a look at the two versions of the “rocket launch” example I
    posted here <https://github.com/HamPUG/meetings/tree/master/2017/2017-05-08/ldo-generators-coroutines-asyncio>:
    not only is the callback version about 30% bigger, it is also harder
    to understand.

    Sure, it’s a toy example (44 versus 57 lines). But I think it does
    illustrate the issues involved.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Thu May 2 05:39:15 2024
    XPost: comp.lang.c++

    On Wed, 1 May 2024 22:20:47 -0700, Chris M. Thomasson wrote:

    On 5/1/2024 1:34 PM, Lawrence D'Oliveiro wrote:

    Remember, we’re talking about maximizing I/O throughput here, so CPU is
    not the bottleneck.

    It can be if your thread synchronization scheme is sub par.

    Another reason to avoid threads. So long as your async tasks have an await
    call somewhere in their main loops, that should be sufficient to avoid
    most bottlenecks.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Ross Finlayson on Thu May 2 06:48:42 2024
    XPost: comp.lang.c++

    On Wed, 1 May 2024 20:09:48 -0700, Ross Finlayson wrote:

    So, the idea of the re-routine, is a sort of co-routine. That is, it
    fits the definition of being a co-routine, though as with that when its asynchronous filling of the memo of its operation is unfulfilled, it
    quits by throwing an exception, then is as expected to be called again,
    when its filling of the memo is fulfilled, thus that it returns.

    The normal, non-comedy way of handling this is to have the task await
    something variously called a “future” or “promise”: when that object is marked as completed, then the task is automatically woken again to fulfil
    its purpose.
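    A sketch of that future/promise pattern (names invented here): the
    awaiting task simply suspends at `await`; marking the future completed
    wakes it again, with no exception-throwing or re-calling involved.

```python
import asyncio

async def waiter(fut):
    return await fut                 # suspends here until the future is set

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    task = asyncio.ensure_future(waiter(fut))
    await asyncio.sleep(0)           # let the waiter start and suspend
    fut.set_result("fulfilled")      # completion wakes the task automatically
    return await task

outcome = asyncio.run(main())
print(outcome)
```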

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Bonita Montero on Thu May 2 15:53:16 2024
    XPost: comp.lang.c++

    On 02/05/2024 05:45, Bonita Montero wrote:
    Am 01.05.2024 um 22:34 schrieb Lawrence D'Oliveiro:

    No, it is actually mostly C, with Rust making inroads these days.

    C++ superseded C in that domain a long time ago, judging by job offers.
    Rust is a language a lot of people talk about and no one actually uses.

    And you don’t have to be doing “system-level” programming to be needing
    event-driven paradigms.

    If you make asynchronous I/O you need performance, and this isn't
    possible with Python.

    No they aren’t. You cannot easily define a C++ function that returns a
    general function or class as a result, just for example.

    function<void ()> fn();


    That is a /long/ way from treating functions as first-class objects.
    But it is certainly a step in that direction, as are lambdas.

    You also claimed that classes are first-class objects in C++. Have you anything to back that up?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Thu May 2 23:15:16 2024
    XPost: comp.lang.c++

    On Thu, 2 May 2024 13:28:15 -0700, Chris M. Thomasson wrote:

    On 5/1/2024 10:39 PM, Lawrence D'Oliveiro wrote:

    On Wed, 1 May 2024 22:20:47 -0700, Chris M. Thomasson wrote:

    On 5/1/2024 1:34 PM, Lawrence D'Oliveiro wrote:

    Remember, we’re talking about maximizing I/O throughput here, so CPU
    is not the bottleneck.

    It can be if your thread synchronization scheme is sub par.

    Another reason to avoid threads.

    Why? Believe it or not, there are ways to create _highly_ scalable
    thread synchronization schemes.

    I’m sure there are. But none of that is relevant when the CPU isn’t the bottleneck anyway.

    So long as your async tasks have an await call somewhere in their main
    loops, that should be sufficient to avoid most bottlenecks.

    async tasks are using threads... No?

    No. They are built on coroutines. Specifically, the “stackless” variety.

    <https://gitlab.com/ldo/python_topics_notebooks/-/blob/master/Generators%20&%20Coroutines.ipynb?ref_type=heads>
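    The "stackless" idea can be sketched in a few lines (a toy generator,
    not from the linked notebook): Python's async tasks grew out of
    generators, which suspend at each yield and are resumed from outside,
    the same mechanism an event loop exercises at every await.

```python
def countdown(n):
    while n > 0:
        yield n          # suspend; the caller decides when to resume
        n -= 1

g = countdown(3)
values = [next(g), next(g), next(g)]   # resumes the same frame each time
print(values)
```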

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Thu May 2 23:21:57 2024
    XPost: comp.lang.c++

    On Thu, 2 May 2024 05:45:29 +0200, Bonita Montero wrote:

    Am 01.05.2024 um 22:34 schrieb Lawrence D'Oliveiro:

    No, it is actually mostly C, with Rust making inroads these days.

    C++ superseded C in that domain a long time ago, judging by job offers.
    Rust is a language a lot of people talk about and no one actually uses.

    Fun fact: the Linux kernel (the world’s most successful software project), originally entirely C-based, is now incorporating Rust-based development.
    It never accepted C++.

    And you don’t have to be doing “system-level” programming to be needing
    event-driven paradigms.

    If you make asynchronous I/O you need performance, and this isn't
    possible with Python.

    I/O performance certainly is possible with Python, and it has the
    high-performance production-quality frameworks to prove it.

    No they aren’t. You cannot easily define a C++ function that returns a
    general function or class as a result, just for example.

    function<void ()> fn();

    Try it with something that has actual lexically-bound local variables in
    it:

    def factory(count : int) :

        def counter() :
            nonlocal count
            count += 1
            return count
        #end counter

    #begin
        return counter
    #end factory

    f1 = factory(3)
    f2 = factory(30)
    print(f1())
    print(f2())
    print(f1())
    print(f2())

    output:

    4
    31
    5
    32

    Remember, we’re talking about maximizing I/O throughput here, so CPU is
    not the bottleneck.

    With io_uring you can easily handle millions of I/Os with a single
    thread, but not with Python.

    Debunked above.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Thu May 2 23:24:26 2024
    XPost: comp.lang.c++

    On Thu, 2 May 2024 17:10:47 +0200, Bonita Montero wrote:

    Am 02.05.2024 um 15:53 schrieb David Brown:

    You also claimed that classes are first-class objects in C++.

    I never said that ...

    No, you just ignored the point.

    ... and having something like class Class in Java is beyond
    C++'s performance constraints.

    Java’s “Class” object is a pretty pitiful, awkward and crippled attempt at
    run-time manipulation of classes. Still falls far short of Python’s full treatment of classes as first-class objects.

    That also extends to the fact that Python classes, being objects, must be instances of classes themselves. The class that a class is an instance of
    is called its “metaclass”.
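    A minimal sketch of that last point (the class names are invented):
    a Python class is itself an object, an instance of its metaclass,
    which is `type` by default.

```python
class Meta(type):
    def __new__(mcls, name, bases, namespace):
        namespace["tagged"] = True              # the metaclass edits the class
        return super().__new__(mcls, name, bases, namespace)

class Widget(metaclass=Meta):
    pass

# Widget is an instance of Meta, and Meta rewrote it at creation time.
print(isinstance(Widget, Meta), type(Widget) is Meta, Widget.tagged)
```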

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Thu May 2 23:16:23 2024
    XPost: comp.lang.c++

    On Thu, 2 May 2024 07:53:21 +0200, Bonita Montero wrote:

    Am 02.05.2024 um 07:39 schrieb Lawrence D'Oliveiro:

    Another reason to avoid threads. So long as your async tasks have an
    await
    call somewhere in their main loops, that should be sufficient to avoid
    most bottlenecks.

    If you have a stream of individual I/Os and the processing of the I/Os
    takes more time than the time between the I/Os you need threads.

    That makes the CPU the bottleneck. Which is not the case we’re discussing here.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Kaz Kylheku@21:1/5 to Chris M. Thomasson on Fri May 3 00:15:52 2024
    XPost: comp.lang.c++

    On 2024-05-02, Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    The CPU can become a bottleneck.

    Unfortunately, not in a way that you could use for playing slide
    guitar, let alone actually drinking beer through it.

    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Fri May 3 02:25:52 2024
    XPost: comp.lang.c++

    On Thu, 2 May 2024 16:58:54 -0700, Chris M. Thomasson wrote:

    The CPU can become a bottleneck.

    Then that becomes an entirely different situation from what we’re
    discussing.

    So, there is no way to take advantage of multiple threads on Python?

    There is, but the current scheme has limitations in CPU-intensive
    situations. They’re working on a fix, without turning it into a memory hog like Java.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Bonita Montero on Fri May 3 09:38:46 2024
    XPost: comp.lang.c++

    On 02/05/2024 17:10, Bonita Montero wrote:
    Am 02.05.2024 um 15:53 schrieb David Brown:

    That is a /long/ way from treating functions as first-class objects.

    A C-style function is also a function-object in C++ because it has
    a calling operator.

    No it is not. C-style functions (or C++ functions for that matter) are
    not objects, and do not have calling operators. Built-in operators do
    not belong to a type, in the way that class operators do.


    But it is certainly a step in that direction, as are lambdas.

    Lambdas can be assigned to a function<> object to make them runtime-
    polymorphic. Otherwise they can be generic types, which are compile-time
    polymorphic - like the function object for std::sort().


    You missed the point entirely. Lambdas can be used in many ways like functions, it is possible for one function (or lambda) to return a different function, and they can be used for higher-order functions
    (functions that have functions as parameters or return types). They do
    not mean that C++ can treat functions as first-class objects, but they
    /do/ mean that you can get many of the effects you might want if C++
    functions really were first-class objects.

    You also claimed that classes are first-class objects in C++.

    I never said that and having sth. like class Class in Java is
    beyond C++'s performance constraints.

    You repeatedly replied to Lawrence's posts confirming that you believed
    they were. (Re-read your posts in this thread.) I was fairly sure you
    were making completely unsubstantiated claims, but it was always
    possible you had thought of something interesting.

    I like C++, but it is absurd and unhelpful to claim it is something that
    it is not. Neither functions nor classes are first-class objects in
    C++. C++ is not, by any stretch of the imagination, a "fully featured functional programming language". It supports some functional
    programming techniques, which is nice, but that does not make it a
    functional programming language.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Chris M. Thomasson on Fri May 3 10:34:11 2024
    XPost: comp.lang.c++

    On 03/05/2024 01:58, Chris M. Thomasson wrote:
    On 5/2/2024 4:15 PM, Lawrence D'Oliveiro wrote:
    On Thu, 2 May 2024 13:28:15 -0700, Chris M. Thomasson wrote:

    On 5/1/2024 10:39 PM, Lawrence D'Oliveiro wrote:

    On Wed, 1 May 2024 22:20:47 -0700, Chris M. Thomasson wrote:

    On 5/1/2024 1:34 PM, Lawrence D'Oliveiro wrote:

    Remember, we’re talking about maximizing I/O throughput here, so CPU
    is not the bottleneck.

    It can be if your thread synchronization scheme is sub par.

    Another reason to avoid threads.

    Why? Believe it or not, there are ways to create _highly_ scalable
    thread synchronization schemes.

    I’m sure there are. But none of that is relevant when the CPU isn’t the
    bottleneck anyway.

    The CPU can become a bottleneck. Depends on how the programmer
    implements things.


    So long as your async tasks have an await call somewhere in their main
    loops, that should be sufficient to avoid most bottlenecks.

    async tasks are using threads... No?

    No. They are built on coroutines. Specifically, the “stackless” variety.
    <https://gitlab.com/ldo/python_topics_notebooks/-/blob/master/Generators%20&%20Coroutines.ipynb?ref_type=heads>

    So, there is no way to take advantage of multiple threads on Python?
    Heck, even JavaScript has WebWorkers... ;^)

    Python supports multi-threading. It uses a global lock (the "GIL") in
    the Python interpreter - thus only one thread can be running Python code
    at a time. However, if you are doing anything serious with Python, much
    of the time will be spent either blocked (waiting for network, IO, etc.)
    or using compiled or external code (using your favourite gui toolkit,
    doing maths with numpy, etc.). The GIL is released while executing such
    code.
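    That point can be sketched with the standard threading module; the 50 ms
    sleep below stands in for a blocking I/O call, and the timing figures are
    illustrative assumptions, not numbers from the post:

    ```python
    import threading
    import time

    def io_bound_task(results, i):
        # time.sleep releases the GIL, just as a blocking socket read would
        time.sleep(0.05)
        results[i] = i

    results = {}
    threads = [threading.Thread(target=io_bound_task, args=(results, i))
               for i in range(10)]
    t0 = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.monotonic() - t0
    # ten 50 ms "I/O waits" overlap, so the total is far less than 0.5 s
    print(sorted(results), round(elapsed, 2))
    ```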

    Thus if you are using Python for cpu-intensive work (and doing so
    sensibly), you have full multi-threading. If you are using it for
    IO-intensive work, you have full multi-threading. It's not going to be
    as efficient as well-written compiled code, even with JIT and pypy, but
    in practice it gets pretty close while being very convenient and
    developer friendly.

    If you really need parallel running of Python code, or better separation between tasks, Python has a multi-processing module that makes it simple
    to control and pass data between separate Python processes, each with
    their own GIL.
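    A minimal sketch of that module in use (the square worker here is a made-up
    example):

    ```python
    from multiprocessing import Pool

    def square(n: int) -> int:
        # runs in a separate worker process, each with its own interpreter and GIL
        return n * n

    if __name__ == "__main__":
        with Pool(processes=4) as pool:
            # map distributes the inputs across the worker processes
            print(pool.map(square, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
    ```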

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Bonita Montero on Fri May 3 11:18:06 2024
    XPost: comp.lang.c++

    On 03/05/2024 09:58, Bonita Montero wrote:
    Am 03.05.2024 um 09:38 schrieb David Brown:

    No it is not.  C-style functions (or C++ functions for that matter)
    are not objects, and do not have calling operators.  Built-in
    operators do not belong to a type, in the way that class operators do.

    You can assign a C-style function pointer to an auto function-object.

    A C-style function /pointer/ is an object. A C-style /function/ is not.
    Do you understand the difference?

    That these function objects all have the same type doesn't matter.

    You missed the point entirely.  Lambdas can be used in many ways like
    functions, and it is possible for one function (or lambda) to return a
    different function, and can be used for higher-order functions
    (functions that have functions as parameters or return types).  They
    do not mean that C++ can treat functions as first-class objects, but
    they /do/ mean that you can get many of the effects you might want if
    C++ functions really were first-class objects.

    C-style functions and lambda-types are generically interchangeable.


    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Bonita Montero on Fri May 3 18:01:02 2024
    XPost: comp.lang.c++

    On Fri, 3 May 2024 13:23:13 +0200
    Bonita Montero <Bonita.Montero@gmail.com> wrote:

    Am 03.05.2024 um 11:18 schrieb David Brown:
    On 03/05/2024 09:58, Bonita Montero wrote:
    Am 03.05.2024 um 09:38 schrieb David Brown:

    No it is not. C-style functions (or C++ functions for that
    matter) are not objects, and do not have calling operators.
    Built-in operators do not belong to a type, in the way that class
    operators do.

    You can assign a C-style function pointer to an auto
    function-object.

    A C-style function /pointer/ is an object. A C-style /function/ is
    not. Do you understand the difference?

    Practically there isn't a difference.


    For C, I agree, mostly because C has no nested functions.
    For C++ (after C++11) I am less sure, because of lambdas with
    non-empty captures.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to David Brown on Fri May 3 18:05:33 2024
    XPost: comp.lang.c++

    On Fri, 3 May 2024 10:34:11 +0200
    David Brown <david.brown@hesbynett.no> wrote:

    On 03/05/2024 01:58, Chris M. Thomasson wrote:
    On 5/2/2024 4:15 PM, Lawrence D'Oliveiro wrote:
    On Thu, 2 May 2024 13:28:15 -0700, Chris M. Thomasson wrote:

    On 5/1/2024 10:39 PM, Lawrence D'Oliveiro wrote:

    On Wed, 1 May 2024 22:20:47 -0700, Chris M. Thomasson wrote:

    On 5/1/2024 1:34 PM, Lawrence D'Oliveiro wrote:

    Remember, we’re talking about maximizing I/O throughput here,
    so CPU is not the bottleneck.

    It can be if your thread synchronization scheme is sub par.

    Another reason to avoid threads.

    Why? Believe it or not, there are ways to create _highly_ scalable
    thread synchronization schemes.

    I’m sure there are. But none of that is relevant when the CPU
    isn’t the bottleneck anyway.

    The CPU can become a bottleneck. Depends on how the programmer
    implements things.


    So long as your async tasks have an await call somewhere in
    their main loops, that should be sufficient to avoid most
    bottlenecks.

    async tasks are using threads... No?

    No. They are built on coroutines. Specifically, the “stackless”
    variety.

    <https://gitlab.com/ldo/python_topics_notebooks/-/blob/master/Generators%20&%20Coroutines.ipynb?ref_type=heads>


    So, there is no way to take advantage of multiple threads on
    Python? Heck, even JavaScript has WebWorkers... ;^)

    Python supports multi-threading. It uses a global lock (the "GIL")
    in the Python interpreter - thus only one thread can be running
    Python code at a time. However, if you are doing anything serious
    with Python, much of the time will be spent either blocked (waiting
    for network, IO, etc.) or using compiled or external code (using your favourite gui toolkit, doing maths with numpy, etc.). The GIL is
    released while executing such code.

    Thus if you are using Python for cpu-intensive work (and doing so
    sensibly), you have full multi-threading. If you are using it for IO-intensive work, you have full multi-threading. It's not going to
    be as efficient as well-written compiled code, even with JIT and
    pypy, but in practice it gets pretty close while being very
    convenient and developer friendly.

    If you really need parallel running of Python code, or better
    separation between tasks, Python has a multi-processing module that
    makes it simple to control and pass data between separate Python
    processes, each with their own GIL.




    A typical scenario is that you started your Python program while
    thinking that it wouldn't be CPU-intensive. And then it grew and became CPU-intensive.
    That's actually a good case, because it means that your program is used
    and is doing something worthwhile.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Bonita Montero on Fri May 3 18:47:54 2024
    XPost: comp.lang.c++

    On Fri, 3 May 2024 17:20:00 +0200
    Bonita Montero <Bonita.Montero@gmail.com> wrote:

    Am 03.05.2024 um 17:05 schrieb Michael S:

    A typical scenario is that you started your Python program while
    thinking that it wouldn't be CPU-intensive. And then it grew and
    became CPU-intensive.
    That's actually a good case, because it means that your program is
    used and is doing something worthwhile.

    I don't think it makes a big difference if Python has a GIL or
    not since it is interpreted and extremely slow with that anyway.


    64 times faster than slow wouldn't be fast, but could be acceptable.
    And 64 HW threads nowadays is almost low-end server, I have one at
    work, just in case.
    Also, I don't see why in the future Python could not be JITted.
    Javascript was also considered slow 15-20 years ago, now it's pretty
    fast.
    But then, my knowledge of Python is very shallow. Possibly it's not
    JITted yet because of fundamental reasons rather than due to lack of
    demand.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Fri May 3 22:22:16 2024
    XPost: comp.lang.c++

    On Thu, 2 May 2024 05:46:59 +0200, Bonita Montero wrote:

    Am 01.05.2024 um 22:31 schrieb Lawrence D'Oliveiro:

    Callbacks can be a clunky way of event handling, since they force you
    to break up your logic sequence into discontinguous pieces.

    Callbacks are the most convenient use for asynchronous I/O.

    Wonder why the C++ folks are proposing to add async/await, then ...

    C++ is already about 5× the complexity of Python, yet still nowhere near
    as expressive.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Fri May 3 22:19:27 2024
    XPost: comp.lang.c++

    On Fri, 3 May 2024 18:47:54 +0300, Michael S wrote:

    Also, I don't see why in the future Python could not be JITted.

    It might require more use of static type annotations. Which some are
    adopting in their Python code.
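    For illustration, annotations like these are recorded at runtime, where a JIT
    or type checker could inspect them (a hypothetical example, not code from the
    thread):

    ```python
    def dot(xs: list[float], ys: list[float]) -> float:
        # The annotations don't change runtime behaviour, but they are
        # stored in dot.__annotations__ for tools to inspect.
        total = 0.0
        for x, y in zip(xs, ys):
            total += x * y
        return total

    print(dot([1.0, 2.0], [3.0, 4.0]))    # 11.0
    print(dot.__annotations__["return"])  # <class 'float'>
    ```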

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Fri May 3 22:20:59 2024
    XPost: comp.lang.c++

    On Fri, 3 May 2024 18:01:02 +0300, Michael S wrote:

    For C, I agree, mostly because C has no nested functions.

    GCC implements nested functions in the C compiler. Though oddly, not in C++.

    I posted a C example using nested functions in the “Recursion, Yo” thread.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From bart@21:1/5 to Michael S on Sat May 4 00:27:53 2024
    XPost: comp.lang.c++

    On 03/05/2024 16:47, Michael S wrote:
    On Fri, 3 May 2024 17:20:00 +0200
    Bonita Montero <Bonita.Montero@gmail.com> wrote:

    Am 03.05.2024 um 17:05 schrieb Michael S:

    A typical scenario is that you started your Python program while
    thinking that it wouldn't be CPU-intensive. And then it grew and
    became CPU-intensive.
    That's actually a good case, because it means that your program is
    used and is doing something worthwhile.

    I don't think it makes a big difference if Python has a GIL or
    not since it is interpreted and extremely slow with that anyway.


    64 times faster than slow wouldn't be fast, but could be acceptable.
    And 64 HW threads nowadays is almost low-end server, I have one at
    work, just in case.
    Also, I don't see why in the future Python could not be JITted.
    Javascript was also considered slow 15-20 years ago, now it's pretty
    fast.
    But then, my knowledge of Python is very shallow. Possibly it's not
    JITted yet because of fundamental reasons rather than due to lack of
    demand.


    PyPy has been around for many years.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Sat May 4 02:30:38 2024
    XPost: comp.lang.c++

    On Fri, 3 May 2024 09:00:30 +0200, Bonita Montero wrote:

    Am 03.05.2024 um 01:16 schrieb Lawrence D'Oliveiro:

    On Thu, 2 May 2024 07:53:21 +0200, Bonita Montero wrote:

    If you have a stream of individual I/Os and the processing of the I/Os
    takes more time than the time between the I/Os you need threads.

    That makes the CPU the bottleneck. Which is not the case we’re
    discussing here.

    No, the processing between the I/Os can mostly depend on other I/Os,
    which is the standard case for server applications.

    In that situation, multithreading isn’t going to speed things up.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Sat May 4 02:35:40 2024
    XPost: comp.lang.c++

    On Thu, 2 May 2024 13:22:39 +0200, Bonita Montero wrote:

    Am 02.05.2024 um 08:48 schrieb Lawrence D'Oliveiro:

    The normal, non-comedy way of handling this is to have the task await
    something variously called a “future” or “promise”: when that object is
    marked as completed, then the task is automatically woken again to
    fulfil its purpose.

    The problem with a future and a promise is that in most languages you
    can't wait for multiple futures at once to have out of order completion.

    Of course you can. Any decent event-loop framework will provide this capability. Python’s asyncio does. I use it all the time.
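    For instance, asyncio's as_completed yields futures in completion order, not
    submission order (the delays below are illustrative):

    ```python
    import asyncio

    async def fetch(name: str, delay: float) -> str:
        # stand-in for a real I/O wait (socket read, HTTP request, ...)
        await asyncio.sleep(delay)
        return name

    async def main() -> list:
        tasks = [asyncio.create_task(fetch(n, d))
                 for n, d in [("slow", 0.03), ("fast", 0.01), ("mid", 0.02)]]
        # results arrive out of order, as each future completes
        return [await fut for fut in asyncio.as_completed(tasks)]

    results = asyncio.run(main())
    print(results)  # ['fast', 'mid', 'slow']
    ```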

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Sat May 4 02:33:59 2024
    XPost: comp.lang.c++

    On Fri, 3 May 2024 08:45:58 +0200, Bonita Montero wrote:

    Am 03.05.2024 um 01:21 schrieb Lawrence D'Oliveiro:

    I/O performance certainly is possible with Python, and it has the high-
    performance production-quality frameworks to prove it.

    I thought about high performance code with >= 1e5 IOs/s.
    That's not possible with Python.

    Sure it is. Try the “stress_test” script I wrote here <https://gitlab.com/ldo/inotipy_examples>. It can easily generate more I/O events than the Linux kernel can cope with, on whatever machine you’re on.

    Try it with something that has actual lexically-bound local variables
    in it:

    def factory(count : int) :

        def counter() :
            nonlocal count
            count += 1
            return count
        #end counter

    #begin
        return counter
    #end factory

    f1 = factory(3)
    f2 = factory(30)
    print(f1())
    print(f2())
    print(f1())
    print(f2())

    output:

    4
    31
    5
    32

    That should be similar:

    #include <iostream>
    #include <functional>

    using namespace std;

    function<int ()> factory()
    {
        return []
        {
            static int count = 0;
            return ++count;
        };
    }

    int main()
    {
        auto
            f1 = factory(),
            f2 = factory();
        cout << f1() << endl;
        cout << f2() << endl;
        cout << f1() << endl;
        cout << f2() << endl;
    }

    Ahem, and what is the output from your C++ version?

    (Hint: I don’t think it’s correct.)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Chris M. Thomasson on Sat May 4 04:46:33 2024
    XPost: comp.lang.c++

    On Fri, 3 May 2024 20:36:58 -0700, Chris M. Thomasson wrote:

    On 5/3/2024 7:30 PM, Lawrence D'Oliveiro wrote:

    In that situation, multithreading isn’t going to speed things up.

    ummm, so what does the server do after getting an io completion...?

    Whatever it is, if it takes less time than the time to the next I/O
    completion, then it’s not going to be a bottleneck.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paavo Helde@21:1/5 to Lawrence D'Oliveiro on Sat May 4 20:41:42 2024
    XPost: comp.lang.c++

    On 04.05.2024 01:20, Lawrence D'Oliveiro wrote:
    On Fri, 3 May 2024 18:01:02 +0300, Michael S wrote:

    For C, I agree, mostly because C has no nested functions.

    GCC implements nested functions in the C compiler. Though oddly, not in C++.


    C++ already has functions nested in namespaces, namespaces nested in namespaces, functions nested in classes (static and non-static member functions), and classes nested in classes. It's already a lot of
    nesting, no need to complicate the matters more.

    In Pascal, function nesting is used for better encapsulation of data. In
    C++, the same is achieved in a cleaner and more explicit way via classes
    and member functions, so no need for this kind of nesting.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Bonita Montero on Sat May 4 22:11:02 2024
    XPost: comp.lang.c++

    On Sat, 4 May 2024 17:36:12 +0200
    Bonita Montero <Bonita.Montero@gmail.com> wrote:

    Am 03.05.2024 um 17:47 schrieb Michael S:

    I don't think it makes a big difference if Python has a GIL or
    not since it is interpreted and extremely slow with that anyway.

    64 times faster than slow wouldn't be fast, but could be acceptable.
    ...

    With a GIL the code doesn't scale.


    So, do you take back what you said just one post above?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to bart on Sat May 4 22:04:37 2024
    XPost: comp.lang.c++

    On Sat, 4 May 2024 00:27:53 +0100
    bart <bc@freeuk.com> wrote:

    On 03/05/2024 16:47, Michael S wrote:
    On Fri, 3 May 2024 17:20:00 +0200
    Bonita Montero <Bonita.Montero@gmail.com> wrote:

    Am 03.05.2024 um 17:05 schrieb Michael S:

    A typical scenario is that you started your Python program while
    thinking that it wouldn't be CPU-intensive. And then it grew and
    became CPU-intensive.
    That's actually a good case, because it means that your program is
    used and is doing something worthwhile.

    I don't think it makes a big difference if Python has a GIL or
    not since it is interpreted and extremely slow with that anyway.


    64 times faster than slow wouldn't be fast, but could be acceptable.
    And 64 HW threads nowadays is almost low-end server, I have one at
    work, just in case.
    Also, I don't see why in the future Python could not be JITted.
    Javascript was also considered slow 15-20 years ago, now it's pretty
    fast.
    But then, my knowledge of Python is very shallow. Possibly it's not
    JITted yet because of fundamental reasons rather than due to lack of demand.


    PyPy has been around for many years.

    I see.
    So, why didn't PyPy replace the interpreter as the default engine?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Sun May 5 01:40:08 2024
    XPost: comp.lang.c++

    On Sat, 4 May 2024 06:00:54 +0200, Bonita Montero wrote:

    Am 04.05.2024 um 04:33 schrieb Lawrence D'Oliveiro:

    On Fri, 3 May 2024 08:45:58 +0200, Bonita Montero wrote:

    Am 03.05.2024 um 01:21 schrieb Lawrence D'Oliveiro:

    I/O performance certainly is possible with Python, and it has the
    high-performance production-quality frameworks to prove it.

    I thought about high performance code with >= 1e5 IOs/s.
    That's not possible with Python.

    Sure it is. ....

    Absolutely not, Python is too slow for that.

    Try the code I posted for yourself. What are you afraid of? Discovering
    that you’re wrong?

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Paavo Helde on Sun May 5 01:41:49 2024
    XPost: comp.lang.c++

    On Sat, 4 May 2024 20:41:42 +0300, Paavo Helde wrote:

    On 04.05.2024 01:20, Lawrence D'Oliveiro wrote:

    On Fri, 3 May 2024 18:01:02 +0300, Michael S wrote:

    For C, I agree, mostly because C has no nested functions.

    GCC implements nested functions in the C compiler. Though oddly, not in
    C++.

    C++ already has functions nested in namespaces, namespaces nested in namespaces, functions nested in classes (static and non-static member functions), and classes nested in classes. It's already a lot of
    nesting, no need to complicate the matters more.

    In Pascal, function nesting is used for better encapsulation of data. In
    C++, the same is achieved in a cleaner and more explicit way via classes
    and member functions, so no need for this kind of nesting.

    Interesting, isn’t it? You mention all the complications of C++, and how
    it doesn’t need yet more complications tacked on top to support something
    as simple as lexical binding. Yet Pascal had lexical binding from the
    get-go, and managed it in a much simpler way.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Paavo Helde@21:1/5 to Lawrence D'Oliveiro on Sun May 5 10:38:31 2024
    XPost: comp.lang.c++

    On 05.05.2024 04:41, Lawrence D'Oliveiro wrote:
    On Sat, 4 May 2024 20:41:42 +0300, Paavo Helde wrote:

    On 04.05.2024 01:20, Lawrence D'Oliveiro wrote:

    On Fri, 3 May 2024 18:01:02 +0300, Michael S wrote:

    For C, I agree, mostly because C has no nested functions.

    GCC implements nested functions in the C compiler. Though oddly, not in
    C++.

    C++ already has functions nested in namespaces, namespaces nested in
    namespaces, functions nested in classes (static and non-static member
    functions), and classes nested in classes. It's already a lot of
    nesting, no need to complicate the matters more.

    In Pascal, function nesting is used for better encapsulation of data. In
    C++, the same is achieved in a cleaner and more explicit way via classes
    and member functions, so no need for this kind of nesting.

    Interesting, isn’t it? You mention all the complications of C++, and how
    it doesn’t need yet more complications tacked on top to support something
    as simple as lexical binding. Yet Pascal had lexical binding from the
    get-go, and managed it in a much simpler way.

    A programming language is a tool. The goal is not to have the tool which
    is simple to make. The goal is not even to have the tool which is simple
    to use. The goal is to have a tool which is adequate for its job.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Lawrence D'Oliveiro on Sun May 5 12:37:18 2024
    XPost: comp.lang.c++

    On Sun, 5 May 2024 01:41:49 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Sat, 4 May 2024 20:41:42 +0300, Paavo Helde wrote:

    On 04.05.2024 01:20, Lawrence D'Oliveiro wrote:

    On Fri, 3 May 2024 18:01:02 +0300, Michael S wrote:

    For C, I agree, mostly because C has no nested functions.

    GCC implements nested functions in the C compiler. Though oddly,
    not in C++.

    C++ already has functions nested in namespaces, namespaces nested in namespaces, functions nested in classes (static and non-static
    member functions), and classes nested in classes. It's already a
    lot of nesting, no need to complicate the matters more.

    In Pascal, function nesting is used for better encapsulation of
    data. In C++, the same is achieved in a cleaner and more explicit
    way via classes and member functions, so no need for this kind of
    nesting.

    Interesting, isn’t it? You mention all the complications of C++, and
    how it doesn’t need yet more complications tacked on top to support
    something as simple as lexical binding. Yet Pascal had lexical
    binding from the get-go, and managed it in a much simpler way.

    Pascal pays a huge price for its simplicity in terms of lack of locality.
    That is, the price is paid by users of Pascal, rather than by the
    language itself.
    There are many other languages that have both nested functions and
    locality (i.e. allow declaration of variables at block scope), but I
    find understanding of code written in these languages (my experience is
    mostly with Ada) way too hard.
    As a code reader, I very much prefer C, where nested functions are not
    allowed at all. Maybe I would like BCPL, where nested functions are
    allowed but have no access to variables defined at outer scope, even
    better than C, but I never had an opportunity to use BCPL.
    Classic C++, where you can fake BCPL-style nested functions with static
    member functions of locally-defined classes, is not bad
    functionality-wise, but as is often the case with Classic C++, it is
    ugly syntactically.
    Modern C++, with lambdas and captures, has the same or worse readability
    and comprehensibility problems that I disliked in the past with Ada.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From David Brown@21:1/5 to Michael S on Sun May 5 14:56:40 2024
    XPost: comp.lang.c++

    On 04/05/2024 21:04, Michael S wrote:
    On Sat, 4 May 2024 00:27:53 +0100
    bart <bc@freeuk.com> wrote:

    On 03/05/2024 16:47, Michael S wrote:
    On Fri, 3 May 2024 17:20:00 +0200
    Bonita Montero <Bonita.Montero@gmail.com> wrote:

    Am 03.05.2024 um 17:05 schrieb Michael S:

    A typical scenario is that you started your Python program while
    thinking that it wouldn't be CPU-intensive. And then it grew and
    became CPU-intensive.
    That's actually a good case, because it means that your program is
    used and is doing something worthwhile.

    I don't think it makes a big difference if Python has a GIL or
    not since it is interpreted and extremely slow with that anyway.


    64 times faster than slow wouldn't be fast, but could be acceptable.
    And 64 HW threads nowadays is almost low-end server, I have one at
    work, just in case.
    Also, I don't see why in the future Python could not be JITted.
    Javascript was also considered slow 15-20 years ago, now it's pretty
    fast.
    But then, my knowledge of Python is very shallow. Possibly it's not
    JITted yet because of fundamental reasons rather than due to lack of
    demand.


    PyPy has been around for many years.

    I see.
    So, why didn't PyPy replace the interpreter as the default engine?


    There are different Python implementations, useful for different
    situations. JIT is helpful if you need high performance running code
    and are happy to pay the time and memory resources (often very large)
    for doing the compilation. But often you don't need high performance,
    at least not from the Python code itself, and JIT is just a waste.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Mon May 13 00:43:38 2024
    XPost: comp.lang.c++

    On Sun, 5 May 2024 12:37:18 +0300, Michael S wrote:

    As a code reader, I very much prefer C, where nested functions are not
    allowed at all.

    The GNU C compiler allows them: see my example in the “Recursion, Yo” thread.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Michael S@21:1/5 to Lawrence D'Oliveiro on Mon May 13 15:04:50 2024
    XPost: comp.lang.c++

    On Mon, 13 May 2024 00:43:38 -0000 (UTC)
    Lawrence D'Oliveiro <ldo@nz.invalid> wrote:

    On Sun, 5 May 2024 12:37:18 +0300, Michael S wrote:

    As a code reader, I very much prefer C, where nested functions are
    not allowed at all.

    The GNU C compiler allows them: see my example in the “Recursion, Yo” thread.

    Which does not make it legal C. Or a good idea.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Lawrence D'Oliveiro on Wed May 15 08:53:36 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Sun, 5 May 2024 12:37:18 +0300, Michael S wrote:

    As a code reader, I very much prefer C, where nested functions are not
    allowed at all.

    The GNU C compiler allows them: see my example in the “Recursion, Yo” thread.

    gcc accepts all sorts of things that aren't C. That doesn't
    make them part of the C language.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Bonita Montero on Fri May 17 22:18:02 2024
    XPost: comp.lang.c++

    On Mon, 13 May 2024 16:52:36 +0200, Bonita Montero wrote:

    If you target a certain platform, relying on the compiler is the least problem.

    GCC is the closest we have to a de-facto-standard compiler, too.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Michael S on Fri May 17 22:17:34 2024
    XPost: comp.lang.c++

    On Mon, 13 May 2024 15:04:50 +0300, Michael S wrote:

    On Mon, 13 May 2024 00:43:38 -0000 (UTC) Lawrence D'Oliveiro
    <ldo@nz.invalid> wrote:

    On Sun, 5 May 2024 12:37:18 +0300, Michael S wrote:

    As a code reader, I very much prefer C, where nested functions are not
    allowed at all.

    The GNU C compiler allows them: see my example in the “Recursion, Yo”
    thread.

    Which does not make it legal C. Or good ideea.

    Worthwhile comparing, though: the one using nested functions is 99 source lines; the one doing it in strictly standard C is 128 source lines. That’s nearly 30% more code to do the same thing.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Tim Rentsch@21:1/5 to Michael S on Sat May 18 08:11:41 2024
    Michael S <already5chosen@yahoo.com> writes:

    On Fri, 3 May 2024 13:23:13 +0200
    Bonita Montero <Bonita.Montero@gmail.com> wrote:

    Am 03.05.2024 um 11:18 schrieb David Brown:

    On 03/05/2024 09:58, Bonita Montero wrote:

    Am 03.05.2024 um 09:38 schrieb David Brown:

    No it is not. C-style functions (or C++ functions for that
    matter) are not objects, and do not have calling operators.
    Built-in operators do not belong to a type, in the way that class
    operators do.

    You can assign a C-style function pointer to an auto
    function-object.

    A C-style function /pointer/ is an object. A C-style /function/ is
    not. Do you understand the difference?

    Practically there isn't a difference.

    For C, I agree, mostly because C has no nested functions.
    For C++ (after C++11) I am less sure, because of lambdas with
    non-empty captures.

    First, a pointer is not an object. In both C and C++, any pointer,
    including a function pointer, is a scalar value. A pointer value
    might be held in an object but it doesn't have to be. In most cases
    function pointers are not stored in objects but simply used to call
    the function pointed to.

    A function is a static (not meant in the sense of the 'static'
    keyword) program entity, usually arising as the result of giving a
    function definition somewhere in the text of a program.

    An expression with function type is a function designator. In most
    cases function designators are simply the identifier used in the
    definition that defines the function in question.

    When the operand of an & operator is a function, the result of
    evaluating the & expression is the address of the function
    designated, or in other words a function pointer value. In most
    other contexts a function designator is automatically converted
    to a function pointer value when evaluated, as would have happened
    if an & operator had been applied. (Applying a * operator to a
    function pointer operand gives a function designator for the
    pointed-to function.)

    In the lambda world there isn't an exact analogue to the idea of a
    function. There is the static text of a lambda expression, but that
    doesn't define a "thing" any more than the text '3+4' defines a
    "thing". Rather, when evaluated, a lambda expression produces a
    lambda value, also known as a closure. A closure is a value like
    other more familiar values: it can be used directly in a larger
    expression, like the result of evaluating '3+4' might be used in an
    expression like 'x[3+4]'; or it can be assigned thru an lvalue of
    the appropriate type to be stored in an object.

    The key point here is to distinguish between lambda expressions,
    which are simply text strings and not "things", and the results of
    evaluating lambda expressions, which are lambda values (closures).
    Lambda values are like function pointers in that they are values
    and can be dealt with as such, but they are not like function
    pointers in that they don't point to anything; they are simply
    values that can be used in conjunction with certain operators to
    effect various program actions.

    Disclaimer: I have not consulted any C++ standard to see if my
    terminology here matches how terminology is used in C++.

  • From Tim Rentsch@21:1/5 to Lawrence D'Oliveiro on Tue May 21 21:27:16 2024
    Lawrence D'Oliveiro <ldo@nz.invalid> writes:

    On Mon, 13 May 2024 16:52:36 +0200, Bonita Montero wrote:

    If you target a certain platform, relying on the compiler is the least
    problem.

    GCC is the closest we have to a de-facto-standard compiler, too.

    Perhaps true but not in its default mode. gcc -std=c99 -pedantic
    is very close to being standard C99, and similarly for -std=c90
    and -std=c11 (in both cases with -pedantic). But gcc by itself,
    without any options and especially without -pedantic, is nowhere
    close to being a standard C compiler, de facto or otherwise.

  • From Tim Rentsch@21:1/5 to Keith Thompson on Thu May 23 08:54:35 2024
    Keith Thompson <Keith.S.Thompson+u@gmail.com> writes:

    Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
    [...]

    First, a pointer is not an object. In both C and C++, any pointer,
    including a function pointer, is a scalar value. A pointer value
    might be held in an object but it doesn't have to be. In most cases
    function pointers are not stored in objects but simply used to call
    the function pointed to.

    [...]

    Certainly a pointer value is not an object. Certainly a pointer
    object *is* an object. It's not uncommon to informally refer to a
    pointer object as "a pointer". I presume you would consider such
    usage to be incorrect, and I don't disagree, but it is fairly
    common.

    Good usage will be precise and logically consistent, adhere to the
    rules of normal English usage, and align with how terminology and
    phrasing are used in the C standard. On the flip side, it is best
    to try to avoid phrasing that is haphazard, careless, confusing,
    or sloppy.

    I don't remember ever seeing the phrase "pointer object" used to
    mean a pointer value, either informally or otherwise. The C
    standard does use the phrase "pointer object" in some places, but
    it means something quite different from a pointer, namely, an
    object that has pointer type (and not a pointer itself). A local
    declaration such as

    int *p;

    introduces the identifier 'p' as (a way to designate) a pointer
    object, but there is no pointer value anywhere in sight.

    I often find it useful to avoid referring to "pointers", and
    instead refer to "pointer types", "pointer values", "pointer
    objects", and so on (likewise for arrays).

    I have the impression that you are someone who is unusually averse
    to ambiguity. I appreciate that your suggestions are given with
    the idea that they will help with clarity of communication.
    Unfortunately they don't always help, and sometimes make things
    worse rather than better. Your comment here is a case in point.
    The C standard uses the word pointer (or pointers) in a little
    over 900 places. In most of those, pointer is used as a simple
    noun; whether in each case it refers to a type or a run-time
    value is clear from context. Notably, the C standard uses the
    phrase "pointer value" in just a few places (I see five in n1570).
    It seems clear that the Standard's use of this phrase is meant
    to connote something different than just "pointer" by itself, and
    also that this (rather rare) usage wasn't chosen by accident. So
    the rule you describe muddies the water rather than helping,
    because it's not consistent with usage followed in the C standard.

    The C standard does not, as far as I can tell, provide a
    definition for the standalone term "pointer". (I could have
    missed something; I checked section 3, "Terms, definitions, and
    symbols", and the index.) But the standard does, in several
    places, use the term "pointer" to refer to a pointer value. I
    don't know whether it's consistent.

    Yes, AFAICT the C standard doesn't define "pointer" as a standalone
    noun. But it is clear from context that "pointer" by itself means
    basically the same thing as an address, as can be seen from the
    description of unary &, in 6.5.3.2 p3, or in the second sentence
    of the second paragraph of the footnote to 6.3.2.1 p1.

    The C standard does, however, define the word "object" to mean
    a region of storage in the execution environment. Whatever it
    is that a pointer might be, it's clear that it isn't a region
    of storage in the execution environment. The idea that a pointer
    object might be a pointer, rather than an object, is nonsensical
    on its face, and deserves to be called out as such.
