paavo512 <paavo@osa.pri.ee> wrote or quoted:
|Anyway, multithreading performance is a non-issue for Python so far as
|the Python interpreter runs in a single-threaded regime anyway, under a
|global GIL lock. They are planning to get rid of GIL, but this work is
|still in development AFAIK. I'm sure it will take years to stabilize the
|whole Python zoo without GIL.
The GIL only prevents multiple Python statements from being
interpreted simultaneously; while you're waiting on input (like
sockets), it is released, so those waits can be distributed across
multiple cores.
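A minimal sketch of that effect, using only the standard library (the
URL is a placeholder): while each thread blocks on the network, the
GIL is released, so the downloads overlap even though only one thread
at a time executes Python bytecode.

import threading
import urllib.request

def fetch(url):
    # The GIL is dropped while this thread blocks on the socket,
    # so the other threads' downloads proceed concurrently.
    with urllib.request.urlopen(url) as response:
        print(url, len(response.read()))

urls = ["https://example.com/"] * 4   # placeholder URLs
threads = [threading.Thread(target=fetch, args=(u,)) for u in urls]
for t in threads:
    t.start()
for t in threads:
    t.join()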
With asyncio, however, you can easily arrange for the application
to "wait in parallel" on thousands of sockets in a single thread,
and there are fewer opportunities for errors than with
multithreading.
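For illustration, a minimal asyncio sketch (address and port are
placeholders): a single thread multiplexes every connection, one
coroutine per client.

import asyncio

async def handle(reader, writer):
    # While this coroutine awaits data, the event loop services
    # all the other connections in the same thread.
    data = await reader.readline()
    writer.write(data)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 8888)
    async with server:
        await server.serve_forever()

asyncio.run(main())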
Additionally, there are libraries like numpy that use true
multithreading internally to distribute computational tasks
across multiple cores. By using such libraries, you can take
advantage of that. (Not to mention the AI libraries that have their
work done in highly parallel fashion by graphics cards.)
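For example, a minimal numpy sketch; whether the multiplication
actually spreads across several cores depends on the BLAS library
numpy was built against.

import numpy as np

# numpy hands this multiplication to its BLAS backend, which may
# itself be multithreaded across cores, the GIL notwithstanding.
a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)
c = a @ b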
If you want real threads, you could sometimes resort to Cython.
Other languages like JavaScript seem to have an advantage there
because they have no GIL, but JavaScript, for example, gets away
with that only because it always runs in a single thread overall.
And in the languages that do have threads without a GIL, you quickly
realize that writing correct non-trivial programs with parallel
processing is error-prone.
But you need multithreading for maximum throughput, since you often have to process data while other data is already arriving.
With asyncio, however, you can easily arrange for the application
to "wait in parallel" on thousands of sockets in a single thread,
and there are fewer opportunities for errors than with
multithreading.
Just for waiting on thousands of sockets I believe a single select()
call would be sufficient ...
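Something like the following sketch with Python's standard selectors
module (which picks epoll(7) on Linux); the address is a placeholder.

import selectors
import socket

sel = selectors.DefaultSelector()

listener = socket.socket()
listener.bind(("127.0.0.1", 8888))     # placeholder address
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ)

while True:
    # One call waits on every registered socket at once.
    for key, _events in sel.select():
        if key.fileobj is listener:
            conn, _addr = listener.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            data = key.fileobj.recv(4096)
            if data:
                key.fileobj.sendall(data)  # echo back
            else:
                sel.unregister(key.fileobj)
                key.fileobj.close()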
On 4/29/2024 1:29 PM, Lawrence D'Oliveiro wrote:
On Mon, 29 Apr 2024 19:13:09 +0300, Paavo Helde wrote:
Just for waiting on thousands of sockets I believe a single select()
call would be sufficient ...
We use poll(2) or epoll(2) nowadays. select(2) is antiquated.
AIO on Linux, IOCP on Windows.
On 4/29/2024 3:41 PM, Lawrence D'Oliveiro wrote:
On Mon, 29 Apr 2024 13:33:22 -0700, Chris M. Thomasson wrote:
On 4/29/2024 1:29 PM, Lawrence D'Oliveiro wrote:
On Mon, 29 Apr 2024 19:13:09 +0300, Paavo Helde wrote:
Just for waiting on thousands of sockets I believe a single select()
call would be sufficient ...
We use poll(2) or epoll(2) nowadays. select(2) is antiquated.
AIO on Linux, IOCP on Windows.
AIO is for block I/O. Try io_uring instead.
AFAICT, AIO is analogous to IOCP.
Am 29.04.2024 um 22:31 schrieb Lawrence D'Oliveiro:
On Mon, 29 Apr 2024 18:18:57 +0200, Bonita Montero wrote:
But you need multithreading for maximum throughput, since you often
have to process data while other data is already arriving.
In a lot of applications, the bottleneck is the network I/O, or a GUI
waiting for the next user event, that kind of thing. In this situation,
multithreading is more trouble than it’s worth. This is why coroutines
(in the form of async/await) have made a comeback over the last decade
or so.
Having a single thread and using state machines is more effort.
Am 30.04.2024 um 06:09 schrieb Lawrence D'Oliveiro:
On Tue, 30 Apr 2024 05:58:31 +0200, Bonita Montero wrote:
Having a single thread and using state machines is more effort.
It would indeed. That’s why coroutines (async/await) are so handy.
Using a thread is even more handy.
Am 30.04.2024 um 08:42 schrieb Lawrence D'Oliveiro:
On Tue, 30 Apr 2024 07:59:06 +0200, Bonita Montero wrote:
Am 30.04.2024 um 06:09 schrieb Lawrence D'Oliveiro:
On Tue, 30 Apr 2024 05:58:31 +0200, Bonita Montero wrote:
Having a single thread and using state machines is more effort.
It would indeed. That’s why coroutines (async/await) are so handy.
Using a thread is even more handy.
Do you know what a “heisenbug” is?
[No]
On 29.04.2024 18:19, Stefan Ram wrote:
paavo512 <paavo@osa.pri.ee> wrote or quoted:
|Anyway, multithreading performance is a non-issue for Python so far as
|the Python interpreter runs in a single-threaded regime anyway, under a
|global GIL lock. They are planning to get rid of GIL, but this work is
|still in development AFAIK. I'm sure it will take years to stabilize the
|whole Python zoo without GIL.
The GIL only prevents multiple Python statements from being
interpreted simultaneously; while you're waiting on input (like
sockets), it is released, so those waits can be distributed across
multiple cores.
With asyncio, however, you can easily arrange for the application
to "wait in parallel" on thousands of sockets in a single thread,
and there are fewer opportunities for errors than with
multithreading.
In C++, async I/O is provided e.g. by the asio library.
Am 30.04.2024 um 11:09 schrieb Lawrence D'Oliveiro:
On Tue, 30 Apr 2024 09:37:18 +0200, Bonita Montero wrote:
Am 30.04.2024 um 08:42 schrieb Lawrence D'Oliveiro:
On Tue, 30 Apr 2024 07:59:06 +0200, Bonita Montero wrote:
Am 30.04.2024 um 06:09 schrieb Lawrence D'Oliveiro:
On Tue, 30 Apr 2024 05:58:31 +0200, Bonita Montero wrote:
Having a single thread and using state machines is more effort.
It would indeed. That’s why coroutines (async/await) are so handy.
Using a thread is even more handy.
Do you know what a “heisenbug” is?
[No]
Do you know what a “race condition” is?
On Tue, 30 Apr 2024 07:59:06 +0200, Bonita Montero wrote:
Am 30.04.2024 um 06:09 schrieb Lawrence D'Oliveiro:
On Tue, 30 Apr 2024 05:58:31 +0200, Bonita Montero wrote:
Having a single thread and using state machines is more effort.
It would indeed. That’s why coroutines (async/await) are so handy.
Using a thread is even more handy.
Do you know what a “heisenbug” is?
Boost.ASIO does all that for you with a convenient interface.
If enabled it even uses io_uring or the Windows counterpart (IOCP).
Am 29.04.2024 um 22:29 schrieb Lawrence D'Oliveiro:
We use poll(2) or epoll(2) nowadays. select(2) is antiquated.
Use Boost.ASIO.
You don't need threads to get heisenbugs.
Am 01.05.2024 um 09:09 schrieb Lawrence D'Oliveiro:
On Wed, 1 May 2024 06:53:21 +0200, Bonita Montero wrote:
Am 29.04.2024 um 22:29 schrieb Lawrence D'Oliveiro:
We use poll(2) or epoll(2) nowadays. select(2) is antiquated.
Use Boost.ASIO.
And what does that use?
Boost.ASIO can even use io_uring if it is available.
Am 01.05.2024 um 10:53 schrieb Lawrence D'Oliveiro:
On Wed, 1 May 2024 10:13:03 +0200, Bonita Montero wrote:
Am 01.05.2024 um 09:10 schrieb Lawrence D'Oliveiro:
On Wed, 1 May 2024 06:54:13 +0200, Bonita Montero wrote:
Boost.ASIO does that all for you with a convenient interface.
If enabled it even uses io_uring or the Windows' pendant.
How many languages does it support?
Just C++ ...
Not much use, then.
System-level programming is mostly done in C++.
But functions and classes are not first-class objects in C++, ...
Of course, since C++11.
You cannot define function factories and class factories, like you can
in Python.
Python is not for me since it is extremely slow.
On Wed, 1 May 2024 11:00:04 +0200, Bonita Montero wrote:
Am 01.05.2024 um 10:53 schrieb Lawrence D'Oliveiro:
No async/await? Oh, they haven’t added that to C++ yet.
No, Boost.ASIO is event driven with asynchronous callbacks in a foreign
thread's context.
Callbacks can be a clunky way of event handling, since they force you to
break up your logic sequence into discontiguous pieces. This is why
coroutines have become popular, since they keep the logic flow together.
Am 01.05.2024 um 10:53 schrieb Lawrence D'Oliveiro:
No async/await? Oh, they haven’t added that to C++ yet.
No, Boost.ASIO is event driven with asynchronous callbacks in a foreign thread's context.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
On Wed, 1 May 2024 11:00:04 +0200, Bonita Montero wrote:
Am 01.05.2024 um 10:53 schrieb Lawrence D'Oliveiro:
No async/await? Oh, they haven’t added that to C++ yet.
No, Boost.ASIO is event driven with asynchronous callbacks in a
foreign thread's context.
Callbacks can be a clunky way of event handling, since they force
you to break up your logic sequence into discontiguous pieces. This
is why coroutines have become popular, since they keep the logic
flow together.
Callbacks work just fine, as the logic for submitting a request
is quite different from the logic for completing a request; indeed,
they more closely mirror the hardware interrupt that signals
completion.
I wouldn't call coroutines popular at all, outside of Python
generators.
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Callbacks can be a clunky way of event handling, since they force you to
break up your logic sequence into discontiguous pieces. This is why
coroutines have become popular, since they keep the logic flow together.
Callbacks work just fine, as the logic for submitting a request
is quite different from the logic for completing a request ...
On 5/1/2024 1:34 PM, Lawrence D'Oliveiro wrote:
Remember, we’re talking about maximizing I/O throughput here, so CPU is
not the bottleneck.
It can be if your thread synchronization scheme is sub par.
So, the idea of the re-routine is a sort of co-routine. That is, it
fits the definition of a co-routine, though when the asynchronous
filling of the memo of its operation is unfulfilled, it quits by
throwing an exception, and is then expected to be called again once
the filling of the memo is fulfilled, at which point it returns.
Am 01.05.2024 um 22:34 schrieb Lawrence D'Oliveiro:
No, it is actually mostly C, with Rust making inroads these days.
C++ superseded C in that role a long time ago, judging by job offers.
Rust is a language a lot of people talk about and no one actually uses.
And you don’t have to be doing “system-level” programming to be needing
event-driven paradigms.
If you do asynchronous I/O you need performance, and that isn't
possible with Python.
No they aren’t. You cannot easily define a C++ function that returns a
general function or class as a result, just for example.
function<void ()> fn();
On 5/1/2024 10:39 PM, Lawrence D'Oliveiro wrote:
On Wed, 1 May 2024 22:20:47 -0700, Chris M. Thomasson wrote:
On 5/1/2024 1:34 PM, Lawrence D'Oliveiro wrote:
Remember, we’re talking about maximizing I/O throughput here, so CPU
is not the bottleneck.
It can be if your thread synchronization scheme is sub par.
Another reason to avoid threads.
Why? Believe it or not, there are ways to create _highly_ scalable
thread synchronization schemes.
So long as your async tasks have an await call somewhere in their main
loops, that should be sufficient to avoid most bottlenecks.
async tasks are using threads... No?
Remember, we’re talking about maximizing I/O throughput here, so CPU is
not the bottleneck.
With io_uring you can easily handle millions of I/Os with a single
thread, but not with Python.
Am 02.05.2024 um 15:53 schrieb David Brown:
You also claimed that classes are first-class objects in C++.
I never said that ...
... and having something like class Class in Java is beyond
C++'s performance constraints.
Am 02.05.2024 um 07:39 schrieb Lawrence D'Oliveiro:
Another reason to avoid threads. So long as your async tasks have an
await call somewhere in their main loops, that should be sufficient
to avoid most bottlenecks.
If you have a stream of individual I/Os and the processing of the I/Os
takes more time than the time between the I/Os, you need threads.
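One common middle ground, sketched here in Python under the assumption
that the processing is pure-Python and CPU-bound: keep the I/O on the
event loop and hand the processing to a worker pool, so new I/Os can
be accepted while earlier data is still being processed.

import asyncio
from concurrent.futures import ProcessPoolExecutor

def crunch(data: bytes) -> int:
    # Placeholder for the CPU-heavy processing of one I/O's payload.
    return sum(data) % 256

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        # The event loop stays free to service further I/O while the
        # workers process earlier payloads in parallel.
        results = await asyncio.gather(
            *(loop.run_in_executor(pool, crunch, b"x" * 1_000_000)
              for _ in range(4)))
    print(results)

if __name__ == "__main__":
    asyncio.run(main())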
The CPU can become a bottleneck.
So, there is no way to take advantage of multiple threads on Python?
Am 02.05.2024 um 15:53 schrieb David Brown:
That is a /long/ way from treating functions as first-class objects.
A C-style function is also a function-object in C++ because it has
a calling operator.
But it is certainly a step in that direction, as are lambdas.
Lambdas can be assigned to a function<> object to make them
runtime-polymorphic. Otherwise they can be generic types, which are
compile-time polymorphic, like the function object for std::sort().
You also claimed that classes are first-class objects in C++.
I never said that, and having something like class Class in Java is
beyond C++'s performance constraints.
On 5/2/2024 4:15 PM, Lawrence D'Oliveiro wrote:
On Thu, 2 May 2024 13:28:15 -0700, Chris M. Thomasson wrote:
On 5/1/2024 10:39 PM, Lawrence D'Oliveiro wrote:
On Wed, 1 May 2024 22:20:47 -0700, Chris M. Thomasson wrote:
On 5/1/2024 1:34 PM, Lawrence D'Oliveiro wrote:
Remember, we’re talking about maximizing I/O throughput here, so CPU
is not the bottleneck.
It can be if your thread synchronization scheme is sub par.
Another reason to avoid threads.
Why? Believe it or not, there are ways to create _highly_ scalable
thread synchronization schemes.
I’m sure there are. But none of that is relevant when the CPU isn’t the
bottleneck anyway.
The CPU can become a bottleneck. Depends on how the programmer
implements things.
So long as your async tasks have an await call somewhere in their main
loops, that should be sufficient to avoid most bottlenecks.
async tasks are using threads... No?
No. They are built on coroutines. Specifically, the “stackless” variety.
<https://gitlab.com/ldo/python_topics_notebooks/-/blob/master/Generators%20&%20Coroutines.ipynb?ref_type=heads>
So, there is no way to take advantage of multiple threads on Python?
Heck, even JavaScript has WebWorkers... ;^)
Am 03.05.2024 um 09:38 schrieb David Brown:
No it is not. C-style functions (or C++ functions for that matter)
are not objects, and do not have calling operators. Built-in
operators do not belong to a type, in the way that class operators do.
You can assign a C-style function pointer to an auto function-object.
That these function objects all have the same type doesn't matter.
You missed the point entirely. Lambdas can be used in many ways like
functions, and it is possible for one function (or lambda) to return a
different function, and can be used for higher-order functions
(functions that have functions as parameters or return types). They
do not mean that C++ can treat functions as first-class objects, but
they /do/ mean that you can get many of the effects you might want if
C++ functions really were first-class objects.
C-style functions and lambda-types are generically interchangeable.
Am 03.05.2024 um 11:18 schrieb David Brown:
On 03/05/2024 09:58, Bonita Montero wrote:
Am 03.05.2024 um 09:38 schrieb David Brown:
No it is not. C-style functions (or C++ functions for that
matter) are not objects, and do not have calling operators.
Built-in operators do not belong to a type, in the way that class
operators do.
You can assign a C-style function pointer to an auto
function-object.
A C-style function /pointer/ is an object. A C-style /function/ is
not. Do you understand the difference?
Practically there isn't a difference.
On 03/05/2024 01:58, Chris M. Thomasson wrote:
On 5/2/2024 4:15 PM, Lawrence D'Oliveiro wrote:
On Thu, 2 May 2024 13:28:15 -0700, Chris M. Thomasson wrote:
On 5/1/2024 10:39 PM, Lawrence D'Oliveiro wrote:
On Wed, 1 May 2024 22:20:47 -0700, Chris M. Thomasson wrote:
On 5/1/2024 1:34 PM, Lawrence D'Oliveiro wrote:
Remember, we’re talking about maximizing I/O throughput here,
so CPU is not the bottleneck.
It can be if your thread synchronization scheme is sub par.
Another reason to avoid threads.
Why? Believe it or not, there are ways to create _highly_ scalable
thread synchronization schemes.
I’m sure there are. But none of that is relevant when the CPU
isn’t the bottleneck anyway.
The CPU can become a bottleneck. Depends on how the programmer
implements things.
So long as your async tasks have an await call somewhere in
their main loops, that should be sufficient to avoid most
bottlenecks.
async tasks are using threads... No?
No. They are built on coroutines. Specifically, the “stackless”
variety.
<https://gitlab.com/ldo/python_topics_notebooks/-/blob/master/Generators%20&%20Coroutines.ipynb?ref_type=heads>
So, there is no way to take advantage of multiple threads on
Python? Heck, even JavaScript has WebWorkers... ;^)
Python supports multi-threading. It uses a global lock (the "GIL")
in the Python interpreter - thus only one thread can be running
Python code at a time. However, if you are doing anything serious
with Python, much of the time will be spent either blocked (waiting
for network, I/O, etc.) or using compiled or external code (using
your favourite GUI toolkit, doing maths with numpy, etc.). The GIL
is released while executing such code.
Thus if you are using Python for CPU-intensive work (and doing so
sensibly), you have full multi-threading. If you are using it for
I/O-intensive work, you have full multi-threading. It's not going to
be as efficient as well-written compiled code, even with JIT and
PyPy, but in practice it gets pretty close while being very
convenient and developer friendly.
If you really need parallel running of Python code, or better
separation between tasks, Python has a multiprocessing module that
makes it simple to control and pass data between separate Python
processes, each with their own GIL.
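A minimal sketch of that last point, using nothing beyond the standard
library: each worker process has its own interpreter and its own GIL,
so the pure-Python loop really does run on several cores.

import multiprocessing

def burn(n):
    # Pure-Python CPU work; in a separate process it is not
    # serialized against the parent interpreter's GIL.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    with multiprocessing.Pool() as pool:
        print(pool.map(burn, [5_000_000] * 4))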
Am 03.05.2024 um 17:05 schrieb Michael S:
A typical scenario is that you started your Python program while
thinking that it wouldn't be CPU-intensive. And then it grew and
became CPU-intensive.
That's actually a good case, because it means that your program is
used and is doing something worthwhile.
I don't think it makes a big difference if Python has a GIL or
not since it is interpreted and extremely slow with that anyway.
Am 01.05.2024 um 22:31 schrieb Lawrence D'Oliveiro:
Callbacks can be a clunky way of event handling, since they force you
to break up your logic sequence into discontiguous pieces.
Callbacks are the most convenient approach for asynchronous I/O.
On Fri, 3 May 2024 17:20:00 +0200
Bonita Montero <Bonita.Montero@gmail.com> wrote:
Am 03.05.2024 um 17:05 schrieb Michael S:
A typical scenario is that you started your Python program while
thinking that it wouldn't be CPU-intensive. And then it grew and
became CPU-intensive.
That's actually a good case, because it means that your program is
used and is doing something worthwhile.
I don't think it makes a big difference if Python has a GIL or
not since it is interpreted and extremely slow with that anyway.
64 times faster than slow wouldn't be fast, but could be acceptable.
And 64 HW threads nowadays is almost low-end server, I have one at
work, just in case.
Also, I don't see why in the future Python could not be JITted.
Javascript was also considered slow 15-20 years ago, now it's pretty
fast.
But then, my knowledge of Python is very shallow. Possibly it's not
JITted yet because of fundamental reasons rather than due to lack of
demand.
Am 03.05.2024 um 01:16 schrieb Lawrence D'Oliveiro:
On Thu, 2 May 2024 07:53:21 +0200, Bonita Montero wrote:
If you have a stream of individual I/Os and the processing of the I/Os
takes more time than the time between the I/Os you need threads.
That makes the CPU the bottleneck. Which is not the case we’re
discussing here.
No, the processing between the I/Os can mostly depend on other I/Os,
which is the standard case for server applications.
Am 02.05.2024 um 08:48 schrieb Lawrence D'Oliveiro:
The normal, non-comedy way of handling this is to have the task await
something variously called a “future” or “promise”: when that object is
marked as completed, then the task is automatically woken again to
fulfil its purpose.
The problem with a future and a promise is that in most languages you
can't wait on multiple futures at once to get out-of-order completion.
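Python's asyncio is one counterexample, for what it's worth; a minimal
sketch in which results are consumed strictly in completion order, not
submission order.

import asyncio

async def job(name, delay):
    await asyncio.sleep(delay)
    return name

async def main():
    tasks = [asyncio.create_task(job(n, d))
             for n, d in [("slow", 0.3), ("fast", 0.1), ("medium", 0.2)]]
    # as_completed yields each future as soon as it finishes.
    for fut in asyncio.as_completed(tasks):
        print(await fut)   # prints fast, medium, slow

asyncio.run(main())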
Am 03.05.2024 um 01:21 schrieb Lawrence D'Oliveiro:
I/O performance certainly is possible with Python, and it has the high-
performance production-quality frameworks to prove it.
I was thinking of high-performance code with >= 1e5 I/Os per second.
That's not possible with Python.
Try it with something that has actual lexically-bound local variables
in it:
def factory(count : int) :

    def counter() :
        nonlocal count
        count += 1
        return count
    #end counter

#begin
    return counter
#end factory

f1 = factory(3)
f2 = factory(30)
print(f1())
print(f2())
print(f1())
print(f2())
output:
4
31
5
32
That should be similar:
#include <iostream>
#include <functional>
using namespace std;
function<int ()> factory( int count )
{
    // each call returns a closure holding its own copy of count
    return [count]() mutable
    {
        return ++count;
    };
}
int main()
{
    auto
        f1 = factory( 3 ),
        f2 = factory( 30 );
    cout << f1() << endl; // 4
    cout << f2() << endl; // 31
    cout << f1() << endl; // 5
    cout << f2() << endl; // 32
}
On 5/3/2024 7:30 PM, Lawrence D'Oliveiro wrote:
In that situation, multithreading isn’t going to speed things up.
ummm, so what does the server do after getting an I/O completion...?
On Fri, 3 May 2024 18:01:02 +0300, Michael S wrote:
For C, I agree, mostly because C has no nested functions.
GCC implements nested functions in the C compiler. Though oddly, not in
C++.
Am 03.05.2024 um 17:47 schrieb Michael S:
I don't think it makes a big difference if Python has a GIL or
not since it is interpreted and extremely slow with that anyway.
64 times faster than slow wouldn't be fast, but could be acceptable.
...
With a GIL the code doesn't scale.
On 03/05/2024 16:47, Michael S wrote:
On Fri, 3 May 2024 17:20:00 +0200
Bonita Montero <Bonita.Montero@gmail.com> wrote:
Am 03.05.2024 um 17:05 schrieb Michael S:
A typical scenario is that you started your Python program while
thinking that it wouldn't be CPU-intensive. And then it grew and
became CPU-intensive.
That's actually a good case, because it means that your program is
used and is doing something worthwhile.
I don't think it makes a big difference if Python has a GIL or
not since it is interpreted and extremely slow with that anyway.
64 times faster than slow wouldn't be fast, but could be acceptable.
And 64 HW threads nowadays is almost low-end server, I have one at
work, just in case.
Also, I don't see why in the future Python could not be JITted.
Javascript was also considered slow 15-20 years ago, now it's pretty
fast.
But then, my knowledge of Python is very shallow. Possibly it's not
JITted yet because of fundamental reasons rather than due to lack of demand.
PyPy has been around for many years.
Am 04.05.2024 um 04:33 schrieb Lawrence D'Oliveiro:
On Fri, 3 May 2024 08:45:58 +0200, Bonita Montero wrote:
Am 03.05.2024 um 01:21 schrieb Lawrence D'Oliveiro:
I/O performance certainly is possible with Python, and it has the
high-performance production-quality frameworks to prove it.
I thought about high performance code with >= 1e5 IOs/s.
That's not possible with Python.
Sure it is. ....
Absolutely not, Python is too slow for that.
On 04.05.2024 01:20, Lawrence D'Oliveiro wrote:
On Fri, 3 May 2024 18:01:02 +0300, Michael S wrote:
For C, I agree, mostly because C has no nested functions.
GCC implements nested functions in the C compiler. Though oddly, not in
C++.
C++ already has functions nested in namespaces, namespaces nested in
namespaces, functions nested in classes (static and non-static member
functions), and classes nested in classes. It's already a lot of
nesting, no need to complicate matters more.
In Pascal, function nesting is used for better encapsulation of data. In
C++, the same is achieved in a cleaner and more explicit way via classes
and member functions, so there is no need for this kind of nesting.
On Sat, 4 May 2024 20:41:42 +0300, Paavo Helde wrote:
On 04.05.2024 01:20, Lawrence D'Oliveiro wrote:
On Fri, 3 May 2024 18:01:02 +0300, Michael S wrote:
For C, I agree, mostly because C has no nested functions.
GCC implements nested functions in the C compiler. Though oddly, not in
C++.
C++ already has functions nested in namespaces, namespaces nested in
namespaces, functions nested in classes (static and non-static member
functions), and classes nested in classes. It's already a lot of
nesting, no need to complicate matters more.
In Pascal, function nesting is used for better encapsulation of data. In
C++, the same is achieved in a cleaner and more explicit way via classes
and member functions, so there is no need for this kind of nesting.
Interesting, isn’t it? You mention all the complications of C++, and how
it doesn’t need yet more complications tacked on top to support something
as simple as lexical binding. Yet Pascal had lexical binding from the
get-go, and managed it in a much simpler way.
On Sat, 4 May 2024 00:27:53 +0100
bart <bc@freeuk.com> wrote:
On 03/05/2024 16:47, Michael S wrote:
On Fri, 3 May 2024 17:20:00 +0200
Bonita Montero <Bonita.Montero@gmail.com> wrote:
Am 03.05.2024 um 17:05 schrieb Michael S:
A typical scenario is that you started your Python program while
thinking that it wouldn't be CPU-intensive. And then it grew and
became CPU-intensive.
That's actually a good case, because it means that your program is
used and is doing something worthwhile.
I don't think it makes a big difference if Python has a GIL or
not since it is interpreted and extremely slow with that anyway.
64 times faster than slow wouldn't be fast, but could be acceptable.
And 64 HW threads nowadays is almost low-end server, I have one at
work, just in case.
Also, I don't see why in the future Python could not be JITted.
Javascript was also considered slow 15-20 years ago, now it's pretty
fast.
But then, my knowledge of Python is very shallow. Possibly it's not
JITted yet because of fundamental reasons rather than due to lack of
demand.
PyPy has been around for many years.
I see.
So why hasn't PyPy replaced the interpreter as the default engine?
As a code reader, I very much prefer C, where nested functions are not
allowed at all.
On Sun, 5 May 2024 12:37:18 +0300, Michael S wrote:
As a code reader, I very much prefer C, where nested functions are
not allowed at all.
The GNU C compiler allows them: see my example in the “Recursion, Yo” thread.
On Sun, 5 May 2024 12:37:18 +0300, Michael S wrote:
As a code reader, I very much prefer C, where nested functions are not
allowed at all.
The GNU C compiler allows them: see my example in the “Recursion, Yo” thread.
If you target a certain platform, relying on the compiler is the least problem.
On Mon, 13 May 2024 00:43:38 -0000 (UTC) Lawrence D'Oliveiro
<ldo@nz.invalid> wrote:
On Sun, 5 May 2024 12:37:18 +0300, Michael S wrote:
As a code reader, I very much prefer C, where nested functions are not
allowed at all.
The GNU C compiler allows them: see my example in the “Recursion, Yo”
thread.
Which does not make it legal C. Or a good idea.
On Fri, 3 May 2024 13:23:13 +0200
Bonita Montero <Bonita.Montero@gmail.com> wrote:
Am 03.05.2024 um 11:18 schrieb David Brown:
On 03/05/2024 09:58, Bonita Montero wrote:
Am 03.05.2024 um 09:38 schrieb David Brown:
No it is not. C-style functions (or C++ functions for that
matter) are not objects, and do not have calling operators.
Built-in operators do not belong to a type, in the way that class
operators do.
You can assign a C-style function pointer to an auto
function-object.
A C-style function /pointer/ is an object. A C-style /function/ is
not. Do you understand the difference?
Practically there isn't a difference.
For C, I agree, mostly because C has no nested functions.
For C++ (after C++11) I am less sure, because of lambdas with
non-empty captures.
On Mon, 13 May 2024 16:52:36 +0200, Bonita Montero wrote:
If you target a certain platform relying on the compiler is the least
problem.
GCC is the closest we have to a de-facto-standard compiler, too.
Tim Rentsch <tr.17687@z991.linuxsc.com> writes:
[...]
First, a pointer is not an object. In both C and C++, any pointer,
including a function pointer, is a scalar value. A pointer value
might be held in an object but it doesn't have to be. In most cases
function pointers are not stored in objects but simply used to call
the function pointed to.
[...]
Certainly a pointer value is not an object. Certainly a pointer
object *is* an object. It's not uncommon to informally refer to a
pointer object as "a pointer". I presume you would consider such
usage to be incorrect, and I don't disagree, but it is fairly
common.
I often find it useful to avoid referring to "pointers", and
instead refer to "pointer types", "pointer values", "pointer
objects", and so on (likewise for arrays).
The C standard does not, as far as I can tell, provide a
definition for the standalone term "pointer". (I could have
missed something; I checked section 3, "Terms, definitions, and
symbols", and the index.) But the standard does, in several
places, use the term "pointer" to refer to a pointer value. I
don't know whether it's consistent.