• Re: fun with a mutex...

    From Muttley@DastardlyHQ.org@21:1/5 to All on Mon Aug 4 08:48:33 2025
    On Sat, 2 Aug 2025 12:58:23 -0700
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
    On 7/29/2025 10:54 PM, Chris M. Thomasson wrote:
    This is using the idea of trying to do other work while a mutex is
    contended. Why spin or wait when we don't really have to? The per-thread
    random numbers in this experiment are only there to simulate when
    a contended thread has any other work to do. This can be checking per-
    thread stacks, queues, etc... Here is my example code. Can you get it to
    run, and can you show me the output? Now, this can work in real world
    environments. It's a contrast to an adaptive mutex that just spins doing
    nothing... lol... ;^)

    I compiled it using c++20

    (read all...)
    [snip code]
    Any luck? It's fun to see how many work items were completed when the
    mutex was contended...

    I guess the main point is that if a try_lock() fails, then we can _try_
    to do "other" work... That can be checking thread local queues, stacks,
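The try_lock()-then-other-work loop being described could be sketched roughly as follows. This is a minimal illustration, not the posted code; the thread-local work queue, the `drain_one` helper, and the backoff bound of 8 are all assumptions made for the sketch:

```cpp
#include <cassert>
#include <deque>
#include <mutex>

// Hypothetical per-thread queue of "other" work items.
thread_local std::deque<int> local_work;
int drained = 0; // counts work items completed during contention

// Pop and "process" one local work item, if any.
void drain_one() {
    if (!local_work.empty()) {
        local_work.pop_front();
        ++drained;
    }
}

// Acquire m, but on contention try bounded amounts of local work
// before finally blocking in lock().
void locked_region(std::mutex& m) {
    const int kMaxBackoff = 8; // bound guarantees we eventually block
    int tries = 0;
    while (!m.try_lock()) {
        if (tries++ < kMaxBackoff) {
            drain_one();       // do "other" work instead of spinning
        } else {
            m.lock();          // give up and wait in the kernel
            break;
        }
    }
    // ... critical section ...
    m.unlock();
}
```

On an uncontended mutex the try_lock succeeds immediately and no local work is drained; the fallback path only runs under contention.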

    I never really understood the point of this semi-sequential paradigm. If you have other work to do then do it in another thread, that's the whole point of threading! Meanwhile if a thread needs to wait for a lock then just let it wait.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Tue Aug 5 07:27:17 2025
    On Mon, 4 Aug 2025 12:34:08 -0700
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
    On 8/4/2025 1:48 AM, Muttley@DastardlyHQ.org wrote:
    I never really understood the point of this semi-sequential paradigm. If you
    have other work to do then do it in another thread, that's the whole point of
    threading!

    This already works with multiple threads, no problem. But, if a thread
    fails to acquire the mutex, it can check to see if it has any other work
    to do instead of just waiting in the kernel doing nothing, or spinning
    doing nothing... Make sense to you?

    Yes, I understand the concept, its not rocket science.

    Meanwhile if a thread needs to wait for a lock then just let it wait.


    The idea is instead of letting it wait, why not let it try to do some
    other work? Think of an adaptive mutex. On contention it will spin for a
    bounded number of times in user space before having to wait in the
    kernel. Well, instead of spinning, try to do something else, some real
    work? It will keep the thread hot, doing work instead of just waiting...

    Because while it's doing something else the lock may become free and then
    become acquired again, leading to a kind of race condition of some threads
    constantly bouncing off locks, checking for other work (which takes CPU
    cycles), finding nothing, then going back to the lock, rinse and repeat.
    Far simpler to have threads waiting on locks while other threads get on
    with work. It's not only more efficient, it also leads to simpler code.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Fred. Zwarts@21:1/5 to All on Tue Aug 5 10:27:21 2025
    Op 04.aug.2025 om 21:34 schreef Chris M. Thomasson:
    On 8/4/2025 1:48 AM, Muttley@DastardlyHQ.org wrote:
    On Sat, 2 Aug 2025 12:58:23 -0700
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
    On 7/29/2025 10:54 PM, Chris M. Thomasson wrote:
    This is using the idea of trying to do other work while a mutex is
    contended. Why spin or wait when we don't really have to? The per-thread
    random numbers in this experiment are only there to simulate when
    a contended thread has any other work to do. This can be checking per-
    thread stacks, queues, etc... Here is my example code. Can you get it to
    run, and can you show me the output? Now, this can work in real world
    environments. It's a contrast to an adaptive mutex that just spins doing
    nothing... lol... ;^)

    I compiled it using c++20

    (read all...)
    [snip code]
    Any luck? It's fun to see how many work items were completed when the
    mutex was contended...

    I guess the main point is that if a try_lock() fails, then we can _try_
    to do "other" work... That can be checking thread local queues, stacks,

    I never really understood the point of this semi-sequential paradigm.
    If you have other work to do then do it in another thread, that's the
    whole point of threading!

    This already works with multiple threads, no problem. But, if a thread
    fails to acquire the mutex, it can check to see if it has any other work
    to do instead of just waiting in the kernel doing nothing, or spinning
    doing nothing... Make sense to you?


    Meanwhile if a thread needs to wait for a lock then just let it wait.


    The idea is instead of letting it wait, why not let it try to do some
    other work? Think of an adaptive mutex. On contention it will spin for a
    bounded number of times in user space before having to wait in the
    kernel. Well, instead of spinning, try to do something else, some real
    work? It will keep the thread hot, doing work instead of just waiting...


    Maybe in old single processor systems it could be useful to avoid
    spinning and make the processor available for other work.
    Nowadays each system has many processors and if one of them spins for a
    brief period, that is not a problem. The work that can be done when one
    thread is spinning, can be done in other threads.
    So, the need for such a complicated solution might be present only in
    very exceptional cases.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Chris M. Thomasson on Wed Aug 6 00:38:54 2025
    In article <106u31a$31k65$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/5/2025 12:27 AM, Muttley@DastardlyHQ.org wrote:
    On Mon, 4 Aug 2025 12:34:08 -0700
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
    [snip]
    This already works with multiple threads, no problem. But, if a thread
    fails to acquire the mutex, it can check to see if it has any other work
    to do instead of just waiting in the kernel doing nothing, or spinning
    doing nothing... Make sense to you?

    The idea behind what you're trying to do is well-understood.

    In fact, this is what generally happens when you try to acquire
    a contended, blocking mutex: if the mutex is currently locked by
    some other thread, you mark the acquiring thread as blocked
    waiting on the mutex, and then yield control back to a scheduler
    or something; eventually the other thread will finish what it
    was doing and release the mutex, but part of releasing the mutex
    and making it available again is marking threads that were
    blocked on it runnable, thus opening them up to be scheduled
    again, where they can once again try to acquire the mutex.

    The twist here seems to be that, instead of immediately
    yielding, for some bounded number of times, you look at a work
    queue and try to find something else for the acquiring thread to
    do while the lock you want is unavailable. Regardless, the
    effect is more or less the same.

    Personally, I see little benefit, and a number of potential
    pitfalls. In particular, I see no reason why the bundled bits
    of work in this scheme can't cause any number of lock ordering
    violations and lead to all sorts of deadlock scenarios. Suppose
    for example, that I acquire some mutex, A, and then go to
    acquire a second mutex, B: B is currently held by some other
    thread, so I begin to spin, but I see that there is an item in
    my work queue, so I go to do that. The first thing that work
    item does is try to acquire mutex A, which I'm currently
    holding, leading to deadlock against A.

    One could perhaps assert that these work items are not allowed
    to take or hold locks, which could address this particular
    failure mode, but it's unclear how one might enforce this in
    general, and there may well be other, similar failure modes.
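The lock-ordering hazard described above can be made concrete with a small sketch. A minimal hand-rolled try-lock on `std::atomic_flag` is used here (an assumption for the sketch) because calling `std::mutex::try_lock` on a mutex the calling thread already owns is undefined behavior; with the flag, the self-deadlock is well-defined to observe rather than fatal:

```cpp
#include <atomic>
#include <cassert>

// Mutex "A" from the scenario, as a minimal atomic_flag try-lock.
std::atomic_flag A = ATOMIC_FLAG_INIT;

bool try_lock_A() { return !A.test_and_set(std::memory_order_acquire); }
void unlock_A()   { A.clear(std::memory_order_release); }

// The queued work item: the first thing it does is need mutex A.
bool work_item() {
    if (try_lock_A()) { unlock_A(); return true; }
    return false; // A is held -- by this very thread; lock() would hang
}

// The scenario from the post: we hold A, can't get B, and so we
// "helpfully" run a queued work item inline while waiting.
bool deadlock_would_occur() {
    bool got_a = try_lock_A();  // we hold A...
    bool ran   = work_item();   // ...and the work item wants A too
    unlock_A();
    return got_a && !ran;       // true: inline work could not proceed
}
```

If `work_item` had used a blocking acquire instead of a try, the thread would spin or block on a lock it already holds, which is the deadlock being warned about.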

    [snip]
    The idea is instead of letting it wait, why not let it try to do some
    other work? Think of an adaptive mutex. On contention it will spin for a
    bounded number of times in user space before having to wait in the
    kernel. Well, instead of spinning, try to do something else, some real
    work? It will keep the thread hot, doing work instead of just waiting...

    Because while it's doing something else the lock may become free then
    become acquired again leading to a kind of race condition of some threads

    Nope. A thread can only do something else for a bounded number of times.
    So, it's starvation-free. No race condition.

    This is simply not true. Suppose that the "something else" is
    computationally intensive, or blocks the thread somehow (say, by
    executing a blocking system call). What if the "something else"
    just spins in an infinite loop, waiting for some event? Just
    because you bound the number of times you do "something else"
    does not mean you are free of starvation, because what the other
    thing actually does matters.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Fred. Zwarts@21:1/5 to All on Wed Aug 6 10:56:59 2025
    Op 06.aug.2025 om 01:07 schreef Chris M. Thomasson:
    On 8/5/2025 1:27 AM, Fred. Zwarts wrote:
    Op 04.aug.2025 om 21:34 schreef Chris M. Thomasson:
    On 8/4/2025 1:48 AM, Muttley@DastardlyHQ.org wrote:
    On Sat, 2 Aug 2025 12:58:23 -0700
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
    On 7/29/2025 10:54 PM, Chris M. Thomasson wrote:
    This is using the idea of trying to do other work while a mutex is
    contended. Why spin or wait when we don't really have to? The per-thread
    random numbers in this experiment are only there to simulate when
    a contended thread has any other work to do. This can be checking per-
    thread stacks, queues, etc... Here is my example code. Can you get it to
    run, and can you show me the output? Now, this can work in real world
    environments. It's a contrast to an adaptive mutex that just spins doing
    nothing... lol... ;^)

    I compiled it using c++20

    (read all...)
    [snip code]
    Any luck? It's fun to see how many work items were completed when the
    mutex was contended...

    I guess the main point is that if a try_lock() fails, then we can
    _try_
    to do "other" work... That can be checking thread local queues,
    stacks,

    I never really understood the point of this semi-sequential paradigm.
    If you have other work to do then do it in another thread, that's the
    whole point of threading!

    This already works with multiple threads, no problem. But, if a
    thread fails to acquire the mutex, it can check to see if it has any
    other work to do instead of just waiting in the kernel doing nothing,
    or spinning doing nothing... Make sense to you?


    Meanwhile if a thread needs to wait for a lock then just let it wait.


    The idea is instead of letting it wait, why not let it try to do some
    other work? Think of an adaptive mutex. On contention it will spin
    for a bounded number of times in user space before having to wait in
    the kernel. Well, instead of spinning, try to do something else, some
    real work? It will keep the thread hot, doing work instead of just
    waiting...


    Maybe in old single processor systems it could be useful to avoid
    spinning and make the processor available for other work.
    Nowadays each system has many processors and if one of them spins for
    a brief period, that is not a problem. The work that can be done when
    one thread is spinning, can be done in other threads.
    So, the need for such a complicated solution might be present only in
    very exceptional cases.

    Say the mutex is not all that "critical" to grab. Also, the backoff
    limit is there to make sure that the mutex shall be acquired. Also, a
    lot of this other work can be thread local, so another thread cannot
    start working on it? When the mutex is acquired a thread can do some
    interesting things. Check global state, check for update commands, to
    update the server, etc... But, the point is that when the mutex is not
    able to be locked, the thread will at least try to do something else?
    Is this okay, or just pie in the sky crap?


    As I said, I can see some very exceptional cases where this may be needed.
    But in the general case it is too vague, because 'something else' is not
    well defined. In the general case I would prefer a redesign, where the
    'something else' is done in another thread.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Chris M. Thomasson on Wed Aug 6 11:29:37 2025
    In article <106udpt$3400b$1@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/5/2025 5:38 PM, Dan Cross wrote:
    In article <106u31a$31k65$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/5/2025 12:27 AM, Muttley@DastardlyHQ.org wrote:
    On Mon, 4 Aug 2025 12:34:08 -0700
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
    [snip]
    This already works with multiple threads, no problem. But, if a thread
    fails to acquire the mutex, it can check to see if it has any other work
    to do instead of just waiting in the kernel doing nothing, or spinning
    doing nothing... Make sense to you?

    The idea behind what you're trying to do is well-understood.

    In fact, this is what generally happens when you try to acquire
    a contended, blocking mutex: if the mutex is currently locked by
    some other thread, you mark the acquiring thread as blocked
    waiting on the mutex, and then yield control back to a scheduler
    or something; eventually the other thread will finish what it
    was doing and release the mutex, but part of releasing the mutex
    and making it available again is marking threads that were
    blocked on it runnable, thus opening them up to be scheduled
    again, where they can once again try to acquire the mutex.

    The twist here seems to be that, instead of immediately
    yielding, for some bounded number of times, you look at a work
    queue and try to find something else for the acquiring thread to
    do while the lock you want is unavailable. Regardless, the
    effect is more or less the same.

    Personally, I see little benefit, and a number of potential
    pitfalls. In particular, I see no reason why the bundled bits
    of work in this scheme can't cause any number of lock ordering
    violations and lead to all sorts of deadlock scenarios. Suppose
    for example, that I acquire some mutex, A, and then go to
    acquire a second mutex, B: B is currently held by some other
    thread, so I begin to spin, but I see that there is an item in
    my work queue, so I go to do that. The first thing that work
    item does is try to acquire mutex A, which I'm currently
    holding, leading to deadlock against A.

    One could perhaps assert that these work items are not allowed
    to take or hold locks, which could address this particular
    failure mode, but it's unclear how one might enforce this in
    general, and there may well be other, similar failure modes.

    [snip]
    The idea is instead of letting it wait, why not let it try to do some
    other work? Think of an adaptive mutex. On contention it will spin for a
    bounded number of times in user space before having to wait in the
    kernel. Well, instead of spinning, try to do something else, some real
    work? It will keep the thread hot, doing work instead of just waiting...

    Because while it's doing something else the lock may become free then
    become acquired again leading to a kind of race condition of some threads

    Nope. A thread can only do something else for a bounded number of times.
    So, it's starvation-free. No race condition.

    This is simply not true. Suppose that the "something else" is
    computationally intensive, or blocks the thread somehow (say, by
    executing a blocking system call).

    This is where trying to do other work comes in. Also, this is where the
    nightmare condition can occur. Try to do other work, non-blocking.

    If you are talking about some kind of general mechanism, how
    would you even know in advance what the "something else" is?

    Say, trying to take an IOCP completion using a timeout of zero? If we got
    data, we add it to a local queue or something. No blocking.

    You're talking about Win32, consider how
    `GetQueuedCompletionStatus` might be implemented. You may well
    be taking a round-trip through the kernel to do that, which is
    very expensive compared to, say, incrementing a register.

    What if the "something else"
    just spins in an infinite loop, waiting for some event? Just
    because you bound the number of times you do "something else"
    does not mean you are free of starvation, because what the other
    thing actually does matters.

    No. If there is no other work to do, it falls back to lock(). If it did
    too much work, it falls back to lock.

    You're missing the point: there's no good way to determine
    whether _any_ amount of work is "too much".

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Chris M. Thomasson on Wed Aug 6 12:04:49 2025
    In article <106udu9$3400b$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/5/2025 5:38 PM, Dan Cross wrote:
    In article <106u31a$31k65$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/5/2025 12:27 AM, Muttley@DastardlyHQ.org wrote:
    On Mon, 4 Aug 2025 12:34:08 -0700
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
    [snip]
    This already works with multiple threads, no problem. But, if a thread
    fails to acquire the mutex, it can check to see if it has any other work
    to do instead of just waiting in the kernel doing nothing, or spinning
    doing nothing... Make sense to you?

    The idea behind what you're trying to do is well-understood.

    In fact, this is what generally happens when you try to acquire
    a contended, blocking mutex: if the mutex is currently locked by
    some other thread, you mark the acquiring thread as blocked
    waiting on the mutex, and then yield control back to a scheduler
    or something; eventually the other thread will finish what it
    was doing and release the mutex, but part of releasing the mutex
    and making it available again is marking threads that were
    blocked on it runnable, thus opening them up to be scheduled
    again, where they can once again try to acquire the mutex.

    The twist here seems to be that, instead of immediately
    yielding, for some bounded number of times, you look at a work
    queue and try to find something else for the acquiring thread to
    do while the lock you want is unavailable. Regardless, the
    effect is more or less the same.

    Personally, I see little benefit, and a number of potential
    pitfalls. In particular, I see no reason why the bundled bits
    of work in this scheme can't cause any number of lock ordering
    violations and lead to all sorts of deadlock scenarios. Suppose
    for example, that I acquire some mutex, A, and then go to
    acquire a second mutex, B: B is currently held by some other
    thread, so I begin to spin, but I see that there is an item in
    my work queue, so I go to do that. The first thing that work
    item does is try to acquire mutex A, which I'm currently
    holding, leading to deadlock against A.

    One could perhaps assert that these work items are not allowed
    to take or hold locks, which could address this particular
    failure mode, but it's unclear how one might enforce this in
    general, and there may well be other, similar failure modes.

    [snip]
    The idea is instead of letting it wait, why not let it try to do some
    other work? Think of an adaptive mutex. On contention it will spin for a
    bounded number of times in user space before having to wait in the
    kernel. Well, instead of spinning, try to do something else, some real
    work? It will keep the thread hot, doing work instead of just waiting...

    Because while it's doing something else the lock may become free then
    become acquired again leading to a kind of race condition of some threads

    Nope. A thread can only do something else for a bounded number of times.
    So, it's starvation-free. No race condition.

    This is simply not true. Suppose that the "something else" is
    computationally intensive, or blocks the thread somehow (say, by
    executing a blocking system call). What if the "something else"
    just spins in an infinte loop, waiting for some event? Just
    because you bound the number of times you do "something else"
    does not mean you are free of starvation, because what the other
    thing actual does matters.

    Using this pattern, well, can be a nightmare, but can be somewhat
    beneficial at the same time?

    Yeah, so beneficial that it's been codified many times over:
    acquiring a contended blocking mutex yields, so that something
    else can run instead.

    If a programmer knew that a class of
    functions are compatible with the try_lock failed condition, well, why
    not give them a go? Better than spinning doing nothing, or god forbid,
    waiting in the kernel?

    The whole premise of a spinlock is that the critical section
    that it protects is short (like, a few instructions), so the
    overhead of spinning is smaller than the overhead of context
    switching to something else. If that is not true, you shouldn't
    be using a spinlock as your mutex primitive in the first place,
    and a blocking or adaptive mutex is more appropriate. By the
    way, there's nothing at all wrong with blocking in the kernel;
    given a reasonable OS, it's not like if *YOUR THREAD* isn't
    running the CPU will be idle or something.
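The adaptive (spin-then-block) mutex referenced in the paragraphs above can be sketched like this. The spin bound of 64 and the yield-per-iteration are illustrative choices, not a description of any particular implementation:

```cpp
#include <mutex>
#include <thread>

// Spin briefly in user space (cheap when critical sections are a few
// instructions), then fall back to a blocking lock() so the CPU is
// not wasted on a long wait.
class AdaptiveMutex {
    std::mutex m_;
public:
    void lock() {
        for (int i = 0; i < 64; ++i) {   // bounded user-space spin
            if (m_.try_lock()) return;   // fast path: got it
            std::this_thread::yield();
        }
        m_.lock();                       // slow path: wait in the kernel
    }
    bool try_lock() { return m_.try_lock(); }
    void unlock()   { m_.unlock(); }
};
```

The point of contrast in the thread is what happens inside that bounded loop: an adaptive mutex just yields or spins, while the proposed scheme would attempt "other work" there instead.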

    But if you still want to avoid blocking when a mutex is
    unavailable, then you've got the idea inverted. Focusing on the
    locking protocol as some kind of generic place to do "other
    work" is too general to be useful, as has been shown. To be
    truly useful, this sort of approach must be coupled with
    knowledge of what a thread can safely do in context, which means
    that you really need to specialize the behavior for the
    _critical section_, not the _mutex_: the programmer knows why
    the thread is trying to acquire a mutex in that case, and what
    other work may or may not be acceptable to do if the mutex isn't
    immediately available. So they can craft the critical section
    to take advantage of that knowledge themselves. This sort of
    technique is well-known, by the way.

    But then it's not going to be particularly general. And you may
    argue that the critical section might be buried deep down in a
    call stack, very far away from the place where the programmer
    can reasonably make a decision about whether it's safe to do
    something or not, so that's a more appropriate place: but that
    reinforces the objections: if the critical section is so far
    away from the point where the programmer knows whether it's safe
    for the thread to do something else, then the programmer doesn't
    really know whether that thing is safe, because the intervening
    calls might have changed conditions so that it's not, so trying
    to plumb that knowledge down to the mutex code is again too
    dangerous to be useful.
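Specializing the fallback at the critical section, rather than in the mutex, might look like the following sketch. The producer/queue names and the batching strategy are assumptions chosen to illustrate the point: the fallback (buffering into thread-local storage) is known safe *for this specific critical section* because it touches only local state:

```cpp
#include <cassert>
#include <mutex>
#include <vector>

std::mutex q_mutex;             // protects the shared queue
std::vector<int> shared_q;      // the shared queue itself

// Per-thread staging buffer: always safe to touch, no locks needed.
thread_local std::vector<int> pending;

// Produce an item; if the queue lock is contended, batch locally
// instead of blocking, and flush the batch on a later success.
void produce(int item) {
    pending.push_back(item);          // known-safe fallback work
    if (q_mutex.try_lock()) {         // flush only when uncontended
        shared_q.insert(shared_q.end(), pending.begin(), pending.end());
        pending.clear();
        q_mutex.unlock();
    }
}
```

Here the programmer, not a generic mutex wrapper, decided what "other work" is acceptable when the lock is unavailable, which is exactly the distinction being drawn above.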

    But even if the programmer could safely interpose doing
    "something else" in this way, they may not have full knowledge
    to know whether or not that will cause the thread to block.
    Consider, for example, a program that tries to do something
    innocuous, like zeroing a few bytes' worth of memory while
    trying to take a lock; what if the page holding the memory to be
    zeroed has been stolen by the OS in the meantime, and now the
    program (totally unbeknownst to it) incurs a page fault that
    requires reading data from secondary storage to resolve? Not
    only has a single store now blocked your thread in the big scary
    kernel, but it's blocked it for many, many cycles.

    So while this technique is known and used in existing systems,
    it must be done so carefully that it really isn't appropriate as
    a general mechanism.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Muttley@DastardlyHQ.org@21:1/5 to All on Tue Aug 5 07:29:18 2025
    On Mon, 4 Aug 2025 12:38:52 -0700
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
    Lol. I asked an AI about it. Well, it seemed to "like" it and spit out
    some code:

    AI generated:

    I'm not even going to attempt to figure out wtf is going on there. Looks
    like it ripped off code from some project on github wholesale.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Chris M. Thomasson on Fri Aug 8 01:41:31 2025
    In article <1070lq1$3lg2n$3@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/6/2025 4:29 AM, Dan Cross wrote:
    In article <106udpt$3400b$1@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    [snip]
    No. If there is no other work to do, it falls back to lock(). If it did
    too much work, it falls back to lock.

    You're missing the point: there's no good way to determine
    whether _any_ amount of work is "too much".

    Fair enough. But, there is a chance to hit the fast path, so to speak.

    You literally have no way of knowing that beforehand.

    try_lock fails, then the thread does a call to, say,
    GetQueuedCompletionStatusEx with a timeout of zero (aka try); if it
    succeeds, it can push the batch into thread local queues, then try the
    try_lock again and succeed.

    You are, again, missing the point. Not only does making a call
    to `GetQueuedCompletionStatusEx` involve a round trip through
    the kernel, in which case you may as well just block in the
    kernel and let another thread run, you seem to be missing the
    specific point that IOCP completion is irrelevant, because it is
    just one example.

    So, instead of waiting in the kernel for
    the mutex it did actual work, and god knows how many other threads got
    the mutex in the meantime. So, forward progress is guaranteed. The
    nature of the "tried" work must be known, and if it's compatible, so be
    it. If not, so be it. Fair enough?

    No. You keep saying that "forward progress is guaranteed"
    because you bound the total number of times that your locking
    thread can do "something else." But again, the point that you
    are missing is that any given "something else" can itself be
    completely unbounded, possibly in a way that is totally
    unknowable at the time you go to do it; see the lock ordering
    constraint violation I gave earlier as an example: there's no
    way you can know whether that will be problematic until runtime,
    and there's no way you can meaningfully assess "compatibility"
    in general.

    So even if you bound the number of times you'll do "something
    else", since you have no guarantee that ANY instance of other
    work is itself bounded, you can't make any guarantees about
    whether the scheme as a whole is bounded.

    And if it's really just, "for this specific critical section, if
    you fail to immediately acquire the lock, go try this other,
    very specific thing known to be safe in this specific context,
    and then try again...." then as I've said, this is well-known,
    but is also wholly unoriginal and thus uninteresting.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Chris M. Thomasson on Fri Aug 8 01:51:57 2025
    In article <1070laa$3lg2n$1@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/6/2025 6:51 AM, Bonita Montero wrote:
    Am 30.07.2025 um 07:54 schrieb Chris M. Thomasson:

    This is using the idea of trying to do other work while a mutex is
    contended. ...

    This situation almost never happens.
    It requires that the locking can be delayed an arbitrary time.

    Not an arbitrary time. It's bounded.

    The number of times you try to do "other work" may be bounded.
    But I don't see any reason why you think you can meaningfully
    assert that the work itself is usefully bounded. So no, it is not
    guaranteed to be bounded, in which case you must assume it is
    unbounded.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Chris M. Thomasson on Fri Aug 8 01:49:29 2025
    In article <1070m2s$3lg2n$4@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/6/2025 5:04 AM, Dan Cross wrote:
    In article <106udu9$3400b$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/5/2025 5:38 PM, Dan Cross wrote:
    In article <106u31a$31k65$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/5/2025 12:27 AM, Muttley@DastardlyHQ.org wrote:
    On Mon, 4 Aug 2025 12:34:08 -0700
    "Chris M. Thomasson" <chris.m.thomasson.1@gmail.com> wibbled:
    [snip]
    This already works with multiple threads, no problem. But, if a thread
    fails to acquire the mutex, it can check to see if it has any other work
    to do instead of just waiting in the kernel doing nothing, or spinning
    doing nothing... Make sense to you?

    The idea behind what you're trying to do is well-understood.

    In fact, this is what generally happens when you try to acquire
    a contended, blocking mutex: if the mutex is currently locked by
    some other thread, you mark the acquiring thread as blocked
    waiting on the mutex, and then yield control back to a scheduler
    or something; eventually the other thread will finish what it
    was doing and release the mutex, but part of releasing the mutex
    and making it available again is marking threads that were
    blocked on it runnable, thus opening them up to be scheduled
    again, where they can once again try to acquire the mutex.

    The twist here seems to be that, instead of immediately
    yielding, for some bounded number of times, you look at a work
    queue and try to find something else for the acquiring thread to
    do while the lock you want is unavailable. Regardless, the
    effect is more or less the same.

    Personally, I see little benefit, and a number of potential
    pitfalls. In particular, I see no reason why the bundled bits
    of work in this scheme can't cause any number of lock ordering
    violations and lead to all sorts of deadlock scenarios. Suppose
    for example, that I acquire some mutex, A, and then go to
    acquire a second mutex, B: B is currently held by some other
    thread, so I begin to spin, but I see that there is an item in
    my work queue, so I go to do that. The first thing that work
    item does is try to acquire mutex A, which I'm currently
    holding, leading to deadlock against A.

    One could perhaps assert that these work items are not allowed
    to take or hold locks, which could address this particular
    failure mode, but it's unclear how one might enforce this in
    general, and there may well be other, similar failure modes.

    [snip]
    The idea is instead of letting it wait, why not let it try to do some
    other work? Think of an adaptive mutex. On contention it will spin for a
    bounded number of times in user space before having to wait in the
    kernel. Well, instead of spinning, try to do something else, some real
    work? It will keep the thread hot, doing work instead of just waiting...

    Because while it's doing something else the lock may become free and then
    become acquired again, leading to a kind of race condition for some threads.

    Nope. A thread can only do something else for a bounded number of times.
    So, it's starvation free. No race condition.

    This is simply not true. Suppose that the "something else" is
    computationally intensive, or blocks the thread somehow (say, by
    executing a blocking system call). What if the "something else"
    just spins in an infinite loop, waiting for some event? Just
    because you bound the number of times you do "something else"
    does not mean you are free of starvation, because what the other
    thing actually does matters.

    Using this pattern, well, can be a nightmare, but can be somewhat
    beneficial at the same time?

    Yeah, so beneficial that it's been codified many times over:
    acquiring a contended blocking mutex yields, so that something
    else can run instead.

    If a programmer knew that a class of
    functions are compatible with the try_lock failed condition, well, why
    not give them a go? Better than spinning doing nothing, or god forbid,
    waiting in the kernel?

    The whole premise of a spinlock is that the critical section
    that it protects is short (like, a few instructions), so the
    overhead of spinning is smaller than the overhead of context
    switching to something else. If that is not true, you shouldn't
    be using a spinlock as your mutex primitive in the first place,
    and a blocking or adaptive mutex is more appropriate. By the
    way, there's nothing at all wrong with blocking in the kernel;
    given a reasonable OS, it's not like if *YOUR THREAD* isn't
    running the CPU will be idle or something.
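    The adaptive strategy contrasted above can be sketched in a few lines.
    The function name and spin limit are illustrative assumptions, not any
    particular library's implementation:

```cpp
#include <mutex>

// Minimal sketch of an adaptive acquire: spin on try_lock for a small
// bounded count (cheap when critical sections are a few instructions),
// then give up and block in lock(). Returns how many failed spins
// occurred before acquisition.
int adaptive_lock(std::mutex& m, int spin_limit = 100)
{
    for (int i = 0; i < spin_limit; ++i)
        if (m.try_lock())
            return i;                    // fast path: acquired while spinning
    m.lock();                            // slow path: sleep until available
    return spin_limit;
}
```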

    My try other work thing still works with short and/or long critical
    sections. Try other work replaces the spin aspect of an adaptive mutex
    with trying to do other "compatible" work. If no other work can be found
    to do, it falls back to a lock() anyway. If it does "too much" work, say
    5 or 10 work items, it locks. try_lock is just there to keep the pump
    running, so to speak...

    Again, just bounding the number of times you do "other work" is
    insufficient to guarantee forward progress. See the example of
    trying to recursively acquire a held mutex, leading to deadlock.
    The issue here is not that you only do things 5, 10, or 1000
    times, but that there is no meaningful way you could assess
    "compatibility" (not without solving the halting problem anyway)
    and so any given bit of "other work" could basically deadlock
    the whole thing.

    If your "other work" can take arbitrarily long, how is this
    mechanism any better than just letting the thread that wants to
    acquire the mutex block and letting another thread chew through
    your work queue?

    But if you still want to avoid blocking when a mutex is
    unavailable, then you've got the idea inverted. Focusing on the
    locking protocol as some kind of generic place to do "other
    work" is too general to be useful, as has been shown. To be
    truly useful, this sort of approach must be coupled with
    knowledge of what a thread can safely do in context, which means
    that you really need to specialize the behavior for the
    _critical section_, not the _mutex_: the programmer knows why
    the thread is trying to acquire a mutex in that case, and what
    other work may or may not be acceptable to do if the mutex isn't
    immediately available. So they can craft the critical section
    to take advantage of that knowledge themselves. This sort of
    technique is well-known, by the way.

    But then it's not going to be particularly general. And you may
    argue that the critical section might be buried deep down in a
    call stack, very far away from the place where the programmer
    can reasonably make a decision about whether it's safe to do
    something or not, so that's a more appropriate place: but that
    reinforces the objections: if the critical section is so far
    away from the point where the programmer knows whether it's safe
    for the thread to do something else, then the programmer doesn't
    really know whether that thing is safe, because the intervening
    calls might have changed conditions so that it's not, so trying
    to plumb that knowledge down to the mutex code is again too
    dangerous to be useful.

    But even if the programmer could safely interpose doing
    "something else" in this way, they may not have full knowledge
    to know whether or not that will cause the thread to block.
    Consider, for example, a program that tries to do something
    innocuous, like zero a few bytes' worth of memory while
    trying to take a lock; what if the page holding the memory to be
    zero'ed has been stolen by the OS in the meantime, and now the
    program (totally unbeknownst to it) incurs a page fault that
    requires reading data from secondary storage to resolve? Not
    only has a single store now blocked your thread in the big scary
    kernel, but it's blocked it for many, many cycles.

    So while this technique is known and used in existing systems,
    it must be done so carefully that it really isn't appropriate as
    a general mechanism.

    Carefully is the key concept! Yikes... Shit... However, basically, it
    still boils down to this: trying other "work" on a spin can keep the thread
    "hot" doing other work, instead of spinning on the adaptive logic of a
    mutex, doing nothing?

    Keeping a thread "hot" without being able to make any sort of
    meaningful guarantee about the viability of the scheme as a
    general mechanism is not useful, when considering the scheme as
    a general mechanism.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Chris M. Thomasson on Sat Aug 9 09:17:25 2025
    In article <1073snp$en4a$3@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/7/2025 6:49 PM, Dan Cross wrote:
    In article <1070m2s$3lg2n$4@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    [snip]
    Carefully is the key concept! Yikes... Shit... However, basically, it
    still boils down to this: trying other "work" on a spin can keep the thread
    "hot" doing other work, instead of spinning on the adaptive logic of a
    mutex, doing nothing?

    Keeping a thread "hot" without being able to make any sort of
    meaningful guarantee about the viability of the scheme as a
    general mechanism is not useful, when considering the scheme as
    a general mechanism.

    A general mechanism? No. Oh no. I am not trying to do that at all. How
    about a "sometimes it might be okay" type of scenario? Argh! My mind keeps
    on thinking of other work that will not spin off into never never land,
    the nightmare thing I mentioned. It will not deadlock; actually, I'm not
    sure how it could deadlock in this highly special realm, anyway... That
    call to a try IOCP means that it has the "chance" to gain data, and checking
    thread-local queues has a chance to find other work while the mutex is
    in contention anyway via try_lock fail? Oh my.... I am grasping at
    straws here... But, it still might be okay for certain scenarios?

    Yes, you are grasping at straws.

    People have written code that tries to take a lock, and if that
    fails, does something else, for decades. The idea is neither
    new, nor terribly interesting. Sorry.

    Actually, way back, 20-25 years ago when I was messing around with proxy
    collectors... I had a base case that used mutexes only. No fancy
    atomics, etc... Pre C++11, etc... It worked. If a per-thread mutex
    try_lock failed it meant a reaper thread got a hold of it, and it should
    lock its next generation of mutexes, all per thread. This would allow
    for an RCU type of setup using mutexes only. Pretty fun.

    That vague, incomplete description sounds nothing like how RCU
    works.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Bonita.Montero@gmail.com on Sat Aug 9 16:08:27 2025
    In article <10775om$17ms4$1@raubtier-asyl.eternal-september.org>,
    Bonita Montero <Bonita.Montero@gmail.com> wrote:
    Am 09.08.2025 um 11:17 schrieb Dan Cross:

    People have written code that tries to take a lock, and if that
    fails, does something else, for decades. The idea is neither
    new, nor terribly interesting. Sorry.

    I think this pattern is extremely rare because the task that needs to
    be done elsewhere must be able to be omitted completely if necessary,
    and if not, it must be short enough that it doesn't interfere with
    waiting for the mutex.

    Yeah. That's why I mentioned concentrating on the specific
    critical section the mutex is protecting, not the locking
    protocol. Where this has any utility at all, it's in places
    where the "something else" is well-known, which implies it being
    a function of the critsec. It's going to be things like, "if I
    fail to acquire the scheduler lock, then increment this per-core
    atomic counter that's measuring contention...." sort of thing.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Chris M. Thomasson on Sun Aug 10 03:08:44 2025
    In article <10781l0$1eamj$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/9/2025 9:08 AM, Dan Cross wrote:
    In article <10775om$17ms4$1@raubtier-asyl.eternal-september.org>,
    Bonita Montero <Bonita.Montero@gmail.com> wrote:
    Am 09.08.2025 um 11:17 schrieb Dan Cross:

    People have written code that tries to take a lock, and if that
    fails, does something else, for decades. The idea is neither
    new, nor terribly interesting. Sorry.

    I think this pattern is extremely rare because the task that needs to
    be done elsewhere must be able to be omitted completely if necessary,
    and if not, it must be short enough that it doesn't interfere with
    waiting for the mutex.

    Yeah. That's why I mentioned concentrating on the specific
    critical section the mutex is protecting, not the locking
    protocol. Where this has any utility at all, it's in places
    where the "something else" is well-known, which implies it being
    a function of the critsec. It's going to be things like, "if I
    fail to acquire the scheduler lock, then increment this per-core
    atomic counter that's measuring contention...." sort of thing.

    More grasping at straws and/or trying to breathe underwater without scuba
    gear... The other work is very specialized. It can be as simple as
    processing a per-thread local queue or something. Just something that
    can act like a backoff that does real work in a case of contention. Sigh.

    That's the sort of thing where you cannot know what's on that
    "per-thread local queue" that makes your scheme fail.

    If your lock is contended, you could just sleep on it, and let
    the kernel schedule something else instead. There's no value to
    keeping your thread "hot" unless you've got a profile showing
    that it's an issue.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Chris M. Thomasson on Wed Aug 13 11:45:42 2025
    In article <107eu5g$33jcr$5@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/12/2025 1:23 AM, Bonita Montero wrote:
    Am 11.08.2025 um 22:30 schrieb Chris M. Thomasson:

    I gave some quick examples, perhaps you missed them. It could be
    batching thread local queues, checking to see if an
    GetQueuedCompletionStatusEx (timeout 0) succeeds, ...

    GetQueuedCompletionStatusEx() is an expensive kernel call, not something
    you do casually. Even a Sleep( 0 ), which is equivalent to a std::
    this_thread::yield(), costs about 1,000 clock cycles on a modern CPU.
    And GetQueuedCompletionStatusEx() isn't a function whose execution can
    be arbitrarily delayed or even prevented altogether. In other words:
    You're simply overthinking.
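    The cost point above is easy to observe directly. The ~1,000-cycle
    figure is the poster's estimate; this hypothetical measurement just
    shows the per-call cost is real and varies by OS and load:

```cpp
#include <chrono>
#include <thread>

// Measure the average cost of a bare yield, in nanoseconds per call.
// Even this "cheap" call is not free; heavier kernel calls like
// GetQueuedCompletionStatusEx cost considerably more.
long long avg_yield_ns(int iters = 10000)
{
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i)
        std::this_thread::yield();
    auto t1 = std::chrono::steady_clock::now();
    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0);
    return ns.count() / iters;
}
```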

    It seems quite fine though. try_lock fails... Sometimes go for a
    GetQueuedCompletionStatusEx with a 0 timeout, vs just going into the
    kernel anyway for the contention. Try some local queues before trying
    GetQueuedCompletionStatusEx a couple of times in the backoff, or
    spinning a la adaptive mutex logic doing nothing anyway... If it succeeds
    we get to sort the results into their per-thread queues; if not, we
    lock. If it succeeds too many times (too much work), we lock. If try_lock
    does not fail, well, we got the damn lock anyway. So be it?

    I don't think you have a good handle on the relative cost of
    these operations.

    You seem to be against spending a few cycles spinning, or
    against paying the one time cost of going to sleep in the kernel
    until your mutex is available, but willing to burn thousands of
    cycles going into the kernel over and over again to see if an IO
    port becomes ready so that you can keep your thread "hot".

    That's, frankly, not at all a good tradeoff.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Chris M. Thomasson on Wed Aug 13 11:41:35 2025
    In article <107dkai$2rg26$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/9/2025 8:08 PM, Dan Cross wrote:
    In article <10781l0$1eamj$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/9/2025 9:08 AM, Dan Cross wrote:
    In article <10775om$17ms4$1@raubtier-asyl.eternal-september.org>,
    Bonita Montero <Bonita.Montero@gmail.com> wrote:
    Am 09.08.2025 um 11:17 schrieb Dan Cross:

    People have written code that tries to take a lock, and if that
    fails, does something else, for decades. The idea is neither
    new, nor terribly interesting. Sorry.

    I think this pattern is extremely rare because the task that needs to
    be done elsewhere must be able to be omitted completely if necessary,
    and if not, it must be short enough that it doesn't interfere with
    waiting for the mutex.

    Yeah. That's why I mentioned concentrating on the specific
    critical section the mutex is protecting, not the locking
    protocol. Where this has any utility at all, it's in places
    where the "something else" is well-known, which implies it being
    a function of the critsec. It's going to be things like, "if I
    fail to acquire the scheduler lock, then increment this per-core
    atomic counter that's measuring contention...." sort of thing.

    More grasping at straws and/or trying to breathe underwater without scuba
    gear... The other work is very specialized. It can be as simple as
    processing a per-thread local queue or something. Just something that
    can act like a backoff that does real work in a case of contention. Sigh.
    That's the sort of thing where you cannot know what's on that
    "per-thread local queue" that makes your scheme fail.

    Well, that per-thread local queue can be for organization, maintenance,
    sending the results to a "logic" thread, etc. Think of a server io
    thread that takes the io completions and organizes them into local
    queues. These io threads do not call into user logic, or processing wrt
    say, parsing HTTP, or some protocol. So, some work that can be done
    after a try_lock fails can be to push the local queues to the logic threads.

    This, yet again, seems to be where you don't understand the
    issue at stake here.

    You can _probably_ find _any number of things_ that you can do
    while looping over `try_lock`. I don't care about the specific
    examples: they obviously exist.

    But what you _cannot_ do, and what makes this sort of scheme
    fail, is know that _all_ such work is safe. Unless you can wave
    a magic wand and solve the halting problem, you won't be able to
    know that some jerk won't send you "work" that amounts to a
    non-terminating infinite loop. You won't be able to know that
    some other jerk hasn't sent you "work" that causes you to
    deadlock trying to take a mutex you already hold. Etc.

    If you _ever_ open yourself up to accepting "work" on some kind
    of queue, where the types of all work that can possibly be sent
    to you is not known in advance _in some way that can be enforced
    in code_, then your scheme fails. And if you do have some way
    to enforce those constraints, then the scheme is neither novel
    nor particularly interesting.
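    One way to "enforce those constraints in code", in the sense demanded
    above, is to make the admissible work a closed set rather than an open
    queue of callables. This is a hypothetical sketch; the names are
    illustrative:

```cpp
// Restrict queued work to a closed enum of known, bounded actions handled
// exhaustively, so arbitrary callables (which might loop forever or
// re-take a held lock) can never be submitted.
enum class SafeWork { FlushLocalQueue, BumpCounter };

int flushed = 0;
int counter = 0;

void run(SafeWork w)
{
    switch (w) {
    case SafeWork::FlushLocalQueue: ++flushed; break; // known, bounded
    case SafeWork::BumpCounter:     ++counter; break; // known, bounded
    }
}
```

    As the post notes, once the work is constrained this tightly, the
    scheme is no longer novel; it is just an ordinary, well-understood
    pattern.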

    If your lock is contended, you could just sleep on it, and let
    the kernel schedule something else instead. There's no value to
    keeping your thread "hot" unless you've got a profile showing
    that it's an issue.

    But why sleep when the thread can do some real work?

    Because the OS (or even a user-level thread scheduler, or
    language runtime, or whatever) can find work to usefully do on
    that CPU, _without_ the downsides mentioned above. So you lose
    nothing but gain correctness and stability.

    Argh. Grasping
    again. Feels like I am drowning. Shit! ;^o

    Perhaps stop doing that, then. It's clear you are not going to
    find a way to make whatever partially developed idea you have
    work.

    It's ok to be wrong. It's less ok to keep trying to convince
    others that you are right.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Scott Lurndal@21:1/5 to Bonita Montero on Wed Aug 13 17:31:20 2025
    Bonita Montero <Bonita.Montero@gmail.com> writes:
    Am 13.08.2025 um 13:45 schrieb Dan Cross:

    I don't think you have a good handle on the relative cost of
    these operations.

    You seem to be against spending a few cycles spinning, or
    against paying the one time cost of going to sleep in the kernel
    until your mutex is available, but willing to burn thousands of
    cycles going into the kernel over and over again to see if an IO
    port becomes ready so that you can keep your thread "hot".

    That's, frankly, not at all a good tradeoff.

    Chris has too much fantasy.

    You're too critical.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Chris M. Thomasson on Mon Aug 18 20:26:10 2025
    In article <107ojfk$1e84t$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/13/2025 4:41 AM, Dan Cross wrote:
    In article <107dkai$2rg26$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/9/2025 8:08 PM, Dan Cross wrote:
    In article <10781l0$1eamj$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/9/2025 9:08 AM, Dan Cross wrote:
    In article <10775om$17ms4$1@raubtier-asyl.eternal-september.org>,
    Bonita Montero <Bonita.Montero@gmail.com> wrote:
    Am 09.08.2025 um 11:17 schrieb Dan Cross:

    People have written code that tries to take a lock, and if that
    fails, does something else, for decades. The idea is neither
    new, nor terribly interesting. Sorry.

    I think this pattern is extremely rare because the task that needs to
    be done elsewhere must be able to be omitted completely if necessary,
    and if not, it must be short enough that it doesn't interfere with
    waiting for the mutex.

    Yeah. That's why I mentioned concentrating on the specific
    critical section the mutex is protecting, not the locking
    protocol. Where this has any utility at all, it's in places
    where the "something else" is well-known, which implies it being
    a function of the critsec. It's going to be things like, "if I
    fail to acquire the scheduler lock, then increment this per-core
    atomic counter that's measuring contention...." sort of thing.

    More grasping at straws and/or trying to breathe underwater without scuba
    gear... The other work is very specialized. It can be as simple as
    processing a per-thread local queue or something. Just something that
    can act like a backoff that does real work in a case of contention. Sigh.
    That's the sort of thing where you cannot know what's on that
    "per-thread local queue" that makes your scheme fail.

    Well, that per-thread local queue can be for organization, maintenance,
    sending the results to a "logic" thread, etc. Think of a server io
    thread that takes the io completions and organizes them into local
    queues. These io threads do not call into user logic, or processing wrt
    say, parsing HTTP, or some protocol. So, some work that can be done
    after a try_lock fails can be to push the local queues to the logic threads.

    This, yet again, seems to be where you don't understand the
    issue at stake here.

    You can _probably_ find _any number of things_ that you can do
    while looping over `try_lock`. I don't care about the specific
    examples: they obviously exist.

    But what you _cannot_ do, and what makes this sort of scheme
    fail, is know that _all_ such work is safe. Unless you can wave
    a magic wand and solve the halting problem, you won't be able to
    know that some jerk won't send you "work" that amounts to a
    non-terminating infinite loop. You won't be able to know that
    some other jerk hasn't sent you "work" that causes you to
    deadlock trying to take a mutex you already hold. Etc.

    If you _ever_ open yourself up to accepting "work" on some kind
    of queue, where the types of all work that can possibly be sent
    to you is not known in advance _in some way that can be enforced
    in code_, then your scheme fails. And if you do have some way
    to enforce those constraints, then the scheme is neither novel
    nor particularly interesting.

    If your lock is contended, you could just sleep on it, and let
    the kernel schedule something else instead. There's no value to
    keeping your thread "hot" unless you've got a profile showing
    that it's an issue.

    But why sleep when the thread can do some real work?

    Because the OS (or even a user-level thread scheduler, or
    language runtime, or whatever) can find work to usefully do on
    that CPU, _without_ the downsides mentioned above. So you lose
    nothing but gain correctness and stability.

    Argh. Grasping
    again. Feels like I am drowning. Shit! ;^o

    Perhaps stop doing that, then. It's clear you are not going to
    find a way to make whatever partially developed idea you have
    work.

    It should be okay if the other work is okay with it?

    Again: you have no way of knowing that "the other work is okay
    with it".

    It's ok to be wrong. It's less ok to keep trying to convince
    others that you are right.

    It just seems to be okay, well, after a try_lock fails try to do
    something that is _compatible_ instead of something like an adaptive
    mutex? Shit. Yikes!

    There I go again. Sorry.

    [...]

    Please stop.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Chris M. Thomasson on Mon Aug 18 20:39:49 2025
    In article <10802os$3bdl8$1@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/18/2025 1:25 PM, Dan Cross wrote:
    In article <107ojb9$1e84t$1@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/13/2025 4:41 AM, Dan Cross wrote:
    In article <107dkai$2rg26$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:

    When you get bored, give my lock-free proxy collector a spin...
    https://pastebin.com/raw/CYZ78gVj

    No need, I'm afraid.

    You don't like proxy collectors? Or you think mine is shit?

    Sorry, I'm just not sure you're a useful source of code like
    this. It's subtle and requires a lot of in-depth understanding
    of very thorny issues that it doesn't strike me you have a good
    handle on.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Chris M. Thomasson on Mon Aug 18 20:44:42 2025
    In article <108031c$3bdl8$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/18/2025 1:26 PM, Dan Cross wrote:
    In article <107ojfk$1e84t$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/13/2025 4:41 AM, Dan Cross wrote:
    In article <107dkai$2rg26$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/9/2025 8:08 PM, Dan Cross wrote:
    In article <10781l0$1eamj$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/9/2025 9:08 AM, Dan Cross wrote:
    In article <10775om$17ms4$1@raubtier-asyl.eternal-september.org>, >>>>>>>> Bonita Montero <Bonita.Montero@gmail.com> wrote:
    Am 09.08.2025 um 11:17 schrieb Dan Cross:

    People have written code that tries to take a lock, and if that
    fails, does something else, for decades. The idea is neither
    new, nor terribly interesting. Sorry.

    I think this pattern is extremely rare because the task that needs to
    be done elsewhere must be able to be omitted completely if necessary,
    and if not, it must be short enough that it doesn't interfere with
    waiting for the mutex.

    Yeah. That's why I mentioned concentrating on the specific
    critical section the mutex is protecting, not the locking
    protocol. Where this has any utility at all, it's in places
    where the "something else" is well-known, which implies it being >>>>>>>> a function of the critsec. It's going to be things like, "if I >>>>>>>> fail to acquire the scheduler lock, then increment this per-core >>>>>>>> atomic counter that's measuring contention...." sort of thing.

    More grasping at straws and/or trying to breathe underwater without scuba
    gear... The other work is very specialized. It can be as simple as
    processing a per-thread local queue or something. Just something that
    can act like a backoff that does real work in a case of contention. Sigh.

    That's the sort of thing where you cannot know what's on that
    "per-thread local queue" that makes your scheme fail.

    Well, that per-thread local queue can be for organization, maintenance,
    sending the results to a "logic" thread, etc. Think of a server io
    thread that takes the io completions and organizes them into local
    queues. These io threads do not call into user logic, or processing wrt
    say, parsing HTTP, or some protocol. So, some work that can be done
    after a try_lock fails can be to push the local queues to the logic threads.

    This, yet again, seems to be where you don't understand the
    issue at stake here.

    You can _probably_ find _any number of things_ that you can do
    while looping over `try_lock`. I don't care about the specific
    examples: they obviously exist.

    But what you _cannot_ do, and what makes this sort of scheme
    fail, is know that _all_ such work is safe. Unless you can wave
    a magic wand and solve the halting problem, you won't be able to
    know that some jerk won't send you "work" that amounts to a
    non-terminating infinite loop. You won't be able to know that
    some other jerk hasn't sent you "work" that causes you to
    deadlock trying to take a mutex you already hold. Etc.

    If you _ever_ open yourself up to accepting "work" on some kind
    of queue, where the types of all work that can possibly be sent
    to you is not known in advance _in some way that can be enforced
    in code_, then your scheme fails. And if you do have some way
    to enforce those constraints, then the scheme is neither novel
    nor particularly interesting.

    If your lock is contended, you could just sleep on it, and let
    the kernel schedule something else instead. There's no value to
    keeping your thread "hot" unless you've got a profile showing
    that it's an issue.

    But why sleep when the thread can do some real work?

    Because the OS (or even a user-level thread scheduler, or
    language runtime, or whatever) can find work to usefully do on
    that CPU, _without_ the downsides mentioned above. So you lose
    nothing but gain correctness and stability.

    Argh. Grasping
    again. Feels like I am drowning. Shit! ;^o

    Perhaps stop doing that, then. It's clear you are not going to
    find a way to make whatever partially developed idea you have
    work.

    It should be okay if the other work is okay with it?

    Again: you have no way of knowing that "the other work is okay
    with it".

    Sure you can in highly controlled scenarios? A programmer can say, this
    type of work can be done in this condition because the programmer knows
    what the work consists of... I listed a few of them before. I am not
    going for some sort of general case type of thing.


    It's ok to be wrong. It's less ok to keep trying to convince
    others that you are right.

    It just seems to be okay, well, after a try_lock fails try to do
    something that is _compatible_ instead of something like an adaptive
    mutex? Shit. Yikes!

    There I go again. Sorry.

    [...]

    Please stop.

    It sure seems like it cannot be 100% wrong all the time. Having trouble
    seeing that.

    It's already in the above quoted text. The relevant section is
    here:

    |You can _probably_ find _any number of things_ that you can do
    |while looping over `try_lock`. I don't care about the specific
    |examples: they obviously exist.
    |
    |But what you _cannot_ do, and what makes this sort of scheme
    |fail, is know that _all_ such work is safe. Unless you can wave
    |a magic wand and solve the halting problem, you won't be able to
    |know that some jerk won't send you "work" that amounts to a
    |non-terminating infinite loop. You won't be able to know that
    |some other jerk hasn't sent you "work" that causes you to
    |deadlock trying to take a mutex you already hold. Etc.
    |
    |If you _ever_ open yourself up to accepting "work" on some kind
    |of queue, where the types of all work that can possibly be sent
    |to you is not known in advance _in some way that can be enforced
    |in code_, then your scheme fails. And if you do have some way
    |to enforce those constraints, then the scheme is neither novel
    |nor particularly interesting.

    Just saying, "yay! I can do something while spinning on a lock!"
    is so vacuously obvious that it's simply uninteresting; people
    have done such things for many, many years. Your error is in
    trying to extrapolate that to "other" work.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Chris M. Thomasson on Mon Aug 18 20:25:11 2025
    In article <107ojb9$1e84t$1@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/13/2025 4:41 AM, Dan Cross wrote:
    In article <107dkai$2rg26$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:

    When you get bored, give my lock-free proxy collector a spin...
    https://pastebin.com/raw/CYZ78gVj

    No need, I'm afraid.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Chris M. Thomasson on Mon Aug 18 22:56:50 2025
    In article <108067e$3ccku$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/18/2025 1:39 PM, Dan Cross wrote:
    In article <10802os$3bdl8$1@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/18/2025 1:25 PM, Dan Cross wrote:
    In article <107ojb9$1e84t$1@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/13/2025 4:41 AM, Dan Cross wrote:
    In article <107dkai$2rg26$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:

    When you get bored, give my lock-free proxy collector a spin...
    https://pastebin.com/raw/CYZ78gVj

    No need, I'm afraid.

    You don't like proxy collectors? Or you think mine is shit?

    Sorry, I'm just not sure you're a useful source of code like
    this. It's subtle and requires a lot of in-depth understanding
    of very thorny issues that it doesn't strike me you have a good
    handle on.

    Fwiw, some of my math is taking notice. An example:

    https://paulbourke.net/fractals/multijulia/

    Also, I made it into the cover of the AMS calendar. They even put my
    work on their webpage:

    https://www.ams.org/publicoutreach/math-imagery/math-imagery

    Notice my name?

    No, I don't. Using their integrated search feature for the
    string, "Thomasson" only turns up a few links to an actual
    mathematician working on graph theory.

    In fact, most of the links you are posting don't seem to involve
    you at all. The only one that actually seems to refer to you is
    a short note on Vyukov's page; I have no idea why that's there.
    It seems rather old.

    Bluntly, I'm really not all that interested.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Chris M. Thomasson on Mon Aug 18 23:57:15 2025
    In article <1080ehn$3ek5d$1@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/18/2025 4:44 PM, Dan Cross wrote:
    In article <1080bs0$3dqgt$3@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/18/2025 3:56 PM, Dan Cross wrote:
    In article <108067e$3ccku$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    Notice my name?

    No, I don't.

    Look again. Notice the graphic on the webpage:

    https://i.ibb.co/23gKYK0w/image.png

    They give me credit for my field line renderer called iWire.

    Sorry, I'm just not that interested.

    I don't really see what any of this has to do with C++.

    The only thing is I used C++ to implement iWire. ;^) Anyway, I have a
    lot of experience with lock/wait/obstruction-free algos. A fun part was
    helping Dmitriy find bugs in his Relacy project when he first created it
    back on c.p.t.

    Like I said, I'm not really interested. And besides, this is
    comp.lang.c++; none of this feels at all relevant.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Chris M. Thomasson on Mon Aug 18 23:44:22 2025
    In article <1080bs0$3dqgt$3@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    On 8/18/2025 3:56 PM, Dan Cross wrote:
    In article <108067e$3ccku$2@dont-email.me>,
    Chris M. Thomasson <chris.m.thomasson.1@gmail.com> wrote:
    Notice my name?

    No, I don't.

    Look again. Notice the graphic on the webpage:

    https://i.ibb.co/23gKYK0w/image.png

    They give me credit for my field line renderer called iWire.

    Sorry, I'm just not that interested.

    I don't really see what any of this has to do with C++.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)