• Re: Proxy test code...

    From Louis Krupp@21:1/5 to Chris M. Thomasson on Mon Dec 2 12:44:32 2024
    On 12/1/2024 3:12 PM, Chris M. Thomasson wrote:
    This is not using any thread locals. It's a proxy collector that does
    not use any CAS. Also, it's interesting to test against other
    asymmetric proxy algorithms under heavy load. Can you get it to
    compile and run on your end? Thanks.

    https://pastebin.com/raw/CYZ78gVj
    (raw text link, no ads... :^)

    ____________________________________

    // Chris M. Thomassons Poor Mans RCU... Example 456...

    <snip>
    ____________________________________
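
    The pastebin source is snipped above, so here's a minimal sketch of
    the core idea being described: readers pin the collector with a
    single fetch_add on a packed count word, so the acquire path needs
    no CAS loop and no thread-local state. This is NOT the pastebin
    code; every name in it is illustrative.

    ____________________________________

    // Sketch only -- not the code behind the link above.
    // Idea: a CAS-free proxy acquire/release via fetch_add.
    #include <atomic>
    #include <cstdint>

    struct proxy {
        // Low bits hold a reference count; high bits could select the
        // current collector node (the packing scheme is illustrative).
        std::atomic<std::uint64_t> current{0};

        static constexpr std::uint64_t REF = 1; // one reader reference

        // Acquire: one wait-free fetch_add both bumps the count and
        // returns which collector state we pinned -- no CAS, no TLS.
        std::uint64_t acquire() {
            return current.fetch_add(REF, std::memory_order_acquire);
        }

        // Release: drop our reference. A real collector must release
        // against the state pinned at acquire time (which may no
        // longer be current) and free its defer list when the count
        // hits zero; that bookkeeping is omitted here.
        void release(std::uint64_t pinned) {
            (void)pinned;
            current.fetch_sub(REF, std::memory_order_release);
        }
    };
    ____________________________________

    The appeal of the single fetch_add is that the reader's acquire is
    wait-free and never retries, unlike a CAS loop that can spin under
    heavy reader contention.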

    Using g++ (GCC) 14.2.1 20240912 (Red Hat 14.2.1-3):

    ===
    Chris M. Thomassons Proxy Collector Port ver .0.0.2...
    _______________________________________

    Booting threads...
    Threads running...
    Threads completed!

    node_allocations = 92400000
    node_deallocations = 92400000

    dtor_collect = 7
    release_collect = 140
    quiesce_complete = 147
    quiesce_begin = 147
    quiesce_complete_nodes = 92200000


    Test Completed!

    ===

    Louis

  • From Paavo Helde@21:1/5 to Chris M. Thomasson on Mon Dec 2 22:26:01 2024
    On 02.12.2024 00:12, Chris M. Thomasson wrote:
    This is not using any thread locals. It's a proxy collector that does
    not use any CAS. Also, it's interesting to test against other
    asymmetric proxy algorithms under heavy load. Can you get it to
    compile and run on your end? Thanks.

    https://pastebin.com/raw/CYZ78gVj
    (raw text link, no ads... :^)


    On Windows x86_64 with VS2022:

    Chris M. Thomassons Proxy Collector Port ver .0.0.2...
    _______________________________________

    Booting threads...
    Threads running...
    Threads completed!

    node_allocations = 92400000
    node_deallocations = 92400000

    dtor_collect = 3
    release_collect = 121
    quiesce_complete = 124
    quiesce_begin = 124
    quiesce_complete_nodes = 92400000


    Test Completed!

  • From jseigh@21:1/5 to Chris M. Thomasson on Tue Dec 3 10:03:10 2024
    On 12/2/24 18:41, Chris M. Thomasson wrote:
    On 12/2/2024 3:36 PM, Chris M. Thomasson wrote:
    On 12/2/2024 12:26 PM, Paavo Helde wrote:
    On 02.12.2024 00:12, Chris M. Thomasson wrote:
    This is not using any thread locals. It's a proxy collector that
    does not use any CAS. Also, it's interesting to test against other
    asymmetric proxy algorithms under heavy load. Can you get it to
    compile and run on your end? Thanks.

    https://pastebin.com/raw/CYZ78gVj
    (raw text link, no ads... :^)


    On Windows x86_64 with VS2022:

    Chris M. Thomassons Proxy Collector Port ver .0.0.2...
    _______________________________________

    Booting threads...
    Threads running...
    Threads completed!

    node_allocations = 92400000
    node_deallocations = 92400000

    dtor_collect = 3
    release_collect = 121
    quiesce_complete = 124
    quiesce_begin = 124
    quiesce_complete_nodes = 92400000


    Test Completed!


    Thanks! These numbers are more in line with what I usually get (refer
    to my response to Louis Krupp), where quiesce_complete_nodes ==
    node_deallocations. Only some of my test experiments result in
    quiesce_complete_nodes != node_deallocations. Nothing is wrong; I
    just forgot to account for nodes that are still present during the
    dtors of the proxy and its collector objects.

    To clarify, every node that was allocated gets deallocated, so there
    is no node leak. When quiesce_complete_nodes != node_deallocations,
    it means nodes were left in the defer lists and got destroyed when
    the proxy and collector dtors dumped those lists. I need to add a new
    debug/sanity counter to account for that condition, as sketched
    below. My sanity checks should account for everything, not just node
    allocations and deallocations; the counters help me "see" how the
    system behaves across various use cases.


    Your run helped me for sure. Thank you Paavo and Louis.

    :^)

    I use the following kind of output:

    testcase: smr
    Statistics:
    reader thread count = 4
    read_count = 400,000,000
    elapsed cpu read_time = 21,235,330,270 nsecs
    avg cpu read_time = 53.088 nsecs
    elapsed read_time = 5,319,022,275 nsecs
    avg elapsed read time = 53.190 nsecs
    data state counts:
    live = 399,998,095
    stale = 1,905
    invalid = 0
    other = 0
    retire_count = 1,053
    elapsed retire_time = 5,323,396,964 nsecs
    avg retire_time = 5,055.458 usecs
    data allocs = 1,053
    data deletes = 1,053
    voluntary context switches = 7
    involuntary context switches = 37
    user cpu time = 21,215,620,000 nsecs
    system cpu time = 19,971,000 nsecs

    The data state counts tell me whether the reads were valid: "live"
    and "stale" reads are valid; "invalid" and "other" are not.
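
    For context, a minimal sketch of how such state counts can be
    gathered: each object carries a state word and readers tally what
    they observe. The enum and field names are my assumptions, not Joe's
    actual code.

    ____________________________________

    // Sketch: classify each read. "stale" = object already replaced
    // but its memory still protected; "invalid"/"other" = evidence of
    // a reclamation bug (read-after-free or reused memory).
    #include <atomic>
    #include <cstdint>

    enum data_state : std::uint32_t { LIVE = 1, STALE = 2, INVALID = 3 };

    struct data {
        std::atomic<std::uint32_t> state{LIVE};
    };

    struct state_counts {
        std::uint64_t live = 0, stale = 0, invalid = 0, other = 0;

        void observe(const data& d) {
            switch (d.state.load(std::memory_order_acquire)) {
            case LIVE:    ++live;    break;
            case STALE:   ++stale;   break;
            case INVALID: ++invalid; break; // caught read-after-free
            default:      ++other;   break; // garbage: memory reused
            }
        }
    };
    ____________________________________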

    The read time is for lock/access data/unlock. You need to
    compare it to unlocked access to measure the locking overhead.
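
    A sketch of that comparison (the "unsafe" run below is Joe's version
    of it): time the same read loop with and without the lock, and
    difference the per-read averages. reader_lock/reader_unlock are
    illustrative stand-ins for whatever proxy is under test, not his
    API; the cpu times and context-switch counts in the report above are
    the kind of thing getrusage() reports on POSIX.

    ____________________________________

    // Sketch: per-read locking overhead = locked avg - unlocked avg.
    #include <atomic>
    #include <chrono>
    #include <cstdio>

    static std::atomic<int> g_refs{0};
    static int g_data = 42;
    static volatile int g_sink;

    static void reader_lock()   { g_refs.fetch_add(1, std::memory_order_acquire); }
    static void reader_unlock() { g_refs.fetch_sub(1, std::memory_order_release); }
    static void access_data()   { g_sink = g_data; }

    template <class F>
    static double avg_nsecs(F&& body, long iters) {
        auto t0 = std::chrono::steady_clock::now();
        for (long i = 0; i < iters; ++i) body();
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::nano>(t1 - t0).count()
               / static_cast<double>(iters);
    }

    int main() {
        const long n = 100000000L;
        double locked = avg_nsecs([] {
            reader_lock(); access_data(); reader_unlock(); }, n);
        double unsafe = avg_nsecs([] { access_data(); }, n);
        std::printf("locked %.3f ns, unsafe %.3f ns, overhead %.3f ns\n",
                    locked, unsafe, locked - unsafe);
        return 0;
    }
    ____________________________________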

    testcase: unsafe
    Statistics:
    reader thread count = 4
    read_count = 400,000,000
    elapsed cpu read_time = 20,833,077,755 nsecs
    avg cpu read_time = 52.083 nsecs
    elapsed read_time = 5,220,600,403 nsecs
    avg elapsed read time = 52.206 nsecs
    data state counts:
    live = 399,998,426
    stale = 1,567
    invalid = 0
    other = 7
    retire_count = 1,034
    elapsed retire_time = 39,858 nsecs
    avg retire_time = 0.039 usecs
    data allocs = 1,034
    data deletes = 1,034
    voluntary context switches = 4
    involuntary context switches = 21
    user cpu time = 20,813,376,000 nsecs
    system cpu time = 19,974,000 nsecs


    Joe Seigh
