This one does not use any thread locals. It's a proxy collector that
does not use any CAS. It would also be interesting to test it against
other asymmetric proxy algorithms under heavy load. Can you get it to
compile and run on your end? Thanks.
https://pastebin.com/raw/CYZ78gVj
(raw text link, no ads... :^)
____________________________________
// Chris M. Thomasson's Poor Man's RCU... Example 456...
____________________________________
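For anyone skimming without following the link: below is a rough,
hedged sketch of the general shape of such a collector, written from
scratch for this post. It is NOT the pastebin code; the names
(generation, g_anchor, acquire/release/flip) and the single-writer
assumption are mine. It illustrates the same claim, though: readers use
one fetch_add to enter and one fetch_sub to leave, the writer retires a
generation with an exchange, and no CAS loops appear anywhere.
____________________________________
// proxy_sketch.cpp - a minimal sketch of the general idea only, NOT the
// code behind the pastebin link. Readers enter and leave with a single
// fetch_add/fetch_sub each, and a single writer flips generations with
// an exchange, so no CAS loops appear anywhere. All names here
// (generation, g_anchor, acquire/release/flip) are invented for this
// sketch, and everything runs at the default seq_cst for simplicity.
#include <atomic>
#include <cstdint>
#include <cstdio>

struct node { node* next = nullptr; };

struct generation {
    // Outstanding-reader balance: readers subtract 1 on release, and the
    // writer adds the acquire count captured at flip time. For a retired
    // generation it reaches zero exactly once, when the last reader that
    // saw this generation is gone.
    std::atomic<intptr_t> inner{0};
    std::atomic<bool> quiesced{true}; // slot may be reused by the writer
    node* defer = nullptr;            // writer-owned defer list

    void quiesce() { // free the deferred nodes, mark the slot reusable
        for (node* n = defer; n != nullptr;) {
            node* next = n->next;
            delete n;
            n = next;
        }
        defer = nullptr;
        quiesced.store(true);
    }
};

static generation g_gens[2];
static std::atomic<uintptr_t> g_anchor{0}; // bit 0: index, bits 1+: count

// Reader side: the acquire count and the current index live in one word,
// so a single fetch_add atomically picks the current generation and
// registers the reader with it.
unsigned acquire() {
    return (unsigned)(g_anchor.fetch_add(2) & 1u);
}

void release(unsigned idx) {
    if (g_gens[idx].inner.fetch_sub(1) == 1) // 1 -> 0: last reference
        g_gens[idx].quiesce();
}

// Writer side (single writer assumed): unlinked nodes are deferred to
// the current generation, and flip() eventually retires it.
void defer_node(node* n) {
    unsigned idx = (unsigned)(g_anchor.load() & 1u);
    n->next = g_gens[idx].defer;
    g_gens[idx].defer = n;
}

void flip() {
    unsigned cur = (unsigned)(g_anchor.load() & 1u);
    unsigned nxt = cur ^ 1u;
    while (!g_gens[nxt].quiesced.load()) {} // wait until the slot drained
    g_gens[nxt].quiesced.store(false);
    uintptr_t old = g_anchor.exchange((uintptr_t)nxt); // flip, grab count
    intptr_t acquires = (intptr_t)(old >> 1);
    // Transfer the captured acquire count into the retired generation.
    // If every such reader has already released, the balance hits zero
    // right here and the writer itself quiesces the generation.
    if (g_gens[cur].inner.fetch_add(acquires) + acquires == 0)
        g_gens[cur].quiesce();
}

int main() {
    g_gens[0].quiesced.store(false); // generation 0 starts as current
    unsigned idx = acquire();        // a reader enters
    defer_node(new node);            // the writer retires a node
    release(idx);                    // the reader leaves
    flip();                          // generation 0 drains; node is freed
    std::puts("done");
}
____________________________________
Note the asymmetry: the reader side is a couple of wait-free RMWs, while
flip() may spin waiting for the target slot to quiesce. That writer-side
cost is exactly where different asymmetric proxy algorithms tend to
diverge under heavy load.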
On 12/2/2024 3:36 PM, Chris M. Thomasson wrote:
On 12/2/2024 12:26 PM, Paavo Helde wrote:
On 02.12.2024 00:12, Chris M. Thomasson wrote:
This is not using any thread locals. It's a proxy collector that
does not use any CAS. Also, it's interesting to test against other
asymmetric proxy algorithms under heavy load. Can you get it to
compile and run on your end? Thanks.
https://pastebin.com/raw/CYZ78gVj
(raw text link, no ads... :^)
On Windows x86_64 with VS2022:
Chris M. Thomassons Proxy Collector Port ver .0.0.2...
_______________________________________
Booting threads...
Threads running...
Threads completed!
node_allocations = 92400000
node_deallocations = 92400000
dtor_collect = 3
release_collect = 121
quiesce_complete = 124
quiesce_begin = 124
quiesce_complete_nodes = 92400000
Test Completed!
Thanks! The numbers are more in line with what I usually get (refer to
my response to Louis Krupp), namely quiesce_complete_nodes ==
node_deallocations. Only some of my test experiments result in
quiesce_complete_nodes != node_deallocations. Nothing is wrong; I just
forgot to account for nodes that are still present when the dtors of
the proxy and its collector objects run.
To clarify, all nodes that were allocated are deallocated, so there is
no node leak. It's interesting to me when quiesce_complete_nodes !=
node_deallocations: it means there were nodes left in the defer lists
that get destroyed during the proxy and collector dtors, where they dump
their defer lists. I need to add a new debug/sanity counter to account
for that condition. My sanity check should account for everything, not
just the node allocations and deallocations. These counters help me
"see" how the system is being used across various use cases.
Your run helped me for sure. Thank you Paavo and Louis.
:^)