On 9/15/2024 11:54 AM, Paavo Helde wrote:
[...]
So, what do you think? Should I just use std::atomic<std::shared_ptr>
instead? Any other suggestions? Did I get the memory order parameters
right in compare_exchange_weak()?
Keep in mind that you need to make sure that
std::atomic<std::shared_ptr> is actually lock-free...
Make sure to investigate is_always_lock_free on your various archs:
https://en.cppreference.com/w/cpp/atomic/atomic/is_always_lock_free
On Sun, 15 Sep 2024 21:54:50 +0300
Paavo Helde <eesnimi@osa.pri.ee> boringly babbled:
So, what do you think? Should I just use std::atomic<std::shared_ptr>
instead? Any other suggestions? Did I get the memory order parameters
right in compare_exchange_weak()?
Looks to me like you're simply protecting the pointers rather than the data they're actually pointing to, but I only skim-read it.
Meanwhile I think I found a bug in my posted code: I should probably use
compare_exchange_strong() instead of compare_exchange_weak(). I somehow
thought the latter would err on the safe side, but it does not. In my
test harness there seems to be no difference.
On 9/15/2024 11:54 AM, Paavo Helde wrote:
I am thinking of developing some lock-free data structures for better
scaling on multi-core hardware and avoiding potential deadlocks. In
particular, I have got a lot of classes which are mostly immutable
after construction, except for some cached data members which are
calculated on demand only, then stored in the object for later use.
Caching single numeric values is easy. However, some cached data is
large and accessed via a std::shared_ptr type refcounted
smartpointers. Updating such a smartpointer in a thread-shared object
is a bit more tricky. There is a std::atomic<std::shared_ptr> in
C++20, but I wonder if I can do a bit better by providing my own
implementation which uses CAS on a single pointer (instead of DCAS
with additional data fields or other trickery).
This is assuming that
a) the cached value will not change any more after assigned, and will
stay intact until the containing object destruction;
b) it's ok if multiple threads calculate the value at the same time;
the first one stored will be the one which gets used.
My current prototype code is as follows (Ptr<T> is similar to
std::shared_ptr<T>, but is using an internal atomic refcounter; using
an internal counter allows me to generate additional smartpointers
from a raw pointer).
template<typename T>
class CachedAtomicPtr {
public:
CachedAtomicPtr(): ptr_(nullptr) {}
/// Store p in *this if *this is not yet assigned.
/// Return pointer stored in *this, which can be \a p or not.
Ptr<T> AssignIfNull(Ptr<T> p) {
const T* other = nullptr;
if (ptr_.compare_exchange_weak(other, p.get(),
std::memory_order_release, std::memory_order_acquire)) {
p->IncrementRefcount();
return p;
} else {
// wrap in an extra smartptr (increments refcount)
return Ptr<T>(other);
}
}
/// Return pointer stored in *this,
Ptr<T> Load() const {
return Ptr<T>(ptr_);
}
~CachedAtomicPtr() {
if (const T* ptr = ptr_) {
ptr->DecrementRefcount();
}
}
private:
std::atomic<const T*> ptr_;
};
Example usage:
/// Objects of this class are in shared use by multiple threads.
class A {
// Returns B corresponding to the value of *this.
// If not yet in cache, B is calculated and cached in *this.
// Calculating can happen in multiple threads in parallel,
// the first cached result will be used in all threads.
Ptr<B> GetOrCalcB() const {
Ptr<B> b = cached_.Load();
if (!b) {
b = cached_.AssignIfNull(CalcB());
}
return b;
}
// ...
private:
// Calculates cached B object according to value of *this.
Ptr<B> CalcB() const;
private:
mutable CachedAtomicPtr<B> cached_;
// ... own data ...
};
So, what do you think? Should I just use std::atomic<std::shared_ptr>
instead? Any other suggestions? Did I get the memory order parameters
right in compare_exchange_weak()?
I need to look at this when I get some time. Been very busy lately.
Humm... Perhaps, when you get some free time to burn... Try to model it
in Relacy and see what happens:
https://www.1024cores.net/home/relacy-race-detector/rrd-introduction
https://groups.google.com/g/relacy
On 9/16/2024 2:31 PM, Paavo Helde wrote:
On 15.09.2024 23:13, Chris M. Thomasson wrote:
On 9/15/2024 11:54 AM, Paavo Helde wrote:
I am thinking of developing some lock-free data structures for
better scaling on multi-core hardware and avoiding potential
deadlocks. In particular, I have got a lot of classes which are
mostly immutable after construction, except for some cached data
members which are calculated on demand only, then stored in the
object for later use.
Caching single numeric values is easy. However, some cached data is
large and accessed via a std::shared_ptr type refcounted
smartpointers. Updating such a smartpointer in a thread-shared
object is a bit more tricky. There is a std::atomic<std::shared_ptr>
in C++20, but I wonder if I can do a bit better by providing my own
implementation which uses CAS on a single pointer (instead of DCAS
with additional data fields or other trickery).
This is assuming that
a) the cached value will not change any more after assigned, and
will stay intact until the containing object destruction;
b) it's ok if multiple threads calculate the value at the same time;
the first one stored will be the one which gets used.
My current prototype code is as follows (Ptr<T> is similar to
std::shared_ptr<T>, but is using an internal atomic refcounter;
using an internal counter allows me to generate additional
smartpointers from a raw pointer).
template<typename T>
class CachedAtomicPtr {
public:
CachedAtomicPtr(): ptr_(nullptr) {}
/// Store p in *this if *this is not yet assigned.
/// Return pointer stored in *this, which can be \a p or not.
Ptr<T> AssignIfNull(Ptr<T> p) {
const T* other = nullptr;
if (ptr_.compare_exchange_weak(other, p.get(),
std::memory_order_release, std::memory_order_acquire)) {
p->IncrementRefcount();
return p;
} else {
// wrap in an extra smartptr (increments refcount)
return Ptr<T>(other);
}
}
So as long as CachedAtomicPtr is alive, the cached pointer, the one that
gets installed in your AssignIfNull function, will be alive?
Sorry if my
question sounds stupid or something. Just trying to get a handle on your
usage pattern. Also, the first pointer installed in CachedAtomicPtr will remain that way for the entire duration of the lifetime of said CachedAtomicPtr instance?
On 9/16/2024 2:46 PM, Chris M. Thomasson wrote:
On 9/16/2024 2:31 PM, Paavo Helde wrote:
On 15.09.2024 23:13, Chris M. Thomasson wrote:
On 9/15/2024 11:54 AM, Paavo Helde wrote:
I am thinking of developing some lock-free data structures for
better scaling on multi-core hardware and avoiding potential
deadlocks. In particular, I have got a lot of classes which are
mostly immutable after construction, except for some cached data
members which are calculated on demand only, then stored in the
object for later use.
Caching single numeric values is easy. However, some cached data is
large and accessed via a std::shared_ptr type refcounted
smartpointers. Updating such a smartpointer in a thread-shared
object is a bit more tricky. There is a
std::atomic<std::shared_ptr> in C++20, but I wonder if I can do a
bit better by providing my own implementation which uses CAS on a
single pointer (instead of DCAS with additional data fields or
other trickery).
This is assuming that
a) the cached value will not change any more after assigned, and
will stay intact until the containing object destruction;
b) it's ok if multiple threads calculate the value at the same
time; the first one stored will be the one which gets used.
My current prototype code is as follows (Ptr<T> is similar to
std::shared_ptr<T>, but is using an internal atomic refcounter;
using an internal counter allows me to generate additional
smartpointers from a raw pointer).
template<typename T>
class CachedAtomicPtr {
public:
CachedAtomicPtr(): ptr_(nullptr) {}
/// Store p in *this if *this is not yet assigned.
/// Return pointer stored in *this, which can be \a p or not.
Ptr<T> AssignIfNull(Ptr<T> p) {
const T* other = nullptr;
if (ptr_.compare_exchange_weak(other, p.get(),
std::memory_order_release, std::memory_order_acquire)) {
p->IncrementRefcount();
return p;
} else {
// wrap in an extra smartptr (increments refcount)
return Ptr<T>(other);
}
}
So as long as CachedAtomicPtr is alive, the cached pointer, the one
that gets installed in your AssignIfNull function, will be alive?
Sorry if my question sounds stupid or something. Just trying to get a
handle on your usage pattern. Also, the first pointer installed in
CachedAtomicPtr will remain that way for the entire duration of the
lifetime of said CachedAtomicPtr instance?
So as long as CachedAtomicPtr stays alive, the refcount on its
successfully _installed_ pointer will always be +1. In other words, the
count can drop to zero wrt its installed smart pointer only _after_
CachedAtomicPtr has been destroyed and its dtor called? Am I getting
close or WAY off in the damn weeds somewhere out there.... ;^) ? So once
a smart pointer is installed in a CachedAtomicPtr, it will never change?
Am I right, or totally wrong?
On 9/16/2024 10:59 PM, Chris M. Thomasson wrote:
On 9/16/2024 10:54 PM, Paavo Helde wrote:[...]
template<typename T>
class CachedAtomicPtr {
public:
CachedAtomicPtr(): ptr_(nullptr) {}
/// Store p in *this if *this is not yet assigned.
/// Return pointer stored in *this, which can be \a p or not.
Ptr<T> AssignIfNull(Ptr<T> p) {
const T* other = nullptr;
if (ptr_.compare_exchange_weak(other, p.get(),
std::memory_order_release, std::memory_order_acquire)) {
p->IncrementRefcount();
return p;
} else {
// wrap in an extra smartptr (increments refcount)
return Ptr<T>(other);
}
^^^^^^^^^^^^^^^^^^
Is Ptr<T> an intrusive reference count? I assume it is.
On 9/17/2024 2:22 AM, Paavo Helde wrote:
On 17.09.2024 09:04, Chris M. Thomasson wrote:
On 9/16/2024 10:59 PM, Chris M. Thomasson wrote:
On 9/16/2024 10:54 PM, Paavo Helde wrote:[...]
template<typename T>
class CachedAtomicPtr {
public:
CachedAtomicPtr(): ptr_(nullptr) {}
/// Store p in *this if *this is not yet assigned.
/// Return pointer stored in *this, which can be \a p or not.
Ptr<T> AssignIfNull(Ptr<T> p) {
const T* other = nullptr;
if (ptr_.compare_exchange_weak(other, p.get(),
std::memory_order_release, std::memory_order_acquire)) {
p->IncrementRefcount();
return p;
} else {
// wrap in an extra smartptr (increments refcount)
return Ptr<T>(other);
}
^^^^^^^^^^^^^^^^^^
Is Ptr<T> an intrusive reference count? I assume it is.
Yes. Otherwise I could not generate new smartpointers from bare T*.
FYI, here is my current full compilable code together with a test
harness (no Relacy, could not get it working, so this just creates a
number of threads which make use of the CachedAtomicPtr objects in
parallel).
#include <cstddef>
#include <atomic>
#include <iostream>
#include <stdexcept>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>
/// debug instrumentation
std::atomic<int> gAcount = 0, gBcount = 0, gCASFailureCount = 0;
/// program exit code
std::atomic<int> exitCode = EXIT_SUCCESS;
void Assert(bool x) {
if (!x) {
throw std::logic_error("Assert failed");
}
}
class RefCountedBase {
public:
RefCountedBase(): refcount_(0) {}
RefCountedBase(const RefCountedBase&): refcount_(0) {}
RefCountedBase(RefCountedBase&&) = delete;
RefCountedBase& operator=(const RefCountedBase&) = delete;
RefCountedBase& operator=(RefCountedBase&&) = delete;
void Capture() const noexcept {
++refcount_;
}
void Release() const noexcept {
if (--refcount_ == 0) {
delete const_cast<RefCountedBase*>(this);
}
}
virtual ~RefCountedBase() {}
private:
mutable std::atomic<std::size_t> refcount_;
};
template<class T>
class Ptr {
public:
Ptr(): ptr_(nullptr) {}
explicit Ptr(const T* ptr): ptr_(ptr) { if (ptr_) { ptr_->Capture(); } }
Ptr(const Ptr& b): ptr_(b.ptr_) { if (ptr_) { ptr_->Capture(); } }
Ptr(Ptr&& b) noexcept: ptr_(b.ptr_) { b.ptr_ = nullptr; }
~Ptr() { if (ptr_) { ptr_->Release(); } }
Ptr& operator=(const Ptr& b) {
if (b.ptr_) { b.ptr_->Capture(); }
if (ptr_) { ptr_->Release(); }
ptr_ = b.ptr_;
return *this;
}
Ptr& operator=(Ptr&& b) noexcept {
if (ptr_) { ptr_->Release(); }
ptr_ = b.ptr_;
b.ptr_ = nullptr;
return *this;
}
const T* operator->() const { return ptr_; }
const T& operator*() const { return *ptr_; }
explicit operator bool() const { return ptr_!=nullptr; }
const T* get() const { return ptr_; }
private:
mutable const T* ptr_;
};
template<typename T>
class CachedAtomicPtr {
public:
CachedAtomicPtr(): ptr_(nullptr) {}
/// Store p in *this if *this is not yet assigned.
/// Return pointer stored in *this, which can be \a p or not.
Ptr<T> AssignIfNull(Ptr<T> p) {
const T* other = nullptr;
if (ptr_.compare_exchange_strong(other, p.get(),
std::memory_order_release, std::memory_order_acquire)) {
p->Capture();
Only one thread should ever get here, right? It just installed the
pointer p.get() into ptr_, right?
return p;
} else {
++gCASFailureCount;
return Ptr<T>(other);
}
}
Ptr<T> Load() const {
return Ptr<T>(ptr_);
}
Now this is the crux of a potential issue. Strong thread safety allows
a thread to take a reference even if it does not already own one. This
is not allowed under basic thread safety.
So, for example this scenario needs strong thread safety:
static atomic_ptr<foo> g_foo(nullptr);
thread_a()
{
g_foo = new foo();
}
thread_b()
{
local_ptr<foo> l_foo = g_foo;
if (l_foo) l_foo->bar();
}
thread_c()
{
g_foo = nullptr;
}
This example does not work with shared_ptr, but should work with
atomic<shared_ptr>; it should even be lock-free on archs that support
it. thread_b is taking a reference to g_foo when it does not already own
a reference.
So, basically you would need your CachedAtomicPtr to stay alive. Its
dtor should only be called after all threads that could potentially use
it are joined, and the program is about to end.
Or else, I think you are
going to need strong thread safety for the CachedAtomicPtr::Load
function to work in a general sense.
Just skimmed over it.
On 9/26/2024 3:14 AM, Paavo Helde wrote:
On 26.09.2024 08:49, Chris M. Thomasson wrote:
On 9/17/2024 2:22 AM, Paavo Helde wrote:
On 17.09.2024 09:04, Chris M. Thomasson wrote:
On 9/16/2024 10:59 PM, Chris M. Thomasson wrote:
On 9/16/2024 10:54 PM, Paavo Helde wrote:[...]
template<typename T>
class CachedAtomicPtr {
public:
CachedAtomicPtr(): ptr_(nullptr) {}
/// Store p in *this if *this is not yet assigned.
/// Return pointer stored in *this, which can be \a p or not.
Ptr<T> AssignIfNull(Ptr<T> p) {
const T* other = nullptr;
if (ptr_.compare_exchange_weak(other, p.get(),
std::memory_order_release, std::memory_order_acquire)) {
p->IncrementRefcount();
return p;
} else {
// wrap in an extra smartptr (increments refcount)
return Ptr<T>(other);
}
^^^^^^^^^^^^^^^^^^
Is Ptr<T> an intrusive reference count? I assume it is.
Yes. Otherwise I could not generate new smartpointers from bare T*.
FYI, here is my current full compilable code together with a test
harness (no Relacy, could not get it working, so this just creates a
number of threads which make use of the CachedAtomicPtr objects in
parallel).
#include <cstddef>
#include <atomic>
#include <iostream>
#include <stdexcept>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>
/// debug instrumentation
std::atomic<int> gAcount = 0, gBcount = 0, gCASFailureCount = 0;
/// program exit code
std::atomic<int> exitCode = EXIT_SUCCESS;
void Assert(bool x) {
if (!x) {
throw std::logic_error("Assert failed");
}
}
class RefCountedBase {
public:
RefCountedBase(): refcount_(0) {}
RefCountedBase(const RefCountedBase&): refcount_(0) {}
RefCountedBase(RefCountedBase&&) = delete;
RefCountedBase& operator=(const RefCountedBase&) = delete;
RefCountedBase& operator=(RefCountedBase&&) = delete;
void Capture() const noexcept {
++refcount_;
}
void Release() const noexcept {
if (--refcount_ == 0) {
delete const_cast<RefCountedBase*>(this);
}
}
virtual ~RefCountedBase() {}
private:
mutable std::atomic<std::size_t> refcount_;
};
template<class T>
class Ptr {
public:
Ptr(): ptr_(nullptr) {}
explicit Ptr(const T* ptr): ptr_(ptr) { if (ptr_) { ptr_->Capture(); } }
Ptr(const Ptr& b): ptr_(b.ptr_) { if (ptr_) { ptr_->Capture(); } }
Ptr(Ptr&& b) noexcept: ptr_(b.ptr_) { b.ptr_ = nullptr; }
~Ptr() { if (ptr_) { ptr_->Release(); } }
Ptr& operator=(const Ptr& b) {
if (b.ptr_) { b.ptr_->Capture(); }
if (ptr_) { ptr_->Release(); }
ptr_ = b.ptr_;
return *this;
}
Ptr& operator=(Ptr&& b) noexcept {
if (ptr_) { ptr_->Release(); }
ptr_ = b.ptr_;
b.ptr_ = nullptr;
return *this;
}
const T* operator->() const { return ptr_; }
const T& operator*() const { return *ptr_; }
explicit operator bool() const { return ptr_!=nullptr; }
const T* get() const { return ptr_; }
private:
mutable const T* ptr_;
};
template<typename T>
class CachedAtomicPtr {
public:
CachedAtomicPtr(): ptr_(nullptr) {}
/// Store p in *this if *this is not yet assigned.
/// Return pointer stored in *this, which can be \a p or not.
Ptr<T> AssignIfNull(Ptr<T> p) {
const T* other = nullptr;
if (ptr_.compare_exchange_strong(other, p.get(),
std::memory_order_release, std::memory_order_acquire)) {
p->Capture();
[...]
Only one thread should ever get here, right? It just installed the
pointer p.get() into ptr_, right?
Yes, that's the idea. The first thread which manages to install non-
null pointer will increase the refcount, others will fail and their
objects will be released when refcounts drop to zero.
Why do a Capture _after_ the pointer is atomically installed? Think of
adding a reference in preparation for installation. If the CAS failed,
it can decrement it; if it succeeded, it leaves it alone.
<pseudo code>
___________________
shared refcount<foo>* g_foo = nullptr;
void thread_a()
{
// initialized with two references
refcount<foo> local = new refcount<foo>(2);
refcount<foo>* shared = CAS_STRONG(&g_foo, nullptr, local);
if (shared)
{
// another thread beat us to it.
local.dec(); // we dec because we failed to install...
// well, we have a reference to shared and local... :^)
}
else
{
// well, shared is nullptr so we were the first thread!
}
}
___________________
Sorry for all the questions. ;^o
On 9/26/2024 5:24 PM, Chris M. Thomasson wrote:
On 9/26/2024 3:14 AM, Paavo Helde wrote:
On 26.09.2024 08:49, Chris M. Thomasson wrote:
On 9/17/2024 2:22 AM, Paavo Helde wrote:
On 17.09.2024 09:04, Chris M. Thomasson wrote:
On 9/16/2024 10:59 PM, Chris M. Thomasson wrote:
On 9/16/2024 10:54 PM, Paavo Helde wrote:[...]
template<typename T>
class CachedAtomicPtr {
public:
CachedAtomicPtr(): ptr_(nullptr) {}
/// Store p in *this if *this is not yet assigned.
/// Return pointer stored in *this, which can be \a p or not.
Ptr<T> AssignIfNull(Ptr<T> p) {
const T* other = nullptr;
if (ptr_.compare_exchange_weak(other, p.get(),
std::memory_order_release, std::memory_order_acquire)) {
p->IncrementRefcount();
return p;
} else {
// wrap in an extra smartptr (increments refcount)
return Ptr<T>(other);
}
^^^^^^^^^^^^^^^^^^
Is Ptr<T> an intrusive reference count? I assume it is.
Yes. Otherwise I could not generate new smartpointers from bare T*.
FYI, here is my current full compilable code together with a test
harness (no Relacy, could not get it working, so this just creates
a number of threads which make use of the CachedAtomicPtr objects
in parallel).
#include <cstddef>
#include <atomic>
#include <iostream>
#include <stdexcept>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>
/// debug instrumentation
std::atomic<int> gAcount = 0, gBcount = 0, gCASFailureCount = 0;
/// program exit code
std::atomic<int> exitCode = EXIT_SUCCESS;
void Assert(bool x) {
if (!x) {
throw std::logic_error("Assert failed");
}
}
class RefCountedBase {
public:
RefCountedBase(): refcount_(0) {}
RefCountedBase(const RefCountedBase&): refcount_(0) {}
RefCountedBase(RefCountedBase&&) = delete;
RefCountedBase& operator=(const RefCountedBase&) = delete;
RefCountedBase& operator=(RefCountedBase&&) = delete;
void Capture() const noexcept {
++refcount_;
}
void Release() const noexcept {
if (--refcount_ == 0) {
delete const_cast<RefCountedBase*>(this);
}
}
virtual ~RefCountedBase() {}
private:
mutable std::atomic<std::size_t> refcount_;
};
template<class T>
class Ptr {
public:
Ptr(): ptr_(nullptr) {}
explicit Ptr(const T* ptr): ptr_(ptr) { if (ptr_) { ptr_->Capture(); } }
Ptr(const Ptr& b): ptr_(b.ptr_) { if (ptr_) { ptr_->Capture(); } }
Ptr(Ptr&& b) noexcept: ptr_(b.ptr_) { b.ptr_ = nullptr; }
~Ptr() { if (ptr_) { ptr_->Release(); } }
Ptr& operator=(const Ptr& b) {
if (b.ptr_) { b.ptr_->Capture(); }
if (ptr_) { ptr_->Release(); }
ptr_ = b.ptr_;
return *this;
}
Ptr& operator=(Ptr&& b) noexcept {
if (ptr_) { ptr_->Release(); }
ptr_ = b.ptr_;
b.ptr_ = nullptr;
return *this;
}
const T* operator->() const { return ptr_; }
const T& operator*() const { return *ptr_; }
explicit operator bool() const { return ptr_!=nullptr; }
const T* get() const { return ptr_; }
private:
mutable const T* ptr_;
};
template<typename T>
class CachedAtomicPtr {
public:
CachedAtomicPtr(): ptr_(nullptr) {}
/// Store p in *this if *this is not yet assigned.
/// Return pointer stored in *this, which can be \a p or not.
Ptr<T> AssignIfNull(Ptr<T> p) {
const T* other = nullptr;
if (ptr_.compare_exchange_strong(other, p.get(),
std::memory_order_release, std::memory_order_acquire)) {
p->Capture();
[...]
Only one thread should ever get here, right? It just installed the
pointer p.get() into ptr_, right?
Yes, that's the idea. The first thread which manages to install non-
null pointer will increase the refcount, others will fail and their
objects will be released when refcounts drop to zero.
Why do a Capture _after_ the pointer is atomically installed? Think of
adding a reference in preparation for installation. If failed, it can
decrement it. If it succeeded it leaves it alone.
<pseudo code>
___________________
shared refcount<foo>* g_foo = nullptr;
void thread_a()
{
// initialized with two references
refcount<foo> local = new refcount<foo>(2);
refcount<foo>* shared = CAS_STRONG(&g_foo, nullptr, local);
if (shared)
{
// another thread beat us to it.
local.dec(); // we dec because we failed to install...
// well, we have a reference to shared and local... :^)
Now, actually, we have a reference to shared, but we should not
decrement it here. We can rely on g_foo to never be set to null
prematurely wrt our program structure. When our program is shutting
down, g_foo can be decremented if it's not nullptr. This would mean that
successfully installing a pointer with a pre-count of 2 would work.
Taking a reference would not need to increment anything, and would not
need to decrement anything. The lifetime is tied to g_foo wrt installing
a pointer into it. Is that close to what you are doing? Or way off in
the damn weeds?