Note that these strategies (collectively, "delayed reclamation") lose one benefit of the RAII concept. With Rust's Drop or C++ destructors we are guaranteed that, when we no longer need this Goat, the clean-up-unused-goat code we wrote will run right then - unlike with a garbage collector, where that code either runs at some unspecified future time or not at all.
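For concreteness, a minimal Rust sketch of the guarantee being described (the Goat type and its clean-up code are stand-ins, not anything from a real codebase):

```rust
// The Goat type and its clean-up code are stand-ins for whatever
// RAII-managed resource is under discussion.
struct Goat {
    name: String,
}

impl Drop for Goat {
    fn drop(&mut self) {
        // With RAII this runs at the moment the Goat goes out of
        // scope, not at some collector-chosen future time.
        println!("cleaning up unused goat {}", self.name);
    }
}

fn main() {
    {
        let g = Goat { name: "Gruff".into() };
        println!("using {}", g.name);
    } // <- drop() runs exactly here, deterministically
    println!("the goat is already gone");
}
```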
Delayed reclamation means that "when we don't need this Goat" will sometimes be arbitrarily delayed past the point where we apparently stopped needing it, and it's no longer possible to say for sure by examining the program. This is almost always a trade you're willing to make, but you need to know it exists; it is not a fun thing to discover while debugging.
With RCU the "we don't need this Goat" point occurs as soon as the last reader is done with the earlier version of the Goat, modulo a limited grace period in some RCU implementations. New readers always get the latest version, so there is no risk of waiting for an "unspecified" or "arbitrary" amount of time.
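For a Rust flavour of this, the crossbeam-epoch crate isn't RCU proper, but its epoch-based reclamation has the same shape: readers pin an epoch instead of taking a read lock, and the updater defers destruction of the old version until the grace period has passed. A sketch, assuming crossbeam-epoch is available (the Goat type is made up for illustration):

```rust
use crossbeam_epoch::{self as epoch, Atomic, Owned};
use std::sync::atomic::Ordering;

struct Goat {
    age: u32,
}

fn main() {
    // The current version, shared between readers and the updater.
    let current = Atomic::new(Goat { age: 1 });

    // Reader: pin the current epoch, then load and use the pointer.
    // The pin is what keeps this version alive while we look at it.
    {
        let guard = epoch::pin();
        let shared = current.load(Ordering::Acquire, &guard);
        if let Some(goat) = unsafe { shared.as_ref() } {
            println!("reader sees a goat aged {}", goat.age);
        }
    } // guard dropped: this reader no longer keeps anything alive

    // Updater: publish the new version immediately; destruction of
    // the old version is deferred until every reader that could still
    // see it has unpinned - the grace period.
    {
        let guard = epoch::pin();
        let old = current.swap(Owned::new(Goat { age: 2 }), Ordering::AcqRel, &guard);
        unsafe { guard.defer_destroy(old) };
    }

    // Teardown, once no other threads exist: reclaim the last version.
    unsafe {
        let last = current.load(Ordering::Relaxed, epoch::unprotected());
        drop(last.into_owned());
    }
}
```

Note the writer never waits: the swap publishes the new Goat immediately, and only the deallocation of the old one is deferred.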
Hazard pointers are a bit fiddlier AIUI, since the reclamation step must be triggered explicitly and must verify that no hazard pointers still refer to the object being reclaimed, so it is quite possible that the object won't be reclaimed promptly.
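A hand-rolled sketch of that shape, with a single global hazard slot where a real implementation would keep per-thread slot lists and amortize the scan (all names here are made up for illustration, and the orderings are deliberately conservative):

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

struct Goat {
    age: u32,
}

// One hazard slot; a real implementation keeps one or more per thread.
static HAZARD: AtomicPtr<Goat> = AtomicPtr::new(std::ptr::null_mut());
// The shared object that readers follow and the updater replaces.
static CURRENT: AtomicPtr<Goat> = AtomicPtr::new(std::ptr::null_mut());

// Reader: publish the pointer in the hazard slot, then re-check that
// it is still current before dereferencing; retry if it changed.
fn read() -> Option<u32> {
    loop {
        let p = CURRENT.load(Ordering::Acquire);
        if p.is_null() {
            return None;
        }
        HAZARD.store(p, Ordering::SeqCst);
        if CURRENT.load(Ordering::SeqCst) == p {
            let age = unsafe { (*p).age };
            HAZARD.store(std::ptr::null_mut(), Ordering::Release);
            return Some(age);
        }
        // CURRENT moved between load and publish: try again.
    }
}

// Reclaimer: an object may be freed only if no hazard slot names it;
// otherwise it stays on the retire list to be retried by a later scan.
fn retire(p: *mut Goat, retired: &mut Vec<*mut Goat>) {
    retired.push(p);
    retired.retain(|&q| {
        if HAZARD.load(Ordering::SeqCst) == q {
            true // still protected: keep it for a later scan
        } else {
            drop(unsafe { Box::from_raw(q) });
            false
        }
    });
}

fn main() {
    CURRENT.store(Box::into_raw(Box::new(Goat { age: 1 })), Ordering::Release);
    println!("reader sees {:?}", read());

    let mut retired = Vec::new();
    let old = CURRENT.swap(Box::into_raw(Box::new(Goat { age: 2 })), Ordering::AcqRel);
    retire(old, &mut retired);

    // Teardown.
    let last = CURRENT.swap(std::ptr::null_mut(), Ordering::AcqRel);
    retire(last, &mut retired);
}
```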
From the point of view of any particular updater, the departure of every current reader (each of whom would keep the Goat alive) happens at an arbitrary time. Obviously in a real system it won't literally be unbounded, but I don't think it will usually make sense to work out how long that could take in terms of clock cycles, wall-clock time, or any other external metric. It won't necessarily be immediate, but it will happen "eventually", so just write software accordingly.
Maybe I'm wrong for some reason and there is a practical limit in play for RCU but not for hazard pointers, but I don't think so.
One of the central purposes of RCU is to decouple the updater/writer(s) from the reader(s). It doesn't matter to the writer if there is still a reader "out there" using the old version. And it likewise doesn't matter to (most) readers that the version they have is now old.
What is delayed is the actual destruction/deallocation of the RCU-managed objects. Nobody cares about that delay unless the objects control limited resources (in which case, RCU is likely a bad fit) or there are so many objects and/or they are so large that the delay in deallocating them could cause memory pressure of some type.
> we are guaranteed that when we don't need this Goat the clean-up-unused-goat code we wrote will run now
Not to put too fine a point on things, but Rust (and C++) very explicitly don't guarantee this. Both are quite explicit about being allowed to leak memory and never free it (for example via reference cycles), something a GC is typically not allowed to do. So yes, it usually happens; it just is not guaranteed.
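The classic demonstration in Rust is a reference cycle through Rc: entirely safe code, yet the Drop impls never run. A minimal sketch (the Goat/friend names are made up):

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Goat {
    name: String,
    // A back-edge through `friend` is all it takes to create a cycle.
    friend: RefCell<Option<Rc<Goat>>>,
}

impl Drop for Goat {
    fn drop(&mut self) {
        println!("cleaning up {}", self.name);
    }
}

fn main() {
    let a = Rc::new(Goat { name: "a".into(), friend: RefCell::new(None) });
    let b = Rc::new(Goat { name: "b".into(), friend: RefCell::new(None) });
    *a.friend.borrow_mut() = Some(Rc::clone(&b));
    *b.friend.borrow_mut() = Some(Rc::clone(&a));
    // Each goat now keeps the other's refcount above zero, so neither
    // `drop` ever runs: safe Rust, and a permanent leak.
}
```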
Your point that reference counting can result in memory leaks is absolutely correct and worth making. However, it's also worth pointing out that tracing garbage collectors like those used by Java are also allowed to leak memory and never free it. In the most extreme scenario you have the Epsilon garbage collector (OpenJDK's no-op collector, JEP 318), which leaks everything.
Implementing a garbage collector that is guaranteed to free memory exactly when it's actually no longer needed is equivalent to solving the halting problem (via Rice's theorem), and so any garbage collection algorithm is going to have to leak some memory; it's simply unavoidable.
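A small, admittedly contrived illustration of the gap (the --dump flag is made up): whether the allocation is ever used again depends on runtime input, so a collector that reasons only about reachability has to keep it alive.

```rust
fn main() {
    let big = vec![0u8; 64 * 1024 * 1024]; // 64 MiB we may never touch again
    let wants_dump = std::env::args().any(|a| a == "--dump"); // hypothetical flag
    // From here on, `big` is "needed" only if this branch is taken, and
    // whether it is taken can depend on arbitrarily complex program logic.
    // Freeing it exactly when it is no longer needed means predicting that
    // logic, which is where Rice's theorem bites; real collectors settle
    // for reachability and keep `big` until the reference goes away.
    if wants_dump {
        println!("dump size: {}", big.len());
    }
}
```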
All you're getting at is that we're not obliged in Rust to ever decide we didn't need the Goat - but I didn't argue that we are.
The "finalizer problem" in the Garbage Collected languages isn't about a Goat which never becomes unused, it's about a situation where the Goat is unused but the clean-up never happens.
You used quotes, so I can't tell whether you think it's a real problem or just a self-inflicted one because the warning that finalizers aren't guaranteed to run was ignored.
If the finalizer was for an object that is just memory, it's not a problem: the GC has simply decided it doesn't need to recover that object's memory just yet. I've seen this with heaps sized to be large enough to handle a memory spike; when the spikes stop happening, a lot of stuff in the old-gen pool just stays there. At one point I was digging through 50GB heap dumps looking for memory leaks.
It is a problem when the object holds non-memory resources like a socket descriptor, because you can run out of descriptors. So not relying on the GC, and using try-with-resources instead, is a good idea.
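The point isn't Java-specific: the fix is tying the descriptor's lifetime to a scope rather than to the collector. In Rust the analogue falls out of Drop on File; a sketch (the path is just a placeholder):

```rust
use std::fs::File;
use std::io::{self, Read};

fn read_config() -> io::Result<String> {
    let mut text = String::new();
    {
        // Hypothetical path, purely for illustration.
        let mut f = File::open("/etc/example.conf")?;
        f.read_to_string(&mut text)?;
    } // <- the descriptor is closed here, on success or error, like
      //    try-with-resources and unlike a finalizer that the GC may
      //    never get around to running
    Ok(text)
}

fn main() -> io::Result<()> {
    println!("{}", read_config()?);
    Ok(())
}
```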