class Concurrent::AbstractLocals

@!visibility private
@!macro internal_implementation_note

An abstract implementation of local storage, with sub-classes for
per-thread and per-fiber locals.

Each execution context (EC, thread or fiber) has a lazily initialized array
of local variable values. Each time a new local variable is created, we
allocate an “index” for it.

For example, if the allocated index is 1, that means slot #1 in EVERY EC’s
locals array will be used for the value of that variable.

The good thing about using a per-EC structure to hold values, rather than
a global, is that no synchronization is needed when reading and writing
those values (since the structure is only ever accessed by a single
thread).

Of course, when a local variable is GC’d, 1) we need to recover its index
for use by other new local variables (otherwise the locals arrays could
get bigger and bigger with time), and 2) we need to null out all the
references held in the now-unused slots (both to avoid blocking GC of those
objects, and also to prevent “stale” values from being passed on to a new
local when the index is reused).

Because we need to null out freed slots, we need to keep references to
ALL the locals arrays, so we can null out the appropriate slots in all of
them. This is why we need to use a finalizer to clean up the locals array
when the EC goes out of scope.
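To make that flow concrete, here is a hedged sketch of how a local-variable wrapper could sit on top of this storage. ExampleLocalVar and STORAGE are illustrative names, not the gem's real internals, and a concrete per-thread subclass is assumed (a sketch of one follows #locals! below).

class ExampleLocalVar
  # One shared storage instance for every variable of this kind (assumed to
  # be a concrete AbstractLocals subclass, e.g. the per-thread sketch below).
  STORAGE = ExamplePerThreadLocals.new

  def initialize(default = nil)
    @default = default
    # Allocating the index also registers a finalizer on `self`, so the slot
    # is released in every EC once this variable is GC'd.
    @index = STORAGE.next_index(self)
  end

  def value
    # The block only runs when this EC has never written slot @index.
    STORAGE.fetch(@index) { @default }
  end

  def value=(new_value)
    STORAGE.set(@index, new_value)
  end
end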
def fetch(index)
  locals = self.locals
  value = locals ? locals[index] : nil

  if nil == value
    # Slot has never been written in this EC: let the caller supply a default.
    yield
  elsif NULL.equal?(value)
    # Slot was explicitly set to nil (stored as the NULL sentinel).
    nil
  else
    value
  end
end
def free_index(index)
  weak_synchronize do
    # The cost of GC'ing a TLV is linear in the number of ECs using local
    # variables. But that is natural! More ECs means more storage is used
    # per local variable. So naturally more CPU time is required to free
    # more storage.
    #
    # DO NOT use each_value which might conflict with new pair assignment
    # into the hash in #set method.
    @all_arrays.values.each do |locals|
      locals[index] = nil
    end

    # free index has to be published after the arrays are cleared:
    @free << index
  end
end
def initialize
  @free = []         # indices released by finalized locals, ready for reuse
  @lock = Mutex.new  # mutex backing #synchronize
  @all_arrays = {}   # locals-array object_id => locals array, one per live EC
  @next = 0          # highest index handed out so far (fresh indices come from @next += 1)
end
# Finalizer for a local variable: when the local is GC'd, its index is
# released (and its slot nulled in every EC's array) via #free_index.
def local_finalizer(index)
  proc do
    free_index(index)
  end
end
# Returns the current EC's locals array, or nil if none has been created yet.
# Concrete subclasses must implement this.
def locals
  raise NotImplementedError
end
# Returns the current EC's locals array, creating (and registering) it if
# necessary. Concrete subclasses must implement this.
def locals!
  raise NotImplementedError
end
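Since #locals and #locals! are the only abstract pieces, a concrete subclass mainly has to decide where each EC keeps its array. Below is a hedged per-thread sketch using Thread#thread_variable_get / #thread_variable_set; the class name and the :example_locals key are illustrative, and this is not the gem's actual per-thread implementation.

class ExamplePerThreadLocals < Concurrent::AbstractLocals
  def locals
    Thread.current.thread_variable_get(:example_locals)
  end

  def locals!
    locals || begin
      thread = Thread.current
      array  = thread.thread_variable_set(:example_locals, [])
      # Register the array so #free_index can null out its slots, and drop it
      # again (via #thread_fiber_finalizer) when the thread itself is GC'd.
      weak_synchronize { @all_arrays[array.object_id] = array }
      ObjectSpace.define_finalizer(thread, thread_fiber_finalizer(array.object_id))
      array
    end
  end
end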
def next_index(local)
  index = synchronize do
    if @free.empty?
      @next += 1
    else
      @free.pop
    end
  end

  # When the local goes out of scope, we should free the associated index
  # and all values stored into it.
  ObjectSpace.define_finalizer(local, local_finalizer(index))

  index
end
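An illustrative sequence of the index bookkeeping (here storage stands for an instance of some concrete subclass; in real use #free_index is triggered by the finalizer registered above rather than called directly):

first  = storage.next_index(Object.new)  # fresh index, e.g. 1 (@next was bumped)
second = storage.next_index(Object.new)  # another fresh index, e.g. 2

Once a local variable is GC'd, its finalizer calls #free_index: the slot is nulled in every EC's array and the index lands on @free, so a later #next_index call reuses it instead of growing every locals array.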
def set(index, value)
  locals = self.locals!
  # nil is stored as the NULL sentinel so #fetch can tell an explicit nil
  # apart from a slot that was never written.
  locals[index] = (nil == value ? NULL : value)

  value
end
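Because nil is mapped to the NULL sentinel on write, #fetch can distinguish "explicitly set to nil" from "never set": only the latter falls through to the caller's block. Illustrative only, with storage and the indices assumed as in the sketch above:

storage.set(first, nil)               # stored as NULL, not as nil
storage.fetch(first)  { :default }    # => nil       (block not called)
storage.fetch(second) { :default }    # => :default  (slot never written in this EC)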
def synchronize
  @lock.synchronize { yield }
end
# Finalizer for a thread/fiber: once the EC is GC'd, its locals array is
# removed from @all_arrays so the array itself can be collected.
def thread_fiber_finalizer(array_object_id)
  proc do
    weak_synchronize do
      @all_arrays.delete(array_object_id)
    end
  end
end
def weak_synchronize
  yield
end