class ActiveSupport::Cache::Store

def fetch(name, options = nil)

Fetches data from the cache, using the given key. If there is data in
the cache with the given key, then that data is returned.

If there is no such data in the cache (a cache miss), then +nil+ will be
returned. However, if a block has been passed, that block will be passed
the key and executed in the event of a cache miss. The return value of the
block will be written to the cache under the given cache key, and that
return value will be returned.

  cache.write('today', 'Monday')
  cache.fetch('today') # => "Monday"

  cache.fetch('city') # => nil
  cache.fetch('city') do
    'Duckburgh'
  end
  cache.fetch('city') # => "Duckburgh"

You may also specify additional options via the +options+ argument.
Setting force: true will force a cache miss:

  cache.write('today', 'Monday')
  cache.fetch('today', force: true) # => nil

Setting :compress will store a large cache entry set by the call
in a compressed format.

Setting :expires_in will set an expiration time on the cache.
All caches support auto-expiring content after a specified number of
seconds. This value can be specified as an option to the constructor
(in which case all entries will be affected), or it can be supplied to
the +fetch+ or +write+ method to affect just one entry.

  cache = ActiveSupport::Cache::MemoryStore.new(expires_in: 5.minutes)
  cache.write(key, value, expires_in: 1.minute) # Set a lower value for one entry

Setting :race_condition_ttl is very useful in situations where
a cache entry is used very frequently and is under heavy load. If a
cache entry expires, then under heavy load several different processes
may try to regenerate the data and then all try to write to the cache. To
avoid that, the first process to find an expired cache entry will
bump the cache expiration time by the value set in :race_condition_ttl.
Yes, this process is extending the time for a stale value by another few
seconds. Because of the extended life of the previous cache entry, other
processes will continue to use slightly stale data for just a bit longer.
In the meantime the first process will go ahead and write the new value
into the cache. After that, all processes will start getting the new value.
The key is to keep :race_condition_ttl small.

If the process regenerating the entry errors out, the entry will be
regenerated after the specified number of seconds. Also note that the
life of a stale cache entry is extended only if it expired recently.
Otherwise a new value is generated and :race_condition_ttl does not play
any role.

  # Set all values to expire after one minute.
  cache = ActiveSupport::Cache::MemoryStore.new(expires_in: 1.minute)

  cache.write('foo', 'original value')
  val_1 = nil
  val_2 = nil
  sleep 60

  Thread.new do
    val_1 = cache.fetch('foo', race_condition_ttl: 10) do
      sleep 1
      'new value 1'
    end
  end

  Thread.new do
    val_2 = cache.fetch('foo', race_condition_ttl: 10) do
      'new value 2'
    end
  end

  # val_1 => "new value 1"
  # val_2 => "original value"

  # sleep 10 # First thread extends the life of the cache by another 10 seconds
  # cache.fetch('foo') # => "new value 1"

Other options will be handled by the specific cache store implementation.
Internally, #fetch calls #read_entry, and calls #write_entry on a cache
miss. +options+ will be passed to the #read and #write calls.

For example, MemCacheStore's #write method supports the +:raw+
option, which tells the memcached server to store all values as strings.
We can use this option with #fetch too:

  cache = ActiveSupport::Cache::MemCacheStore.new
  cache.fetch("foo", force: true, raw: true) do
    :bar
  end
  cache.fetch('foo') # => "bar"
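The hit/miss contract documented here can be sketched in a few lines of plain Ruby. This is only an illustrative sketch, not ActiveSupport's implementation; MiniCache and its internals are invented names for this example.

```ruby
# Minimal sketch of the documented #fetch contract. MiniCache is an
# invented class for illustration; it is NOT the ActiveSupport code.
class MiniCache
  def initialize
    @data = {}
  end

  def write(key, value)
    @data[key] = value
  end

  def read(key)
    @data[key]
  end

  # Hit: return the stored value. Miss (or force: true) with a block:
  # write the block's result under +key+ and return it. Miss without a
  # block: return nil.
  def fetch(key, force: false)
    if !force && @data.key?(key)
      @data[key]
    elsif block_given?
      write(key, yield(key))
    end
  end
end

cache = MiniCache.new
cache.fetch('city')                 # => nil (miss, no block)
cache.fetch('city') { 'Duckburgh' } # => "Duckburgh" (miss, block result cached)
cache.fetch('city')                 # => "Duckburgh" (now a hit)
cache.write('today', 'Monday')
cache.fetch('today', force: true)   # => nil (forced miss, no block)
```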
def fetch(name, options = nil)
  if block_given?
    options = merged_options(options)
    key = namespaced_key(name, options)
    cached_entry = find_cached_entry(key, name, options) unless options[:force]
    entry = handle_expired_entry(cached_entry, key, options)
    if entry
      get_entry_value(entry, name, options)
    else
      save_block_result_to_cache(name, options) { |_name| yield _name }
    end
  else
    read(name, options)
  end
end
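The :race_condition_ttl behavior described above can also be simulated deterministically, without threads or real sleeps. The sketch below is an assumption-laden toy, not the ActiveSupport implementation: ToyStore, its manual clock, and the nested fetch standing in for a "second process" are all invented for illustration.

```ruby
# Toy store sketching :race_condition_ttl semantics. ToyStore and its
# manual clock are invented for this example; not ActiveSupport's code.
class ToyStore
  Entry = Struct.new(:value, :expires_at)

  def initialize
    @data = {}
    @now = 0 # simulated clock in seconds, advanced manually
  end

  def advance(seconds)
    @now += seconds
  end

  def write(key, value, expires_in:)
    @data[key] = Entry.new(value, @now + expires_in)
    value
  end

  def fetch(key, expires_in: 60, race_condition_ttl: nil)
    entry = @data[key]
    return entry.value if entry && entry.expires_at > @now # fresh hit

    if entry && race_condition_ttl && @now - entry.expires_at < race_condition_ttl
      # Recently expired: bump the stale entry's life so that concurrent
      # readers keep getting the old value while this caller regenerates.
      entry.expires_at = @now + race_condition_ttl
    end
    write(key, yield, expires_in: expires_in)
  end
end

store = ToyStore.new
store.write('foo', 'original value', expires_in: 60)
store.advance(61) # 'foo' expired one second ago

val_2 = nil
val_1 = store.fetch('foo', race_condition_ttl: 10) do
  # While "process 1" regenerates, "process 2" fetches and is served the
  # bumped stale entry instead of running its own block:
  val_2 = store.fetch('foo', race_condition_ttl: 10) { 'new value 2' }
  'new value 1'
end

val_1 # => "new value 1"   (the regenerating caller gets the fresh value)
val_2 # => "original value" (the concurrent caller got the stale value)
```

Because the entry expired only one second ago (less than the 10-second TTL), the first caller extends its life and the nested fetch returns the stale value; had the entry been long expired, both callers would simply regenerate.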