# frozen_string_literal: true

# Fetches many objects from a cache in order. In the event that some
# objects can't be served from the cache, you will have the
# opportunity to fetch them in bulk. This allows you to preload and
# cache entire object hierarchies, which works particularly well with
# Rails' nested caching while avoiding the n+1 queries problem in the
# uncached case.
class BulkCacheFetcher
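  # A typical use, sketched as an RDoc example. The +Rails.cache+ backend,
  # +post_ids+, and the +Post+ model are hypothetical, for illustration
  # only; note the finder block returns objects in the same order as the
  # identifiers it receives:
  #
  #   fetcher = BulkCacheFetcher.new(Rails.cache)
  #   posts = fetcher.fetch(post_ids, expires_in: 3600) do |uncached_ids|
  #     Post.where(id: uncached_ids).sort_by { |post| uncached_ids.index(post.id) }
  #   end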

  # Caches all +values+ under their respective +keys+.
  def cache_all(keys, values, options = {})
    keys.zip(values) { |k, v| @cache.write(cache_key(k), v, options) }
  end

  # Returns the part of the identifier that we can use as the cache
  # key. For simple identifiers, it's just the identifier; for
  # identifiers with extra information attached, it's the first part
  # of the identifier.
  def cache_key(identifier)
    Array(identifier).first
  end
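  # For example (illustrative values):
  #
  #   cache_key("posts/1")                        # => "posts/1"
  #   cache_key(["posts/1", :include_comments])   # => "posts/1"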

  # Returns the cache keys for all of the +identifiers+.
  def cache_keys(identifiers)
    identifiers.map { |identifier| cache_key(identifier) }
  end

  # Given a list of +cache_keys+, either find associated objects from
  # +cached_keys_with_objects+, or grab them from +found_objects+, in
  # order.
  def coalesce(cache_keys, cached_keys_with_objects, found_objects)
    found_objects = Array(found_objects)
    cache_keys.map { |key| cached_keys_with_objects.fetch(key) { found_objects.shift } }
  end
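  # For example, with 'a' and 'c' cached and 'b' freshly found
  # (illustrative values):
  #
  #   coalesce(%w[a b c], { 'a' => 1, 'c' => 3 }, [2])
  #   # => [1, 2, 3]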

  # Returns a list of objects identified by
  # +object_identifiers+. +fetch+ will try to find the objects
  # from the cache first. Identifiers for objects that aren't in the
  # cache will be passed as an ordered list to +finder_block+,
  # where you can find the objects as you see fit. These objects
  # should be returned in the same order as the identifiers that were
  # passed into the block, because they'll be cached under their
  # respective keys. The objects returned by +fetch+ will be returned
  # in the same order as the +object_identifiers+ passed in.
  #
  # +options+ will be passed along unmodified when caching newly found
  # objects, so you can use it for things like setting cache
  # expiration.
  def fetch(object_identifiers, options = {}, &finder_block)
    object_identifiers = normalize(object_identifiers)
    cached_keys_with_objects, uncached_identifiers = partition(object_identifiers)
    found_objects = find(uncached_identifiers, options, &finder_block)
    coalesce(cache_keys(object_identifiers), cached_keys_with_objects, found_objects)
  end

  # Finds all of the objects identified by +identifiers+, using the
  # +finder_block+. Will pass +options+ on to the cache.
  def find(identifiers, options = {})
    return [] if identifiers.empty?

    Array(yield(identifiers)).tap do |objects|
      verify_equal_key_and_value_counts!(identifiers, objects)
      cache_all(identifiers, objects, options)
    end
  end

  # Creates a new bulk cache fetcher, backed by +cache+. Cache must
  # respond to the standard Rails cache API, described at
  # http://guides.rubyonrails.org/caching_with_rails.html
  def initialize(cache)
    @cache = cache
  end

  # Makes sure we can iterate over identifiers.
  def normalize(identifiers)
    identifiers.respond_to?(:each) ? identifiers : Array(identifiers)
  end

  # Splits a list of identifiers into two objects. The first is a hash
  # of <tt>{cache_key => object}</tt> for all the objects we were able
  # to serve from the cache. The second is a list of all of the
  # identifiers for objects that weren't cached.
  def partition(object_identifiers)
    uncached_identifiers = object_identifiers.dup
    cached_keys_with_objects = @cache.read_multi(*cache_keys(object_identifiers))

    # Delete the full identifier, not just its cache key, so that
    # compound identifiers like ["posts/1", :extra] are also removed
    # when their key was served from the cache.
    object_identifiers.each do |identifier|
      uncached_identifiers.delete(identifier) if cached_keys_with_objects.key?(cache_key(identifier))
    end

    [cached_keys_with_objects, uncached_identifiers]
  end
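  # For example, with 'a' already cached and 'b' missing (illustrative
  # values; the cached object here is the string 'one'):
  #
  #   partition(%w[a b])
  #   # => [{ 'a' => 'one' }, ['b']]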

  # Makes sure we have enough +identifiers+ to cache all of our
  # +objects+, and vice-versa.
  def verify_equal_key_and_value_counts!(identifiers, objects)
    fail ArgumentError, 'You are returning too many objects from your cache block!' if objects.length > identifiers.length
    fail ArgumentError, 'You are returning too few objects from your cache block!' if objects.length < identifiers.length
  end
end
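
# A minimal end-to-end sketch, runnable by executing this file directly.
# HashCache is a hypothetical stand-in implementing only the two cache
# calls this class uses (+read_multi+ and +write+), not a real Rails cache.
if __FILE__ == $PROGRAM_NAME
  class HashCache
    def initialize
      @store = {}
    end

    def write(key, value, _options = {})
      @store[key] = value
    end

    def read_multi(*keys)
      keys.each_with_object({}) { |key, found| found[key] = @store[key] if @store.key?(key) }
    end
  end

  cache = HashCache.new
  cache.write(1, 'cached-1')

  fetcher = BulkCacheFetcher.new(cache)
  objects = fetcher.fetch([1, 2, 3]) { |uncached| uncached.map { |id| "found-#{id}" } }
  p objects # => ["cached-1", "found-2", "found-3"]
end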