class ActiveStorage::Blob
A blob is a record that contains the metadata about a file and a key for where that file resides on the service.
Blobs can be created in two ways:

1. Ahead of the file being uploaded server-side to the service, via create_and_upload!. A rewindable
   io with the file contents must be available at the server for this operation.
2. Ahead of the file being directly uploaded client-side to the service, via create_before_direct_upload!.

The first option doesn't require any client-side JavaScript integration, and can be used by any other back-end
service that deals with files. The second option is faster, since you're not using your own server as a staging
point for uploads, and can work with deployments like Heroku that do not provide large amounts of disk space.

Blobs are intended to be immutable insofar as their reference to a specific file goes. You're allowed to
update a blob's metadata on a subsequent pass, but you should not update the key or change the uploaded file.
If you need to create a derivative or otherwise change the blob, simply create a new blob and purge the old one.
def allowed_inline?
def allowed_inline?
  ActiveStorage.content_types_allowed_inline.include?(content_type)
end
def audio?
def audio?
  content_type.start_with?("audio")
end
def build_after_unfurling(io:, filename:, content_type: nil, metadata: nil, identify: true) #:nodoc:
def build_after_unfurling(io:, filename:, content_type: nil, metadata: nil, identify: true) #:nodoc:
  new(filename: filename, content_type: content_type, metadata: metadata).tap do |blob|
    blob.unfurl(io, identify: identify)
  end
end
def build_after_upload(io:, filename:, content_type: nil, metadata: nil, identify: true)
Returns a new, unsaved blob instance after the +io+ has been uploaded to the service.
def build_after_upload(io:, filename:, content_type: nil, metadata: nil, identify: true)
  new(filename: filename, content_type: content_type, metadata: metadata).tap do |blob|
    blob.upload(io, identify: identify)
  end
end
def compute_checksum_in_chunks(io)
def compute_checksum_in_chunks(io)
  Digest::MD5.new.tap do |checksum|
    while chunk = io.read(5.megabytes)
      checksum << chunk
    end

    io.rewind
  end.base64digest
end
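The chunked-digest technique above can be sketched outside Rails with nothing but the standard library (ActiveSupport's +5.megabytes+ is spelled out as a plain integer, and the helper name is hypothetical):

```ruby
require "digest"
require "stringio"

CHUNK_SIZE = 5 * 1024 * 1024  # ActiveSupport's 5.megabytes, as a plain integer

# Digest the io in fixed-size chunks so large files never sit fully in memory,
# then rewind so the same io can still be uploaded afterwards.
def checksum_in_chunks(io, chunk_size: CHUNK_SIZE)
  Digest::MD5.new.tap do |checksum|
    while chunk = io.read(chunk_size)
      checksum << chunk
    end
    io.rewind
  end.base64digest
end

io = StringIO.new("hello world" * 1_000)
checksum_in_chunks(io, chunk_size: 1024)  # same digest as a one-shot Digest::MD5.base64digest
```

The chunked and one-shot digests agree, and the final rewind leaves the io positioned for the upload that follows.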
def content_type_for_service_url
def content_type_for_service_url
  forcibly_serve_as_binary? ? ActiveStorage.binary_content_type : content_type
end
def create_after_unfurling!(io:, filename:, content_type: nil, metadata: nil, identify: true, record: nil) #:nodoc:
def create_after_unfurling!(io:, filename:, content_type: nil, metadata: nil, identify: true, record: nil) #:nodoc:
  build_after_unfurling(io: io, filename: filename, content_type: content_type, metadata: metadata, identify: identify).tap(&:save!)
end
def create_and_upload!(io:, filename:, content_type: nil, metadata: nil, identify: true, record: nil)
Creates a new blob instance and then uploads the contents of the given io to the
service. The blob instance is saved before the upload begins to avoid clobbering another due
to key collisions.
def create_and_upload!(io:, filename:, content_type: nil, metadata: nil, identify: true, record: nil)
  create_after_unfurling!(io: io, filename: filename, content_type: content_type, metadata: metadata, identify: identify).tap do |blob|
    blob.upload_without_unfurling(io)
  end
end
def create_before_direct_upload!(filename:, byte_size:, checksum:, content_type: nil, metadata: nil)
Returns a saved blob _without_ uploading a file to the service. This blob will point to a key where there is
no file yet. It's intended to be used together with a client-side upload, which will first create the blob
in order to produce the signed URL for uploading. This signed URL points to the key generated by the blob.
Once the form using the direct upload is submitted, the blob can be associated with the right record using
its signed ID.
def create_before_direct_upload!(filename:, byte_size:, checksum:, content_type: nil, metadata: nil)
  create! filename: filename, byte_size: byte_size, checksum: checksum, content_type: content_type, metadata: metadata
end
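Since the server never sees the file in a direct upload, the client must compute the byte size and base64-encoded MD5 checksum itself before the blob can be created. A stand-alone sketch of computing those attributes (the helper name is hypothetical):

```ruby
require "digest"

# Hypothetical helper: the attributes a client computes before asking the server
# to run create_before_direct_upload! — the service later verifies the uploaded
# bytes against this checksum and size.
def direct_upload_attributes(data, filename:, content_type: nil)
  {
    filename: filename,
    byte_size: data.bytesize,
    checksum: Digest::MD5.base64digest(data),
    content_type: content_type
  }
end

direct_upload_attributes("hello", filename: "hello.txt", content_type: "text/plain")
# => { filename: "hello.txt", byte_size: 5, checksum: "XUFAKrxLKna5cZ2REBfFkg==", content_type: "text/plain" }
```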
def delete
Deletes the files on the service associated with the blob. This should only be done if the blob is going to be
deleted as well or you will essentially have a dead reference. It's recommended to use the #purge and #purge_later
methods in most circumstances.
def delete
  service.delete(key)
  service.delete_prefixed("variants/#{key}/") if image?
end
def download(&block)
Downloads the file associated with this blob. If no block is given, the entire file is read into memory and returned.
That'll use a lot of RAM for very large files. If a block is given, the download is streamed and yielded in chunks.
def download(&block)
  service.download key, &block
end
def extract_content_type(io)
def extract_content_type(io)
  Marcel::MimeType.for io, name: filename.to_s, declared_type: content_type
end
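Marcel identifies the content type from the file's magic bytes, falling back to the declared type and filename. A toy stand-in using only the standard library (the signatures and helper name are illustrative; Marcel covers far more types):

```ruby
require "stringio"

# A couple of well-known magic-byte signatures (Marcel knows many more).
MAGIC_SIGNATURES = {
  "\x89PNG\r\n\x1A\n".b => "image/png",
  "\xFF\xD8\xFF".b      => "image/jpeg"
}

# Sniff the leading bytes; fall back to the declared type. Rewind the io so it
# can still be read in full afterwards.
def sniff_content_type(io, declared_type: "application/octet-stream")
  head = io.read(8).to_s.b
  io.rewind
  MAGIC_SIGNATURES.each { |signature, type| return type if head.start_with?(signature) }
  declared_type
end

sniff_content_type(StringIO.new("\x89PNG\r\n\x1A\n....".b))  # => "image/png"
```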
def filename
Returns an ActiveStorage::Filename instance of the filename that can be
queried for basename, extension, and a sanitized version of the filename.
def filename
  ActiveStorage::Filename.new(self[:filename])
end
def find_signed(id)
You can use the signed ID of a blob to refer to it on the client side without fear of tampering.
This is particularly helpful for direct uploads where the client-side needs to refer to the blob
that was created ahead of the upload itself on form submission.
def find_signed(id)
  find ActiveStorage.verifier.verify(id, purpose: :blob_id)
end
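ActiveStorage.verifier is an ActiveSupport::MessageVerifier, which signs the ID so the client can hold it without being able to forge or alter it. The mechanism can be sketched with a bare HMAC (the secret and helper names are illustrative, and this is not Rails' actual wire format):

```ruby
require "openssl"
require "base64"

SECRET = "not-the-real-rails-secret"  # illustration only

# Minimal stand-in for ActiveSupport::MessageVerifier: the payload travels in
# the clear, but the appended HMAC makes any tampering detectable.
def generate_signed_id(id, purpose: :blob_id)
  data = Base64.strict_encode64("#{id}--#{purpose}")
  "#{data}--#{OpenSSL::HMAC.hexdigest("SHA256", SECRET, data)}"
end

def verify_signed_id(signed, purpose: :blob_id)
  data, digest = signed.split("--", 2)
  raise "tampered" unless OpenSSL::HMAC.hexdigest("SHA256", SECRET, data) == digest
  id, embedded_purpose = Base64.strict_decode64(data).split("--")
  raise "wrong purpose" unless embedded_purpose == purpose.to_s
  id
end

verify_signed_id(generate_signed_id(42))  # => "42"
```

The purpose embedded in the payload is what stops a signed ID minted for one use from being replayed for another, mirroring the +purpose: :blob_id+ option above.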
def forced_disposition_for_service_url
def forced_disposition_for_service_url
  if forcibly_serve_as_binary? || !allowed_inline?
    :attachment
  end
end
def forcibly_serve_as_binary?
def forcibly_serve_as_binary?
  ActiveStorage.content_types_to_serve_as_binary.include?(content_type)
end
def generate_unique_secure_token
To prevent problems with case-insensitive filesystems, especially in combination
with databases which treat indices as case-sensitive, all blob keys generated are going
to only contain the base-36 character alphabet and will therefore be lowercase. To maintain
the same or higher amount of entropy as in the base-58 encoding used by `has_secure_token`,
the number of bytes used is increased to 28 from the standard 24.
def generate_unique_secure_token
  SecureRandom.base36(28)
end
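+SecureRandom.base36+ is an ActiveSupport extension; a plain-Ruby equivalent makes the construction explicit (the helper name is hypothetical):

```ruby
require "securerandom"

BASE36 = ("0".."9").to_a + ("a".."z").to_a  # 36 lowercase-safe characters

# Plain-Ruby sketch of ActiveSupport's SecureRandom.base36(28): 28 characters
# drawn uniformly from a 36-character lowercase alphabet, so the key stays
# unique even on case-insensitive filesystems.
def generate_key(length = 28)
  Array.new(length) { BASE36[SecureRandom.random_number(36)] }.join
end

generate_key  # e.g. "xtapjjcjiudrlk3tmwyjgpuobabd"
```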
def image?
def image?
  content_type.start_with?("image")
end
def key
Returns the key pointing to the file on the service that's associated with this blob. The key is the
secure-token format from Rails in lower case. So it'll look like: xtapjjcjiudrlk3tmwyjgpuobabd.
This key is not intended to be revealed directly to the user.
def key
  # We can't wait until the record is first saved to have a key for it
  self[:key] ||= self.class.generate_unique_secure_token
end
def open(tmpdir: nil, &block)
Downloads the blob to a tempfile on disk. Yields the tempfile.

The tempfile's name is prefixed with +ActiveStorage-+ and the blob's ID. Its extension matches that of the blob.

By default, the tempfile is created in Dir.tmpdir. Pass +tmpdir:+ to create it in a different directory:

  blob.open(tmpdir: "/path/to/tmp") do |file|
    # ...
  end

The tempfile is automatically closed and unlinked after the given block is executed.
def open(tmpdir: nil, &block)
  service.open key, checksum: checksum,
    name: [ "ActiveStorage-#{id}-", filename.extension_with_delimiter ],
    tmpdir: tmpdir, &block
end
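The tempfile lifecycle the service implements behind this call can be sketched with Tempfile.create, which builds the name from a prefix/extension pair and unlinks the file once the block returns (the helper name and contents are illustrative):

```ruby
require "tempfile"
require "tmpdir"

# Sketch of the tempfile handling behind Blob#open: the name is built from a
# "ActiveStorage-<id>-" prefix plus the blob's extension, and Tempfile.create
# closes and unlinks the file after the block runs.
def open_as_tempfile(contents, id:, extension:, tmpdir: Dir.tmpdir, &block)
  Tempfile.create([ "ActiveStorage-#{id}-", extension ], tmpdir) do |file|
    file.binmode
    file.write(contents)
    file.rewind
    block.call(file)
  end
end

open_as_tempfile("fake bytes", id: 42, extension: ".png") do |file|
  File.basename(file.path)  # starts with "ActiveStorage-42-", ends with ".png"
end
```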
def purge
Deletes the file on the service and then destroys the blob record. This is the recommended way to dispose of unwanted
blobs. Note, though, that deleting the file off the service will initiate a HTTP connection to the service, which may
be slow or prevented, so you should not use this method inside a transaction or in callbacks. Use #purge_later instead.
def purge
  destroy
  delete
rescue ActiveRecord::InvalidForeignKey
end
def purge_later
Enqueues an ActiveStorage::PurgeJob to call #purge. This is the recommended way to purge blobs from a transaction,
from callbacks, or in any other real-time scenario.
def purge_later
  ActiveStorage::PurgeJob.perform_later(self)
end
def service_headers_for_direct_upload
def service_headers_for_direct_upload
  service.headers_for_direct_upload key, filename: filename, content_type: content_type, content_length: byte_size, checksum: checksum
end
def service_metadata
def service_metadata
  if forcibly_serve_as_binary?
    { content_type: ActiveStorage.binary_content_type, disposition: :attachment, filename: filename }
  elsif !allowed_inline?
    { content_type: content_type, disposition: :attachment, filename: filename }
  else
    { content_type: content_type }
  end
end
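The three-way decision above mirrors the ActiveStorage.content_types_to_serve_as_binary and ActiveStorage.content_types_allowed_inline allowlists. A stand-alone toy version, with illustrative allowlist contents:

```ruby
# Illustrative stand-ins for ActiveStorage's configurable allowlists.
BINARY_TYPES = ["image/svg+xml"]
INLINE_TYPES = ["image/png", "image/jpeg", "image/gif", "text/plain"]

# Toy version of service_metadata: binary-forced types are served as
# application/octet-stream attachments, non-allowlisted types as attachments
# with their own type, and only allowlisted types inline.
def metadata_for(content_type, filename)
  if BINARY_TYPES.include?(content_type)
    { content_type: "application/octet-stream", disposition: :attachment, filename: filename }
  elsif !INLINE_TYPES.include?(content_type)
    { content_type: content_type, disposition: :attachment, filename: filename }
  else
    { content_type: content_type }
  end
end

metadata_for("image/svg+xml", "logo.svg")
# => { content_type: "application/octet-stream", disposition: :attachment, filename: "logo.svg" }
```

Serving SVGs and other scriptable types as binary attachments is what prevents them from executing in the browser when a user opens an uploaded file directly.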
def service_url(expires_in: ActiveStorage.service_urls_expire_in, disposition: :inline, filename: nil, **options)
Returns the URL of the blob on the service. This URL is intended to be short-lived for security and not used directly
with users. Instead, the +service_url+ should only be exposed as a redirect from a stable, possibly authenticated URL.
Hiding the +service_url+ behind a redirect also gives you the power to change services without updating all URLs. And
it allows permanent URLs that redirect to the +service_url+ to be cached in the view.
def service_url(expires_in: ActiveStorage.service_urls_expire_in, disposition: :inline, filename: nil, **options)
  filename = ActiveStorage::Filename.wrap(filename || self.filename)

  service.url key, expires_in: expires_in, filename: filename, content_type: content_type_for_service_url,
    disposition: forced_disposition_for_service_url || disposition, **options
end
def service_url_for_direct_upload(expires_in: ActiveStorage.service_urls_expire_in)
Returns a URL that can be used to directly upload a file for this blob on the service. This URL is intended to be
short-lived for security and only generated on-demand by the client-side JavaScript responsible for doing the uploading.
def service_url_for_direct_upload(expires_in: ActiveStorage.service_urls_expire_in)
  service.url_for_direct_upload key, expires_in: expires_in, content_type: content_type, content_length: byte_size, checksum: checksum
end
def signed_id
Returns a signed ID for this blob that's suitable for reference on the client-side without fear of tampering.
def signed_id
  ActiveStorage.verifier.generate(id, purpose: :blob_id)
end
def text?
def text?
  content_type.start_with?("text")
end
def unfurl(io, identify: true) #:nodoc:
def unfurl(io, identify: true) #:nodoc:
  self.checksum     = compute_checksum_in_chunks(io)
  self.content_type = extract_content_type(io) if content_type.nil? || identify
  self.byte_size    = io.size
  self.identified   = true
end
def upload(io, identify: true)
Uploads the +io+ to the service on the +key+ for this blob. Blobs are intended to be immutable, so you shouldn't be
using this method after a file has already been uploaded to fit with a blob. If you want to create a derivative blob,
you should instead simply create a new blob based on the old one.

Prior to uploading, we compute the checksum, which is sent to the service for transit integrity validation. If the
checksum does not match what the service receives, an exception will be raised. We also measure the size of the +io+
and store that in +byte_size+ on the blob record. The content type is automatically extracted from the +io+ unless
you specify a +content_type+ and pass +identify+ as false.

Normally, you do not have to call this method directly at all. Use the +create_and_upload!+ class method instead.
If you do use this method directly, make sure you are using it on a persisted Blob as otherwise another blob's
data might get overwritten on the service.
def upload(io, identify: true)
  unfurl io, identify: identify
  upload_without_unfurling io
end
def upload_without_unfurling(io) #:nodoc:
def upload_without_unfurling(io) #:nodoc:
  service.upload key, io, checksum: checksum, **service_metadata
end
def video?
def video?
  content_type.start_with?("video")
end