class Net::SFTP::Operations::Upload

A general purpose uploader module for Net::SFTP. It can upload IO objects,
files, and even entire directory trees via SFTP, and provides a flexible
progress reporting mechanism.

To upload a single file to the remote server, simply specify both the
local and remote paths:

  uploader = sftp.upload("/path/to/local.txt", "/path/to/remote.txt")

By default, this operates asynchronously, so if you want to block until
the upload finishes, you can use the "bang" variant:

  sftp.upload!("/path/to/local.txt", "/path/to/remote.txt")

Or, if you have multiple uploads that you want to run in parallel, you can
employ the #wait method of the returned object:

  uploads = %w(file1 file2 file3).map { |f| sftp.upload(f, "remote/#{f}") }
  uploads.each { |u| u.wait }

To upload an entire directory tree, recursively, simply pass the directory
path as the first parameter:

  sftp.upload!("/path/to/directory", "/path/to/remote")

This will upload "/path/to/directory", its contents, its subdirectories,
and their contents, recursively, to "/path/to/remote" on the remote server.

To upload a directory without first creating the remote directory, pass
the :mkdir option as false:

  sftp.upload!("/path/to/directory", "/path/to/remote", :mkdir => false)

If you want to send data to a file on the remote server, but the data is
in memory, you can pass an IO object and upload its contents:

  require 'stringio'
  io = StringIO.new(data)
  sftp.upload!(io, "/path/to/remote")

The following options are supported:

* :progress - either a block or an object to act as a progress
  callback. See the discussion of "progress monitoring" below.
* :requests - the number of pending SFTP requests to allow at
  any given time. When uploading an entire directory tree recursively,
  this will default to 16, otherwise it will default to 2. Setting this
  higher might improve throughput. Reducing it will reduce throughput.
* :read_size - the maximum number of bytes to read at a time
  from the source. Increasing this value might improve throughput. It
  defaults to 32,000 bytes.
* :name - the filename to report to the progress monitor when
  an IO object is given as local. This defaults to "<memory>".

== Progress Monitoring

Sometimes it is desirable to track the progress of an upload. There are
two ways to do this: either using a callback block, or a special custom
object.

Using a block is pretty straightforward:

  sftp.upload!("local", "remote") do |event, uploader, *args|
    case event
    when :open then
      # args[0] : file metadata
      puts "starting upload: #{args[0].local} -> #{args[0].remote} (#{args[0].size} bytes)"
    when :put then
      # args[0] : file metadata
      # args[1] : byte offset in remote file
      # args[2] : data being written (as string)
      puts "writing #{args[2].length} bytes to #{args[0].remote} starting at #{args[1]}"
    when :close then
      # args[0] : file metadata
      puts "finished with #{args[0].remote}"
    when :mkdir then
      # args[0] : remote path name
      puts "creating directory #{args[0]}"
    when :finish then
      puts "all done!"
    end
  end

However, for more complex implementations (e.g., GUI interfaces and such)
a block can become cumbersome. In those cases, you can create custom
handler objects that respond to certain methods, and then pass your handler
to the uploader:

  class CustomHandler
    def on_open(uploader, file)
      puts "starting upload: #{file.local} -> #{file.remote} (#{file.size} bytes)"
    end

    def on_put(uploader, file, offset, data)
      puts "writing #{data.length} bytes to #{file.remote} starting at #{offset}"
    end

    def on_close(uploader, file)
      puts "finished with #{file.remote}"
    end

    def on_mkdir(uploader, path)
      puts "creating directory #{path}"
    end

    def on_finish(uploader)
      puts "all done!"
    end
  end

  sftp.upload!("local", "remote", :progress => CustomHandler.new)

If you omit any of those methods, the progress updates for those missing
events will be ignored. You can create a catchall method named "call" for
those, instead.
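A handler that mixes specific on_<event> methods with a catchall "call" method might be sketched as follows. This is a minimal, hypothetical example (EventLogger is not part of Net::SFTP): the uploader calls on_finish directly because it exists, and routes every other event through #call.

```ruby
# Hypothetical progress handler: one specific handler plus a catchall.
# Events with no matching on_<event> method are delivered to #call.
class EventLogger
  attr_reader :events

  def initialize
    @events = []
  end

  # Handled specifically: the uploader invokes this for :finish.
  def on_finish(uploader)
    @events << :finish
  end

  # Catchall for all other events (:open, :put, :close, :mkdir).
  def call(event, uploader, *args)
    @events << event
  end
end

# sftp.upload!("local", "remote", :progress => EventLogger.new)
```

Note the argument order: the specific handlers receive the uploader first, while the catchall receives the event name first, then the uploader.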
# Returns the property with the given name. This allows Upload instances
# to store and retrieve arbitrary caller-defined state.
def [](name)
  @properties[name.to_sym]
end
# Sets the given property to the given value. This allows Upload instances
# to store and retrieve arbitrary caller-defined state.
def []=(name, value)
  @properties[name.to_sym] = value
end
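The two accessors above behave like a small hash keyed by symbols: because names are normalized with to_sym, a string name and the corresponding symbol address the same slot. A self-contained sketch of that behavior (PropertyStore is an illustrative stand-in, not a Net::SFTP class):

```ruby
# Minimal sketch of the property store behind #[] and #[]=.
# Names are normalized to symbols, so "foo" and :foo are the same key.
class PropertyStore
  def initialize(properties={})
    @properties = properties
  end

  def [](name)
    @properties[name.to_sym]
  end

  def []=(name, value)
    @properties[name.to_sym] = value
  end
end
```

An Upload instance receives its initial properties via the :properties option, as seen in #initialize.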
# Forcibly stops the upload, clearing all pending work and marking the
# uploader inactive.
def abort!
  @active = 0
  @stack.clear
  @uploads.clear
end
# Returns true if the uploader is currently running. When this returns
# false, the upload has finished (or was aborted).
def active?
  @active > 0 || @stack.any?
end
# Returns all directory entries for the given path, removing the '.'
# and '..' relative entries.
def entries_for(local)
  ::Dir.entries(local).reject { |v| %w(. ..).include?(v) }
end
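The effect of entries_for can be seen against a throwaway directory: only the "." and ".." relative entries are filtered out, so hidden files are still uploaded.

```ruby
require 'tmpdir'
require 'fileutils'

# Same filtering as entries_for: Dir.entries includes "." and "..",
# which are rejected; ordinary and hidden files remain.
def entries_for(local)
  ::Dir.entries(local).reject { |v| %w(. ..).include?(v) }
end

Dir.mktmpdir do |dir|
  FileUtils.touch(File.join(dir, "a.txt"))
  FileUtils.touch(File.join(dir, ".hidden"))
  entries_for(dir).sort  # => [".hidden", "a.txt"]
end
```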
# Instantiates a new uploader process on top of the given SFTP session.
# +local+ is either an IO object containing data to upload, or a string
# identifying a file or directory on the local host. +remote+ is a string
# identifying the location on the remote host that the upload should
# target.
#
# This will return immediately, and requires that the SSH event loop be
# run in order to effect the upload. (See #wait.)
def initialize(sftp, local, remote, options={}, &progress) #:nodoc:
  @sftp = sftp
  @local = local
  @remote = remote
  @progress = progress || options[:progress]
  @options = options
  @properties = options[:properties] || {}
  @active = 0

  self.logger = sftp.logger

  @uploads = []
  @recursive = local.respond_to?(:read) ? false : ::File.directory?(local)

  if recursive?
    @stack = [entries_for(local)]
    @local_cwd = local
    @remote_cwd = remote

    @active += 1
    if @options[:mkdir]
      sftp.mkdir(remote) do |response|
        @active -= 1
        raise StatusException.new(response, "mkdir `#{remote}'") unless response.ok?
        (options[:requests] || RECURSIVE_READERS).to_i.times do
          break unless process_next_entry
        end
      end
    else
      @active -= 1
      process_next_entry
    end
  else
    raise ArgumentError, "expected a file to upload" unless local.respond_to?(:read) || ::File.exist?(local)
    @stack = [[local]]
    process_next_entry
  end
end
# Called when a +close+ request finishes. Raises a StatusException if the
# close failed, otherwise it calls #process_next_entry to continue the
# upload.
def on_close(response)
  @active -= 1
  file = response.request[:file]
  raise StatusException.new(response, "close #{file.remote}") unless response.ok?
  process_next_entry
end
# Called when a +mkdir+ request finishes, successfully or otherwise.
# If the request failed, this will raise a StatusException; otherwise
# #process_next_entry is called to continue the upload.
def on_mkdir(response)
  @active -= 1
  dir = response.request[:dir]
  raise StatusException.new(response, "mkdir #{dir}") unless response.ok?
  process_next_entry
end
# Called when an +open+ request finishes. Raises StatusException if the
# open failed, otherwise it calls #write_next_chunk to begin sending
# data to the remote file.
def on_open(response)
  @active -= 1
  file = response.request[:file]
  raise StatusException.new(response, "open #{file.remote}") unless response.ok?

  file.handle = response[:handle]

  @uploads << file
  write_next_chunk(file)

  if !recursive?
    (options[:requests] || SINGLE_FILE_READERS).to_i.times { write_next_chunk(file) }
  end
end
# Called when a +write+ request finishes. Raises StatusException if the
# write failed, otherwise it calls #write_next_chunk to continue the
# upload.
def on_write(response)
  @active -= 1
  file = response.request[:file]
  raise StatusException.new(response, "write #{file.remote}") unless response.ok?
  write_next_chunk(file)
end
# Opens the given local file (or prepares the given IO object), reports an
# :open event to the progress monitor, and issues an +open+ request for
# the corresponding remote path.
def open_file(local, remote)
  @active += 1

  if local.respond_to?(:read)
    file = local
    name = options[:name] || "<memory>"
  else
    file = ::File.open(local, "rb")
    name = local
  end

  if file.respond_to?(:stat)
    size = file.stat.size
  else
    size = file.size
  end

  metafile = LiveFile.new(name, remote, file, size)
  update_progress(:open, metafile)

  request = sftp.open(remote, "w", &method(:on_open))
  request[:file] = metafile
end
# Examines the stack and determines what action to take next. This is
# the driver of the upload state machine.
def process_next_entry
  if @stack.empty?
    if @uploads.any?
      write_next_chunk(@uploads.first)
    elsif !active?
      update_progress(:finish)
    end
    return false
  elsif @stack.last.empty?
    @stack.pop
    @local_cwd = ::File.dirname(@local_cwd)
    @remote_cwd = ::File.dirname(@remote_cwd)
    process_next_entry
  elsif recursive?
    entry = @stack.last.shift
    lpath = ::File.join(@local_cwd, entry)
    rpath = ::File.join(@remote_cwd, entry)

    if ::File.directory?(lpath)
      @stack.push(entries_for(lpath))
      @local_cwd = lpath
      @remote_cwd = rpath

      @active += 1
      update_progress(:mkdir, rpath)
      request = sftp.mkdir(rpath, &method(:on_mkdir))
      request[:dir] = rpath
    else
      open_file(lpath, rpath)
    end
  else
    open_file(@stack.pop.first, remote)
  end

  return true
end
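The stack discipline in process_next_entry (push a directory's entries, pop a frame when it empties, step back up to the parent) amounts to an iterative depth-first walk. A standalone sketch of just the traversal, with the SFTP requests stripped out:

```ruby
require 'tmpdir'
require 'fileutils'

# Iterative depth-first walk using the same stack discipline as
# process_next_entry: each stack frame holds the remaining entries of one
# directory level; popping a frame steps back to the parent directory.
def walk(root)
  files = []
  stack = [Dir.entries(root).reject { |v| %w(. ..).include?(v) }]
  cwd = root

  until stack.empty?
    if stack.last.empty?
      # This level is exhausted: pop it and return to the parent.
      stack.pop
      cwd = File.dirname(cwd)
    else
      entry = stack.last.shift
      path = File.join(cwd, entry)
      if File.directory?(path)
        # Descend: push the child directory's entries as a new frame.
        stack.push(Dir.entries(path).reject { |v| %w(. ..).include?(v) })
        cwd = path
      else
        files << path  # where the uploader would call open_file
      end
    end
  end

  files
end
```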
# The progress handler for this upload, if one was given (either a block
# or a custom handler object).
def progress; @progress; end
# Returns true if a directory tree is being uploaded, and false if only a
# single file (or IO object) is being uploaded.
def recursive?
  @recursive
end
# Attempts to notify the progress monitor (if one was given) about the
# given event, preferring a specific on_<event> handler and falling back
# to #call.
def update_progress(event, *args)
  on = "on_#{event}"
  if progress.respond_to?(on)
    progress.send(on, self, *args)
  elsif progress.respond_to?(:call)
    progress.call(event, self, *args)
  end
end
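The dispatch rule above is: prefer a specific on_<event> method, fall back to a generic #call, and silently drop the event if the monitor responds to neither. A standalone sketch of that rule (the uploader argument is omitted here for brevity):

```ruby
# Sketch of update_progress's dispatch: try on_<event> first, then a
# generic #call, otherwise silently ignore the notification.
def notify(progress, event, *args)
  on = "on_#{event}"
  if progress.respond_to?(on)
    progress.send(on, *args)
  elsif progress.respond_to?(:call)
    progress.call(event, *args)
  end
end

# A monitor with a specific handler...
specific = Object.new
def specific.on_open(*args); :specific; end

# ...a monitor with only a catchall (any callable works, e.g. a lambda)...
generic = lambda { |event, *args| :generic }

# ...and a monitor with neither is simply ignored.
```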
# Runs the SSH event loop until the upload has finished, blocking the
# caller. Returns self, so calls may be chained.
def wait
  sftp.loop { active? }
  self
end
# Attempts to send the next chunk from the given file (where +file+ is
# a LiveFile instance). When the file's IO is exhausted, a +close+
# request is issued instead.
def write_next_chunk(file)
  if file.io.nil?
    process_next_entry
  else
    @active += 1
    offset = file.io.pos
    data = file.io.read(options[:read_size] || DEFAULT_READ_SIZE)

    if data.nil?
      update_progress(:close, file)
      request = sftp.close(file.handle, &method(:on_close))
      request[:file] = file
      file.io.close
      file.io = nil
      @uploads.delete(file)
    else
      update_progress(:put, file, offset, data)
      request = sftp.write(file.handle, offset, data, &method(:on_write))
      request[:file] = file
    end
  end
end
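The read loop in write_next_chunk can be exercised against a StringIO: each call reads at most :read_size bytes at the current position, and a nil return from IO#read signals end-of-file, which is what triggers the +close+ request. A minimal sketch of that chunking:

```ruby
require 'stringio'

DEFAULT_READ_SIZE = 32_000

# Chunked reading as performed by write_next_chunk: read up to read_size
# bytes at the current offset until IO#read returns nil at end-of-file.
def each_chunk(io, read_size = DEFAULT_READ_SIZE)
  chunks = []
  loop do
    offset = io.pos
    data = io.read(read_size)
    break if data.nil?  # nil read == EOF; the uploader closes the handle here
    chunks << [offset, data]
  end
  chunks
end

each_chunk(StringIO.new("abcdefghij"), 4)
# => [[0, "abcd"], [4, "efgh"], [8, "ij"]]
```

Each [offset, data] pair corresponds to one sftp.write request in the real uploader, which is why the byte offset is reported alongside the data in the :put progress event.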