class Ollama::Commands::Embed
A command class that represents the embed API endpoint for Ollama.

This class is used to interact with the Ollama API's embed endpoint, which
generates embeddings for text input using a specified model. It inherits from
the base command structure and provides the necessary functionality to execute
embedding requests for generating vector representations of text.

@example Generating embeddings for a single text
  embed = ollama.embed(model: 'all-minilm', input: 'Why is the sky blue?')

@example Generating embeddings for multiple texts
  embed = ollama.embed(model: 'all-minilm', input: ['Why is the sky blue?', 'Why is the grass green?'])
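The examples above return a response object from the client. As a minimal,
hedged sketch of reading the vectors back, the following assumes a locally
running Ollama server at the default address and that the response object
exposes the API's embeddings field as an accessor:

  require 'ollama'

  # Assumes a local Ollama server; adjust base_url if yours differs.
  ollama = Ollama::Client.new(base_url: 'http://localhost:11434')

  # One vector is returned per input text.
  response = ollama.embed(model: 'all-minilm', input: 'Why is the sky blue?')
  vector   = response.embeddings.first # e.g. 384 floats for all-minilm
  puts vector.length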
def self.path

Returns:
- (String) -- the API endpoint path '/api/embed' for embed requests

  def self.path
    '/api/embed'
  end
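For illustration, the path is exposed as a plain class method:

  Ollama::Commands::Embed.path # => "/api/embed"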
def initialize(model:, input:, options: nil, truncate: nil, keep_alive: nil, dimensions: nil)

Parameters:
- model(String) -- the name of the model to use for generating embeddings
- input(String, Array) -- the text input(s) to generate embeddings for
- options(Ollama::Options, nil) -- optional configuration parameters for the model
- truncate(Boolean, nil) -- whether to truncate the input if it exceeds context length
- keep_alive(String, nil) -- duration to keep the model loaded in memory
- dimensions(Integer, nil) -- truncates the output embedding to the specified dimension

  def initialize(model:, input:, options: nil, truncate: nil, keep_alive: nil, dimensions: nil)
    @model, @input, @options, @truncate, @keep_alive, @dimensions =
      model, input, options, truncate, keep_alive, dimensions
    @stream = false
  end
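As a hedged illustration of constructing the command directly (the client
method ollama.embed does this for you), the following assumes Ollama::Options
accepts model parameters such as num_ctx as keywords; the values are
illustrative only:

  options = Ollama::Options.new(num_ctx: 2048)
  embed   = Ollama::Commands::Embed.new(
    model:      'all-minilm',
    input:      ['Why is the sky blue?', 'Why is the grass green?'],
    options:    options,
    truncate:   true,
    keep_alive: '5m',   # keep the model loaded for five minutes
    dimensions: 256     # optionally truncate each output vector
  )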
def perform(handler)

Returns:
- (self) -- returns the current instance after initiating the request

Parameters:
- handler(Ollama::Handler) -- the handler object responsible for processing API responses

  def perform(handler)
    @client.request(method: :post, path: self.class.path, body: to_json, stream:, handler:)
  end
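For reference, perform issues a POST request to the path above with the
command's attributes serialized as JSON; streaming is always disabled for
embeddings. The body for the single-text example looks roughly like this
(sketch only, omitting unset optional fields):

  require 'json'

  body = {
    model:  'all-minilm',
    input:  'Why is the sky blue?',
    stream: false
  }.to_json
  # => {"model":"all-minilm","input":"Why is the sky blue?","stream":false}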