class Aws::BedrockAgent::Types::InferenceConfiguration

Contains inference parameters to use when the agent invokes a
foundation model in the part of the agent sequence defined by the
`promptType`. For more information, see [Inference parameters for
foundation models][1].

[1]: docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html

@!attribute [rw] maximum_length
  The maximum number of tokens to allow in the generated response.
  @return [Integer]

@!attribute [rw] stop_sequences
  A list of stop sequences. A stop sequence is a sequence of
  characters that causes the model to stop generating the response.
  @return [Array<String>]

@!attribute [rw] temperature
  The likelihood of the model selecting higher-probability options
  while generating a response. A lower value makes the model more
  likely to choose higher-probability options, while a higher value
  makes the model more likely to choose lower-probability options.
  @return [Float]

@!attribute [rw] top_k
  While generating a response, the model determines the probability of
  the following token at each point of generation. The value that you
  set for `topK` is the number of most-likely candidates from which
  the model chooses the next token in the sequence. For example, if
  you set `topK` to 50, the model selects the next token from among
  the top 50 most likely choices.
  @return [Integer]

@!attribute [rw] top_p
  While generating a response, the model determines the probability of
  the following token at each point of generation. The value that you
  set for `Top P` determines the number of most-likely candidates from
  which the model chooses the next token in the sequence. For example,
  if you set `topP` to 0.8, the model only selects the next token from
  the top 80% of the probability distribution of next tokens.
  @return [Float]

@see docs.aws.amazon.com/goto/WebAPI/bedrock-agent-2023-06-05/InferenceConfiguration AWS API Documentation
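
In the AWS SDK for Ruby, structures like this are populated from plain hashes whose keys match the attribute names. The sketch below only builds such a hash with the attributes documented on this type; the values are illustrative, and the enclosing request that would carry it (for example, an agent's prompt override configuration) is an assumption, not shown here.

```ruby
# Hedged sketch: an inference-configuration hash using this type's
# documented attributes. Values are examples, not recommendations.
inference_configuration = {
  maximum_length: 512,                 # Integer: cap on tokens in the response
  stop_sequences: ["\nObservation:"],  # Array<String>: sequences that halt generation
  temperature: 0.7,                    # Float: lower favors higher-probability tokens
  top_k: 50,                           # Integer: choose among the 50 most likely tokens
  top_p: 0.9,                          # Float: choose from the top 90% probability mass
}

# Conventional sanity check: topP is a probability-mass cutoff, so it
# should fall between 0.0 and 1.0 (assumed bounds, not stated on this page).
raise ArgumentError, "topP out of range" unless
  (0.0..1.0).cover?(inference_configuration[:top_p])
```

A hash shaped this way can be passed wherever the SDK expects an `InferenceConfiguration`; the SDK converts hash keys to the corresponding struct members.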