class Aws::BedrockAgent::Types::PromptModelInferenceConfiguration
Contains inference configurations related to model inference for a
prompt. For more information, see [Inference parameters][1].

[1]: https://docs.aws.amazon.com/bedrock/latest/userguide/inference-parameters.html

@!attribute [rw] max_tokens
  The maximum number of tokens to return in the response.
  @return [Integer]

@!attribute [rw] stop_sequences
  A list of strings that define sequences after which the model will
  stop generating.
  @return [Array<String>]

@!attribute [rw] temperature
  Controls the randomness of the response. Choose a lower value for
  more predictable outputs and a higher value for more surprising
  outputs.
  @return [Float]

@!attribute [rw] top_p
  The percentage of most-likely candidates that the model considers
  for the next token.
  @return [Float]

@see https://docs.aws.amazon.com/goto/WebAPI/bedrock-agent-2023-06-05/PromptModelInferenceConfiguration AWS API Documentation
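A minimal usage sketch, assuming the aws-sdk-bedrockagent gem: these four fields are passed as the snake_case keys of the `text` member of a prompt variant's `inference_configuration` in operations such as `Aws::BedrockAgent::Client#create_prompt`. The region, prompt name, variant name, model ID, and template text below are illustrative assumptions, not values from this documentation.

@example Setting inference parameters on a prompt variant (illustrative)
  require "aws-sdk-bedrockagent"

  client = Aws::BedrockAgent::Client.new(region: "us-east-1")

  client.create_prompt(
    name: "example-prompt",                 # hypothetical prompt name
    variants: [
      {
        name: "variant-one",                # hypothetical variant name
        template_type: "TEXT",
        template_configuration: {
          text: {
            text: "Summarize: {{input}}",   # hypothetical template text
            input_variables: [{ name: "input" }]
          }
        },
        model_id: "anthropic.claude-3-haiku-20240307-v1:0", # example model ID
        # Maps to PromptModelInferenceConfiguration:
        inference_configuration: {
          text: {
            max_tokens: 512,          # cap on tokens returned in the response
            stop_sequences: ["END"],  # stop generating after this string
            temperature: 0.2,         # lower => more predictable output
            top_p: 0.9                # share of most-likely candidates considered
          }
        }
      }
    ]
  )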