class Tryouts::ExpectationEvaluators::PerformanceTime
Evaluator for performance time expectations using syntax: #=%> milliseconds

PURPOSE:
- Validates that test execution time meets performance thresholds
- Supports both static thresholds and dynamic expressions using timing data
- Provides performance regression testing for documentation-style tests

SYNTAX: #=%> threshold_expression
Examples:
  1 + 1 #=%> 100                 # Pass: simple addition under 100ms
  sleep(0.01) #=%> 15            # Pass: 10ms sleep under 15ms (+10% tolerance)
  Array.new(1000) #=%> 1         # Pass: array creation under 1ms
  sleep(0.005) #=%> result * 2   # Pass: 5ms execution, 10ms threshold
  sleep(0.001) #=%> 0.1          # Fail: 1ms execution exceeds 0.1ms + 10%

TIMING DATA AVAILABILITY:
In performance expectations, the timing data is available as:
- `result`: execution time in milliseconds (e.g., 5.23)
- `_`: alias for the same timing data
This allows expressions like: #=%> result * 2, #=%> _ + 10

TOLERANCE LOGIC:
- Performance expectations use "less than or equal to + 10%" logic
- Formula: actual_time_ms <= expected_limit_ms * 1.1
- Differs from strict window matching: only the upper bound matters
- Designed for performance regression testing, not precision timing

IMPLEMENTATION DETAILS:
- Timing is captured in nanoseconds for precision and displayed in milliseconds
- Uses Process.clock_gettime(Process::CLOCK_MONOTONIC, :nanosecond) for accuracy
- Expected display shows the evaluated threshold (e.g., "100", "10.5")
- Actual display shows the formatted timing (e.g., "5.23ms")
- Timing data is passed via ExpectationResult for extensibility

DESIGN DECISIONS:
- Chose "less than or equal to + 10%" over a strict window for usability
- Nanosecond capture → millisecond display for precision + readability
- Expression evaluation with timing context for flexible thresholds
- Separate evaluate method signature to receive timing data from the test batch
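The tolerance and unit-conversion arithmetic above can be illustrated with a minimal standalone sketch; the variable names below are illustrative and are not the gem's internals:

  start_ns   = Process.clock_gettime(Process::CLOCK_MONOTONIC, :nanosecond)
  sleep(0.005)
  elapsed_ns = Process.clock_gettime(Process::CLOCK_MONOTONIC, :nanosecond) - start_ns

  actual_time_ms    = elapsed_ns / 1_000_000.0  # nanosecond capture -> millisecond display
  expected_limit_ms = 10.0                      # threshold from the #=%> expression
  max_allowed_ms    = expected_limit_ms * 1.1   # "less than or equal to + 10%" tolerance
  passed            = actual_time_ms <= max_allowed_ms  # true here: ~5ms vs an 11ms ceiling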
def self.handles?(expectation_type)
  expectation_type == :performance_time
end
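As a hedged sketch (the registry constant and lookup helper here are assumptions, not part of the gem), a dispatcher could use handles? to route parsed expectation types to this evaluator:

  EVALUATORS = [Tryouts::ExpectationEvaluators::PerformanceTime]  # hypothetical registry

  def evaluator_class_for(expectation_type)
    EVALUATORS.find { |klass| klass.handles?(expectation_type) }
  end

  evaluator_class_for(:performance_time)
  # => Tryouts::ExpectationEvaluators::PerformanceTime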
def evaluate(actual_result = nil, execution_time_ns = nil)
  if execution_time_ns.nil?
    return build_result(
      passed: false,
      actual: 'No timing data available',
      expected: 'Performance measurement',
      error: 'Performance expectations require execution timing data',
    )
  end

  # Create result packet with timing data available to the expectation
  expectation_result = ExpectationResult.from_timing(actual_result, execution_time_ns)
  expected_limit_ms  = eval_expectation_content(@expectation.content, expectation_result)
  actual_time_ms     = expectation_result.execution_time_ms

  # Performance tolerance: actual <= expected + 10% (not a strict window)
  max_allowed_ms   = expected_limit_ms * 1.1
  within_tolerance = actual_time_ms <= max_allowed_ms

  build_result(
    passed: within_tolerance,
    actual: "#{actual_time_ms}ms",
    expected: expected_limit_ms,
  )
rescue StandardError => ex
  handle_evaluation_error(ex, actual_result)
end
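For context, a hedged sketch of the caller side: the test batch captures monotonic timing around the test body and hands both the result and the nanosecond duration to evaluate. Only the evaluate signature and the nanosecond unit come from the code above; run_test_case_code and evaluator are placeholders.

  start_ns = Process.clock_gettime(Process::CLOCK_MONOTONIC, :nanosecond)
  actual   = run_test_case_code  # placeholder for executing the test case body
  elapsed  = Process.clock_gettime(Process::CLOCK_MONOTONIC, :nanosecond) - start_ns

  evaluator.evaluate(actual, elapsed)  # result plus nanosecond timing, per the signature above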