PRELIMINARY RESULT Benchmark Suite v1.0.15 - Results subject to change

OpenSSL Performance Benchmark

Schmatz Algorithm Benchmarks
Last run: 2026-01-05 • Versions tested: 1.1.1w, 3.0.18, 3.1.8, 3.2.6, 3.3.5, 3.4.3, 3.5.4, 3.6.0 (3 iterations each)

RSA Key Size Comparison

Based on Martin Schmatz's (IBM) methodology. Tests RSA signing and verification at different key sizes (2048, 3072, 4096 bits).
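Numbers like these come from OpenSSL's built-in `openssl speed` tool. A minimal parsing sketch follows; the summary-line format in the comment reflects common OpenSSL builds, and the sample numbers are illustrative, not measurements from this suite:

```python
import re

# Assumed shape of an `openssl speed rsa` summary line (illustrative numbers):
#   rsa 2048 bits 0.000678s 0.000020s   1475.1  49500.2
# Columns: sign time, verify time, signatures/s, verifications/s.
SPEED_LINE = re.compile(
    r"rsa\s+(\d+)\s+bits\s+[\d.]+s\s+[\d.]+s\s+([\d.]+)\s+([\d.]+)"
)

def parse_rsa_speed(output: str) -> dict:
    """Map key size in bits -> (signatures/s, verifications/s)."""
    return {int(m.group(1)): (float(m.group(2)), float(m.group(3)))
            for m in SPEED_LINE.finditer(output)}

sample = "rsa 2048 bits 0.000678s 0.000020s   1475.1  49500.2"
rates = parse_rsa_speed(sample)
```

On a machine with OpenSSL on the PATH, the same parser can be fed live output, e.g. from `subprocess.run(["openssl", "speed", "rsa2048"], capture_output=True, text=True).stdout`.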

RSA Sign Performance

RSA Verify Performance

ECDSA Curve Comparison

Tests ECDSA performance across curves of different sizes (P-256, P-384, P-521).

ECDSA Sign Performance

ECDSA Verify Performance

🔬 Post-Quantum: ML-DSA (Dilithium) Considerations

In his presentation, Martin Schmatz raised important concerns about Dilithium (ML-DSA): its k-values and its rejection-sampling retry mechanism. This is a critical consideration for stress testing post-quantum signature algorithms.

⚠️ Schmatz's Concern: Variable Signing Latency

Unlike classical algorithms (RSA, ECDSA) where signing time is deterministic, Dilithium uses rejection sampling that may require multiple internal retries. This creates timing variance that could impact systems under high load, particularly for latency-sensitive applications.

We've implemented dedicated testing for this: See the Post-Quantum Cryptography page for detailed ML-DSA rejection sampling analysis, including:

  • Coefficient of Variation (CV%) - Measures signing time variance
  • P99, P99.9, P99.99 latencies - Tail latency analysis for capacity planning
  • Outlier detection - Operations taking >2× the mean time (indicating many retries)
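All three statistics above can be derived from raw per-operation timings. A minimal sketch, using synthetic latencies and a simple nearest-rank percentile (function and field names are illustrative):

```python
import statistics

def latency_report(samples_us):
    """Summarize signing latencies (microseconds): CV%, tail percentiles, outliers."""
    mean = statistics.mean(samples_us)
    cv_pct = 100.0 * statistics.stdev(samples_us) / mean
    ordered = sorted(samples_us)

    def pct(p):
        # Nearest-rank percentile on the sorted samples.
        idx = min(len(ordered) - 1, int(p / 100.0 * len(ordered)))
        return ordered[idx]

    return {
        "cv_pct": cv_pct,
        "p99": pct(99), "p99_9": pct(99.9), "p99_99": pct(99.99),
        # Operations taking more than twice the mean, i.e. likely multi-retry signs.
        "outliers_gt_2x_mean": sum(1 for s in samples_us if s > 2.0 * mean),
    }

# Synthetic example: mostly ~100 us signs, with 10 slow "retry" operations.
samples = [100.0] * 9990 + [250.0] * 10
report = latency_report(samples)
```

Note that tail percentiles need many samples to be meaningful: at P99.99, a 10,000-sample run sees only about one tail event, which is why a long run collecting ~100,000+ samples is needed.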

The benchmark runs for 90 seconds to collect ~108,000 samples for statistically robust P99.99 measurements.

Block Size Sensitivity (AES-256-GCM)

What This Chart Shows: This benchmark measures AES-256-GCM encryption throughput across different block sizes (16 bytes to 8KB) to reveal how cryptographic operations scale with data size.

Key Insights:

  • Small blocks (16-64 bytes) stress initialization overhead - each encryption requires Provider setup, key scheduling, and context creation
  • Medium blocks (256 bytes - 1KB) show the transition point where throughput begins to increase
  • Large blocks (8KB+) achieve maximum throughput by amortizing initialization costs across more data
  • The gap between versions reveals Provider architecture overhead in OpenSSL 3.x compared to 1.1.1w

Real-World Impact: Applications encrypting small messages (e.g., individual database fields, IoT sensor data) will see much lower throughput than bulk encryption (file encryption, large API payloads).
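The amortization effect can be illustrated with a simple cost model in which every encrypt call pays a fixed setup cost plus a per-byte cost. The setup time and byte rate below are made-up parameters for illustration, not measured values:

```python
def modeled_throughput_kb_s(block_bytes, setup_us=2.0, bytes_per_us=3000.0):
    """Model: each encrypt call pays a fixed setup cost (context creation,
    key scheduling) plus a per-byte processing cost. Returns modeled
    throughput in KB/s for a stream of equal-sized blocks."""
    time_us = setup_us + block_bytes / bytes_per_us
    return (block_bytes / 1024.0) / (time_us / 1_000_000.0)

for size in (16, 64, 256, 1024, 8192):
    print(f"{size:5d} B -> {modeled_throughput_kb_s(size):12.0f} KB/s")
```

Under this model, small blocks spend almost all their time in setup, so throughput rises steeply with block size before flattening out as the per-byte cost dominates, matching the shape described above.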

Performance Data (KB/s)

Open Source Benchmark
Found a problem? Have an improvement?
Fork the repository and submit a pull request!
Licensed under Apache 2.0 • Community-driven development • v1.0.15