Benchmark Tool
Use the Klever Benchmark Tool to check if your machine is ready to run a validator node.
Follow the instructions at https://docs.docker.com/engine/install to install Docker before continuing.
Step 1 — Create the benchmark folder
The benchmark tool uses a local folder to write temporary disk test files. Create it in your working directory:
mkdir benchmark
Give it the correct ownership so the Docker container can read and write to it (changing the owner requires root privileges, so you may need sudo):
chown 999 benchmark/
Your folder structure should look like this:
~/
└── benchmark/
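To double-check that the ownership took effect before running the container, you can inspect the folder (a quick check, assuming a standard Linux shell):

ls -ld benchmark/

The owner column should show 999; it usually appears as a bare number, since no local user name maps to that UID.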
Step 2 — Run the benchmark
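If you have used the image before, you can refresh it first so the benchmark runs against the latest build (optional; docker run pulls the image automatically if it is not present locally):

docker pull kleverapp/klever-go:latest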
Run all benchmarks with the default settings:
docker run \
  -v $(pwd)/benchmark:/opt/klever-blockchain/benchmark \
  kleverapp/klever-go:latest \
  /usr/local/bin/benchmark \
  --disk-dir /opt/klever-blockchain/benchmark
The tool will run six test categories one by one. Each one produces a result with a verdict tag — [OK], [!!], or [XX] — and a score. At the end, a final grade is printed.
Step 3 — Understand the results
Verdicts
Each metric is compared against two thresholds:
| Icon | Label | Meaning |
|---|---|---|
| [OK] | PASS | Meets production validator requirements |
| [!!] | WARN | Minimum requirements met but below recommended |
| [XX] | FAIL | Does not meet validator requirements |
Final grade
The tool adds up a weighted score across all categories (total: 1,000 points) and assigns a grade:
| Grade | Score | Description |
|---|---|---|
| S | ≥ 90 % | Elite — top-tier validator hardware |
| A | ≥ 75 % | Excellent — production-ready for high-traffic networks |
| B | ≥ 60 % | Good — suitable for standard validator operation |
| C | ≥ 45 % | Acceptable — meets minimum validator requirements |
| D | ≥ 30 % | Marginal — several metrics below recommended levels |
| F | < 30 % | Insufficient — does not meet validator requirements |
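For example, a machine that scores 720 of the 1,000 weighted points (72 %) falls in the ≥ 60 % band and receives grade B.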
What each category tests
| Category | What it measures | Key metrics |
|---|---|---|
| Goroutine | CPU scalability via parallel SHA-256 hashing | Efficiency at NumCPU workers |
| Disk I/O | Sequential and random read/write throughput | MB/s, IOPS |
| Network | TCP loopback latency and streaming throughput | P50/P99 µs, MB/s |
| KV Store | In-memory state-access patterns (80/20 read-write) | ops/s |
| Memory | DRAM bandwidth, random latency, allocator speed | GB/s, ns, M allocs/s |
| BigNum / FPU | 2048-bit modexp/modmul and float64 transcendentals | ops/s |
Step 4 — Save the report
To keep a copy of the results, output the report as JSON and save it to a file:
docker run \
  -v $(pwd)/benchmark:/opt/klever-blockchain/benchmark \
  kleverapp/klever-go:latest \
  /usr/local/bin/benchmark \
  --disk-dir /opt/klever-blockchain/benchmark \
  --output json > report-$(hostname)-$(date +%Y%m%d).json
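To sanity-check the saved report, you can pretty-print it, for example with jq if it is installed (the exact fields depend on the tool's JSON schema):

jq . report-*.json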
Advanced options
You can skip specific categories or tune the test parameters using flags.
Run only disk and network tests:
docker run \
  -v $(pwd)/benchmark:/opt/klever-blockchain/benchmark \
  kleverapp/klever-go:latest \
  /usr/local/bin/benchmark \
  --disk-dir /opt/klever-blockchain/benchmark \
  --skip-goroutine --skip-kv --skip-memory --skip-bignum
Increase concurrency and test duration:
docker run \
  -v $(pwd)/benchmark:/opt/klever-blockchain/benchmark \
  kleverapp/klever-go:latest \
  /usr/local/bin/benchmark \
  --disk-dir /opt/klever-blockchain/benchmark \
  --goroutines 128 --duration 5
All available flags
| Flag | Default | Description |
|---|---|---|
| --goroutines | NumCPU×4 | Maximum number of concurrent goroutines |
| --duration | 3 | Duration in seconds per concurrency level |
| --disk-dir | auto temp dir | Directory for disk I/O test files |
| --disk-size | 256 | Sequential test file size in MB |
| --skip-goroutine | — | Skip CPU scalability benchmark |
| --skip-disk | — | Skip disk I/O benchmark |
| --skip-network | — | Skip network TCP loopback benchmark |
| --skip-kv | — | Skip KV store benchmark |
| --skip-memory | — | Skip memory bandwidth and latency benchmark |
| --skip-bignum | — | Skip big-number / FPU benchmark |
| --output | text | Output format: text or json |
| --verbose | — | Enable verbose logging |
| --version | — | Print version and exit |
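The flags can be combined freely. As an illustration (the values below are arbitrary examples, not recommendations), the following run uses a 1 GB sequential test file, verbose logging, and JSON output:

docker run \
  -v $(pwd)/benchmark:/opt/klever-blockchain/benchmark \
  kleverapp/klever-go:latest \
  /usr/local/bin/benchmark \
  --disk-dir /opt/klever-blockchain/benchmark \
  --disk-size 1024 --verbose \
  --output json > report.json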