JAM

Conformance Performance

Important Note

This leaderboard highlights performance differences between JAM implementations. All implementations are works in progress and none are fully conformant yet. The rankings serve to track relative performance improvements over time.

Performance Comparison

All implementations are shown relative to polkajam (aggregate weighted scores; see methodology below).

Snapshot: Apr 11, 8:24 AM (commit 1062052)
Rank  Team              Language    Relative Performance  Time (ms)
1     Jamzilla          Go          baseline              2.19
2     JavaJAM           Java        1.5x slower           3.28
3     jamzilla-int      -           1.3x slower           2.88
4     TurboJam          C++         3.1x slower           6.73
5     Vinwolf           Rust        4.7x slower           10.35
6     Typeberry         TypeScript  5.4x slower           11.84
7     JAM Forge         Scala       7.3x slower           16.04
8     jotl              -           10.4x slower          22.81
9     Boka              Swift       13.3x slower          29.18
10    PyJAMaz           Python      14.7x slower          32.22
11    jampy-recompiler  -           19.1x slower          41.82
12    New JAMneration   Go          17.5x slower          38.24
13    GrayMatter        Elixir      17.9x slower          39.11
14    Jam4s             Scala       21.5x slower          47.18
15    JamPy             Python      22.2x slower          48.64
16    PBnJAM            TypeScript  34.5x slower          75.56
(Chart: linear scale, lower is better; legend marks the P50/P90/P99 percentiles and the relative bands <1.2x, <2x, <10x, >50x.)

Performance Rankings

Baseline: polkajam

Rank  Team              Language    Score  P50 (ms)  P90 (ms)  Relative Performance
1     Jamzilla          Go          3.0    1.54      3.40      baseline
2     JavaJAM           Java        3.9    3.01      5.24      1.5x slower
3     jamzilla-int      -           4.3    1.79      5.19      1.3x slower
4     TurboJam          C++         7.8    5.59      10.71     3.1x slower
5     Vinwolf           Rust        13.0   10.29     17.65     4.7x slower
6     Typeberry         TypeScript  15.3   10.03     21.28     5.4x slower
7     JAM Forge         Scala       21.5   14.55     22.97     7.3x slower
8     jotl              -           32.3   17.16     39.09     10.4x slower
9     Boka              Swift       39.0   25.31     48.12     13.3x slower
10    PyJAMaz           Python      42.5   35.21     49.59     14.7x slower
11    jampy-recompiler  -           55.3   31.99     79.83     19.1x slower
12    New JAMneration   Go          55.5   26.81     61.02     17.5x slower
13    GrayMatter        Elixir      58.9   26.40     62.83     17.9x slower
14    Jam4s             Scala       64.3   44.21     82.60     21.5x slower
15    JamPy             Python      66.1   36.78     95.36     22.2x slower
16    PBnJAM            TypeScript  91.0   61.60     114.04    34.5x slower

Audit Time Calculator

Estimated audit completion time per implementation, scaled from the time required for polkajam to complete an audit.

Rank  Team              Audit Time (days)
1     Jamzilla          3.0
2     JavaJAM           4.5
3     jamzilla-int      3.9
4     TurboJam          9.2
5     Vinwolf           14.2
6     Typeberry         16.2
7     JAM Forge         22.0
8     jotl              31.2
9     Boka              39.9
10    PyJAMaz           44.1
11    jampy-recompiler  57.3
12    New JAMneration   52.4
13    GrayMatter        53.6
14    Jam4s             64.6
15    JamPy             66.6
16    PBnJAM            103.4

Note: These calculations show the real-world impact of performance differences on audit requirements.
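The arithmetic behind these figures is linear scaling: an implementation that is k times slower than the baseline needs roughly k times as long to complete the same audit. A minimal sketch (the helper name is ours; the 3.0-day figure is Jamzilla's entry above, and recomputed values can differ from the table in the last digit because the relative factors shown are rounded):

```python
def audit_days(baseline_days: float, relative_slowdown: float) -> float:
    """Audit time scales linearly with relative slowdown."""
    return baseline_days * relative_slowdown

# Using the fastest entry (Jamzilla, 3.0 days) as the reference point:
print(round(audit_days(3.0, 4.7), 1))   # Vinwolf at 4.7x slower: roughly 14.1 days
print(round(audit_days(3.0, 34.5), 1))  # PBnJAM at 34.5x slower: roughly 103.5 days
```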

Scoring Methodology

Our scoring system uses a weighted scheme over the full performance distribution, prioritizing consistent, predictable performance by combining multiple statistical metrics:

Metric           Weight  Measures
Median (P50)     35%     Typical performance
90th Percentile  25%     Consistency
Mean             20%     Average
99th Percentile  10%     Worst case
Consistency      10%     Lower variance

How it works:

  1. Performance measurements are based on the public W3F test vector traces.
  2. For each benchmark, we calculate a weighted score using the metrics above.
  3. We aggregate metrics across all benchmarks using the geometric mean.
  4. Teams are ranked by their final weighted score (lower is better).
  5. Polkajam (interpreted) serves as the baseline (1.0x) for relative comparisons.

Note: Only teams with data for all four benchmarks (Safrole, Fallback, Storage, Storage Light) are included in the overview. Zero values are excluded from calculations as they likely represent measurement errors.
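The steps above can be sketched as follows. This is a minimal illustration, not the dashboard's actual code; the metric keys and sample values are our own:

```python
import math

# Weights from the methodology table above.
WEIGHTS = {"p50": 0.35, "p90": 0.25, "mean": 0.20, "p99": 0.10, "consistency": 0.10}

def benchmark_score(metrics: dict) -> float:
    """Weighted score for a single benchmark; lower is better."""
    return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

def aggregate_score(per_benchmark: list) -> float:
    """Geometric mean across benchmarks (Safrole, Fallback, Storage, Storage Light).

    Zero values are excluded, as they likely represent measurement errors.
    """
    scores = [s for s in per_benchmark if s > 0]
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

# A team at 4x the baseline on one benchmark and 1x on another lands at
# roughly 2x overall under the geometric mean (not 2.5x, as a plain average
# would give), which dampens the impact of a single outlier benchmark.
baseline = {k: 1.0 for k in WEIGHTS}
four_x = {k: 4.0 for k in WEIGHTS}
print(aggregate_score([benchmark_score(four_x), benchmark_score(baseline)]))
```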

Performance data updated regularly. Version: 0.7.2 | Last updated: Apr 11, 2026, 8:24 AM | Source data from: Apr 11, 2026

Testing protocol conformance at scale. Learn more at jam-conformance | Commit 1062052