JAM Conformance Performance

Important Note

This leaderboard highlights performance differences between JAM implementations. All implementations are works in progress and none are fully conformant yet. The rankings serve to track relative performance improvements over time.

Performance Comparison

All implementations shown relative to polkajam (aggregate weighted scores; see methodology below)

Snapshot: Mar 21, 8:14 AM (commit 41e01f0)

Rank  Team              Language    Relative       Aggregate time
1     Jamzilla          Go          baseline       2.01 ms
2     jamzilla-int      -           1.4x slower    2.91 ms
3     JavaJAM           Java        1.9x slower    3.86 ms
4     TurboJam          C++         4.8x slower    9.74 ms
5     Vinwolf           Rust        5.1x slower    10.34 ms
6     Typeberry         TypeScript  5.8x slower    11.69 ms
7     JAM Forge         Scala       8.7x slower    17.58 ms
8     jotl              -           11.6x slower   23.32 ms
9     Boka              Swift       14.5x slower   29.32 ms
10    PyJAMaz           Python      16.1x slower   32.35 ms
11    jampy-recompiler  -           20.8x slower   41.89 ms
12    New JAMneration   Go          20.0x slower   40.33 ms
13    Jam4s             Scala       23.8x slower   47.90 ms
14    JamPy             Python      24.2x slower   48.69 ms
15    GrayMatter        Elixir      22.3x slower   44.91 ms
16    PBnJAM            TypeScript  38.1x slower   76.76 ms

Lower is better.

Performance Rankings

Baseline: polkajam

Rank  Team              Language    Score  P50 (ms)  P90 (ms)  Relative performance
1     Jamzilla          Go          2.7    1.28      3.09      baseline
2     jamzilla-int      -           4.3    1.65      5.40      1.4x slower
3     JavaJAM           Java        4.5    3.95      5.65      1.9x slower
4     TurboJam          C++         12.3   7.21      18.61     4.8x slower
5     Vinwolf           Rust        13.0   10.30     17.68     5.1x slower
6     Typeberry         TypeScript  15.0   9.80      21.11     5.8x slower
7     JAM Forge         Scala       23.0   15.24     26.98     8.7x slower
8     jotl              -           32.6   17.62     39.29     11.6x slower
9     Boka              Swift       38.7   25.51     47.37     14.5x slower
10    PyJAMaz           Python      42.8   35.26     49.47     16.1x slower
11    jampy-recompiler  -           55.2   32.27     79.57     20.8x slower
12    New JAMneration   Go          57.5   28.10     63.67     20.0x slower
13    Jam4s             Scala       65.4   45.26     83.69     23.8x slower
14    JamPy             Python      65.9   36.57     95.90     24.2x slower
15    GrayMatter        Elixir      70.0   31.55     84.65     22.3x slower
16    PBnJAM            TypeScript  92.1   63.01     116.62    38.1x slower

Audit Time Calculator

Estimated audit completion time per implementation (the baseline audit time scaled by each team's relative slowdown)

Rank  Team              Audit time
1     Jamzilla          3.0 d
2     jamzilla-int      4.3 d
3     JavaJAM           5.7 d
4     TurboJam          14.5 d
5     Vinwolf           15.4 d
6     Typeberry         17.4 d
7     JAM Forge         26.2 d
8     jotl              34.7 d
9     Boka              43.6 d
10    PyJAMaz           48.2 d
11    jampy-recompiler  62.4 d
12    New JAMneration   60.0 d
13    Jam4s             71.3 d
14    JamPy             72.5 d
15    GrayMatter        66.9 d
16    PBnJAM            114.3 d

Note: These calculations show the real-world impact of performance differences on audit requirements.
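The scaling behind these figures can be sketched as follows; this is an illustrative reconstruction (the function name and the assumption that audit time is simply the baseline time multiplied by each team's relative slowdown are ours, inferred from the numbers above), not the dashboard's actual code:

```python
# Illustrative sketch: audit time scales linearly with relative slowdown.
# BASELINE_AUDIT_DAYS is taken from the baseline (Jamzilla) row above.

BASELINE_AUDIT_DAYS = 3.0

def audit_days(relative_slowdown: float,
               baseline_days: float = BASELINE_AUDIT_DAYS) -> float:
    """Scale the baseline audit time by a team's relative slowdown factor."""
    return baseline_days * relative_slowdown

print(f"{audit_days(1.9):.1f}d")   # JavaJAM, 1.9x slower -> 5.7d
print(f"{audit_days(14.5):.1f}d")  # Boka, 14.5x slower -> 43.5d
                                   # (table shows 43.6d; displayed factors are rounded)
```

Small discrepancies against the table (e.g. 43.5 vs 43.6 days) come from the slowdown factors being rounded to one decimal place in the display.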

Scoring Methodology

The weighted scoring system considers the full performance distribution. It prioritizes consistent, predictable performance by weighting multiple statistical metrics:

Metric           Weight  Measures
Median (P50)     35%     Typical performance
90th percentile  25%     Consistency
Mean             20%     Average performance
99th percentile  10%     Worst case
Consistency      10%     Lower variance

How it works:

  1. Performance measurements are based on the public W3F test vector traces.
  2. For each benchmark, we calculate a weighted score using the metrics above.
  3. We use the geometric mean across all benchmarks to aggregate metrics.
  4. Teams are ranked by their final weighted score (lower is better).
  5. Polkajam (interpreted) serves as the baseline (1.0x) for relative comparisons.
Note: Only teams with data for all four benchmarks (Safrole, Fallback, Storage, Storage Light) are included in the overview. Zero values are excluded from calculations as they likely represent measurement errors.
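A minimal sketch of this scoring scheme, assuming the published weights and a geometric mean over the four benchmarks; all function names and the shape of the per-benchmark stats are illustrative, and how the dashboard computes its "consistency" value is not specified here (we treat it as just another input number):

```python
import math

# Weights from the methodology table above (sum to 1.0).
WEIGHTS = {"p50": 0.35, "p90": 0.25, "mean": 0.20, "p99": 0.10, "consistency": 0.10}

def benchmark_score(stats: dict) -> float:
    """Weighted score for one benchmark; lower is better.

    `stats` maps each metric name to its measured value, e.g.
    {"p50": 1.28, "p90": 3.09, "mean": ..., "p99": ..., "consistency": ...}.
    """
    return sum(WEIGHTS[k] * stats[k] for k in WEIGHTS)

def aggregate_score(per_benchmark: list[dict]) -> float:
    """Geometric mean of per-benchmark weighted scores
    (e.g. Safrole, Fallback, Storage, Storage Light)."""
    scores = [benchmark_score(s) for s in per_benchmark]
    return math.prod(scores) ** (1 / len(scores))
```

The geometric mean keeps one unusually slow benchmark from dominating the aggregate the way an arithmetic mean would, which matches the stated goal of rewarding consistent performance.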

Performance data is updated regularly. Version: 0.7.2 | Last updated: Mar 21, 2026, 8:14 AM | Source data from: Mar 21, 2026

Testing protocol conformance at scale. Learn more at jam-conformance | Commit 41e01f0 | View all clients | Download raw data