JAM

Conformance Performance

Important Note

This leaderboard highlights performance differences between JAM implementations. All implementations are works in progress and none are fully conformant yet. The rankings serve to track relative performance improvements over time.

Performance Comparison

All implementations relative to polkajam (aggregate weighted scores; see methodology below)

Snapshot: May 4, 9:04 AM | Commit 1062052

Rank | Team | Language | Relative Performance | Time (ms)
1 | Jamzilla | Go | baseline | 0.92
2 | jamzilla-int | | 1.4x slower | 1.25
3 | JavaJAM | Java | 1.8x slower | 1.65
4 | TurboJam | C++ | 2.7x slower | 2.48
5 | Vinwolf | Rust | 4.4x slower | 4.01
6 | Typeberry | TypeScript | 8.7x slower | 8.00
7 | JAM Forge | Scala | 8.8x slower | 8.14
8 | Boka | Swift | 13.9x slower | 12.76
9 | jotl | | 16.2x slower | 14.92
10 | New JAMneration | Go | 19.9x slower | 18.30
11 | PyJAMaz | Python | 22.1x slower | 20.38
12 | jampy-recompiler | | 23.8x slower | 21.88
13 | JamPy | Python | 27.8x slower | 25.60
14 | Jam4s | Scala | 29.5x slower | 27.15
15 | GrayMatter | Elixir | 27.5x slower | 25.36
16 | PBnJAM | TypeScript | 59.9x slower | 55.16

Lower is better (shown on a logarithmic scale in the original chart).

Performance Rankings

Baseline: polkajam

Rank | Team | Language | Score | P50 (ms) | P90 (ms) | Relative Performance
1 | Jamzilla | Go | 1.2 | 0.71 | 1.46 | baseline
2 | jamzilla-int | | 1.8 | 0.87 | 2.26 | 1.4x slower
3 | JavaJAM | Java | 1.9 | 1.76 | 2.30 | 1.8x slower
4 | TurboJam | C++ | 3.0 | 2.48 | 3.68 | 2.7x slower
5 | Vinwolf | Rust | 4.8 | 3.97 | 6.42 | 4.4x slower
6 | Typeberry | TypeScript | 9.9 | 7.45 | 13.76 | 8.7x slower
7 | JAM Forge | Scala | 11.4 | 6.17 | 13.38 | 8.8x slower
8 | Boka | Swift | 17.2 | 10.27 | 21.21 | 13.9x slower
9 | jotl | | 19.9 | 11.59 | 19.68 | 16.2x slower
10 | New JAMneration | Go | 25.2 | 11.88 | 26.04 | 19.9x slower
11 | PyJAMaz | Python | 26.6 | 23.13 | 29.45 | 22.1x slower
12 | jampy-recompiler | | 30.6 | 17.33 | 40.72 | 23.8x slower
13 | JamPy | Python | 36.8 | 19.50 | 49.06 | 27.8x slower
14 | Jam4s | Scala | 37.8 | 25.68 | 49.41 | 29.5x slower
15 | GrayMatter | Elixir | 39.6 | 18.00 | 40.44 | 27.5x slower
16 | PBnJAM | TypeScript | 66.0 | 51.07 | 72.29 | 59.9x slower

Audit Time Calculator

Estimated time for each implementation to complete an audit, relative to polkajam's baseline

Rank | Team | Audit Time
1 | Jamzilla | 3.0d
2 | jamzilla-int | 4.1d
3 | JavaJAM | 5.4d
4 | TurboJam | 8.1d
5 | Vinwolf | 13.1d
6 | Typeberry | 26.1d
7 | JAM Forge | 26.5d
8 | Boka | 41.6d
9 | jotl | 48.6d
10 | New JAMneration | 59.6d
11 | PyJAMaz | 66.4d
12 | jampy-recompiler | 71.3d
13 | JamPy | 83.4d
14 | Jam4s | 88.5d
15 | GrayMatter | 82.6d
16 | PBnJAM | 179.7d

Note: These calculations show the real-world impact of performance differences on audit requirements.
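The scaling behind these figures can be sketched in a few lines. The formula below is an assumption inferred from the published numbers (the listed audit times track the relative-slowdown factors applied to the fastest implementation's ~3.0-day figure); the function and constant names are illustrative, not part of the dashboard:

```python
# Sketch of the audit-time scaling implied by the table above (assumed formula:
# audit time = baseline audit time x relative slowdown).

def audit_days(baseline_days: float, relative_slowdown: float) -> float:
    """Projected audit duration for an implementation (in days)."""
    return baseline_days * relative_slowdown

BASELINE_DAYS = 3.0  # fastest implementation's audit time from the list above

# Spot-checks against the published figures (rounded to one decimal):
assert round(audit_days(BASELINE_DAYS, 1.8), 1) == 5.4     # JavaJAM
assert round(audit_days(BASELINE_DAYS, 59.9), 1) == 179.7  # PBnJAM
```

Under this model, a 2x performance improvement halves the projected audit time, which is why the slowest implementations see audit estimates of several months.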

Scoring Methodology

The leaderboard uses a weighted scoring system that considers the full performance distribution. It prioritizes consistent, predictable performance by weighing multiple statistical metrics:

Metric | Weight | Captures
Median (P50) | 35% | Typical performance
90th Percentile (P90) | 25% | Consistency
Mean | 20% | Average performance
99th Percentile (P99) | 10% | Worst case
Consistency | 10% | Lower variance

How it works:

  1. Performance measurements are based on the public W3F test vector traces.
  2. For each benchmark, we calculate a weighted score using the metrics above.
  3. Scores are aggregated across all benchmarks using the geometric mean.
  4. Teams are ranked by their final weighted score (lower is better).
  5. Polkajam (interpreted) serves as the baseline (1.0x) for relative comparisons.
Note: Only teams with data for all four benchmarks (Safrole, Fallback, Storage, Storage Light) are included in the overview. Zero values are excluded from calculations as they likely represent measurement errors.
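The steps above can be sketched in Python. The weights come from the table; the weighted-sum form of the per-benchmark score, the metric keys, and the function names are assumptions for illustration, not the dashboard's actual implementation:

```python
import math

# Weights from the methodology table above.
WEIGHTS = {"p50": 0.35, "p90": 0.25, "mean": 0.20, "p99": 0.10, "consistency": 0.10}

def benchmark_score(metrics: dict) -> float:
    """Assumed weighted score for one benchmark (lower is better).

    `metrics` maps each metric name in WEIGHTS to a measurement; zero values
    should already be filtered out upstream, per the note above.
    """
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

def aggregate_score(per_benchmark: list) -> float:
    """Geometric mean of the per-benchmark scores.

    One entry per benchmark (e.g. Safrole, Fallback, Storage, Storage Light).
    """
    scores = [benchmark_score(m) for m in per_benchmark]
    return math.prod(scores) ** (1 / len(scores))

def relative_slowdown(team_score: float, baseline_score: float) -> float:
    """Relative factor vs the baseline implementation (baseline = 1.0x)."""
    return team_score / baseline_score
```

The geometric mean (rather than the arithmetic mean) keeps one fast or slow benchmark from dominating the aggregate: each benchmark contributes multiplicatively, so the ranking rewards being uniformly fast across all four traces.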

Performance data updated regularly. Version: 0.7.2 | Last updated: May 4, 2026, 9:04 AM | Source data from: May 4, 2026

Testing protocol conformance at scale. Learn more at jam-conformance | Commit 1062052