Introduction
jammin is Fluffy Labs’ toolbox for JAM service builders. You can use it to spin up a project, build services, ship them to a network, and keep an eye on what happens after deploy.
- jammin cli starts projects, builds services, deploys testnets, and runs tests. Works well in scripts and CI.
- jammin studio will be a desktop or IDE front-end for folks who prefer clicks over shells.
- jammin inspect shows what the network is doing and lets you poke deployed services.
All tools read the same YAML config, so you can swap between them without conversion work. Check the jammin suite overview for the long version.
Inspiration sources
We pay close attention to existing smart-contract stacks such as Truffle Suite and Hardhat, plus Polkadot tooling like Chopsticks. These projects already solved many problems we care about. The inspirations section collects sample layouts, configs, and notes on what works well (and what doesn’t).
Resources
Requirements
All jammin tooling expects a recent macOS or Linux environment with the tools below available globally.
Bun
# macOS and Linux
curl -fsSL https://bun.sh/install | bash
# Or with Homebrew (macOS)
brew install oven-sh/bun/bun
Docker
# macOS
brew install --cask docker
open /Applications/Docker.app
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install -y docker.io
sudo usermod -aG docker $USER && newgrp docker
Git
# macOS
brew install git
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install -y git
Verify each tool with bun --version, docker --version, and git --version before running jammin commands.
Getting Started
This guide walks you through creating your first jammin project and understanding the basic workflow.
Prerequisites
Before you start, make sure you have the required tools installed. See the Requirements page for detailed installation instructions.
Quick checklist:
- Bun
- Docker
- Git
Install the latest release:
bun add -g @fluffylabs/jammin@latest
Or install a pre-release, canary, or main-branch build (for development):
bun add -g @fluffylabs/jammin@next
Creating a new project
jammin CLI provides a create command to bootstrap new projects from templates. You can run it interactively or with command-line arguments.
Interactive mode
Simply run the create command without arguments to start the interactive setup:
jammin create
The interactive wizard will ask you:
- Project name - Must start with an alphanumeric character and can only contain letters, numbers, hyphens, and underscores
- Template - Choose from available templates:
- jam-sdk - JAM SDK template for building JAM services
- jade - JADE SDK template
- jambrains - JamBrains SDK template
- undecided - Starter template for exploring options with all of the above
Command-line mode
If you prefer to skip the interactive wizard, provide the project name and template directly:
jammin create my-app
Or specify a template explicitly:
jammin create my-app --template jade
After creation completes, navigate to your project:
cd my-app
Next steps
Once you’ve created a project, you can use the following jammin commands:
- jammin build - Build your services
- jammin test - Run unit tests
- jammin deploy - Deploy to a network
Refer to the jammin suite documentation for detailed information about each command.
Service SDK Examples
Using Docker images
This guide explains how to run the examples from tomusdrw/jam-examples using docker images.
JAM SDK
First, build the docker image.
$ docker build -f jam-sdk.Dockerfile -t jam-sdk .
Then cd into the example code directory:
$ cd jam-examples/empty-jamsdk
And build:
$ docker run --rm -v $(pwd):/app jam-sdk jam-pvm-build -m service
Unit tests
To run unit tests:
$ docker run --rm -v $(pwd):/app jam-sdk cargo test
JamBrains SDK
The docker image provided by JamBrains is going to do all the work here:
Pull the image:
$ docker pull ghcr.io/jambrains/service-sdk:latest
On Apple Silicon, you may need to add --platform linux/amd64 to the docker commands.
And build:
$ cd jam-examples/empty-jambrains
$ docker run --rm -v $(pwd):/app ghcr.io/jambrains/service-sdk:latest single-file main.c
Jade (Spacejam)
First, build the docker image.
$ docker build -f jade.Dockerfile -t jade .
Then cd into the example code directory:
$ cd jam-examples/empty-jade
And build:
$ docker run --rm -v $(pwd):/app jade
Notice that “cargo” is set as the entry point of this docker image (and “build” as the default command).
Unit tests
To run unit tests:
$ docker run --rm -v $(pwd):/app jade test
Contributing
Style Guidelines
- Prefer short paragraphs and task-oriented headings.
- Use call-outs or tip blocks for warnings and advanced topics.
- Keep code samples runnable and reference supporting repositories when necessary.
Review Checklist
- All new pages are linked in SUMMARY.md.
- Commands and API references were validated locally.
- Screenshots or diagrams include captions and alt text.
jammin suite
jammin cli
- Create
- Bootstraps new projects from pre-configured templates
- Available templates:
- jam-sdk - JAM SDK template for building JAM services
- jade - JADE SDK template
- jambrains - JamBrains SDK template
- undecided - Starter template for exploring options with all of the above
- Templates are fetched from GitHub (jammin-create organization)
- Project is configured in a YAML file. That YAML contains:
- List of services and their destinations
- Build
- Each service is built with its SDK-specific instructions.
- We provide docker images for JAM-SDK; later maybe for JADE or other SDKs.
- Defining a “custom” service type with a bunch of commands is possible.
- Building ends with a *.jam file being produced for the service.
- Deploy
- Multi-service deployment
- Preparing genesis state
- Each service that we built is put into the genesis state, and that's what the nodes are initialised with.
- Using bootstrap service (polkajam’s or our custom)
- We can connect to an already running network and just deploy the services using some new API of the bootstrap service.
- Configuration file for testnet spawning:
- Node definitions (we can have a bunch of pre-defined ones)
- Number of instances of each node definition
- A node definition should probably tell us how to map some parameters (like separate networking/RPC ports, etc.); we may decide on a common set of required CLI flags or some sort of mapping between jammin options and node options.
- The genesis file should be passed to all of the nodes, and there should be some bootnodes so that the network can stay connected.
- Focus on typeberry initially.
- Support upgradable services pattern. We could allow the user, instead of deploying a fresh set of services, to just upgrade the existing ones.
- Unit (Service) Testing
- CLI should have a way to run unit tests from each of the services. Note that these tests will be written in an SDK-specific way; we don't care about them any more than whether the command exits with 0 or something else.
- Integration (Project) Testing
- We should provide an SDK to interact with the deployed services in a deterministic way.
- For instance:
- Create a work item and pass it through refine
- Create another work item and pass it through refine
- Take two work results and put them into a package.
- Send the package for accumulation.
- Assert that specific changes happened.
- Interacting
- Interacting should basically be the same as integration testing, with the difference that interacting allows more dynamic actions. Perhaps that could be just a REPL if the API is good enough.
- Interacting will require some encoding of input arguments for refine, so the user needs to tell us the shape of the objects their service expects - a bit of duplication between the Service SDK and our format, presumably @typeberry/codec. In the future perhaps we could have a common source for these, but it's not a priority for now.
jammin studio
- Electron app or VS Code extension
- The main goal is to make it easy to start with JAM development without needing to touch the CLI.
- An Electron app would need to observe the filesystem to make sure that AI agents can alter the code.
- The studio's responsibility pretty much ends after the Build step. Once we build the contracts, we should later be able to deploy them and that's it.
jammin inspect
- The inspector could be part of the studio, especially if it's an Electron app, but it could just as easily be a separate web app.
- The inspector is mostly useful after the Deployment step. After that:
- We can see the network running, maybe even some sort of simplified topology.
- We can inspect the state of the services (something like state-viewer).
- We should probably see incoming blocks and be able to inspect what's in them (work packages).
- We may want to inspect the refinement? Although maybe just running jamtop would suffice.
- From the service inputs/outputs encoding definition we should provide components so that the user can build a simple UI with the help of AI agents. From this UI we would want to interact with the service, i.e.:
- Pass some data to work items and submit them for refinement.
- Inspect services and view accumulated results.
- Ideally the inspector SHOULD NOT use RPC to contact the nodes; we should rather embed a typeberry node that is simply part of the network and read all of the data from it.
- Surely the topology or refinement inspection would require some extra data (maybe from telemetry, or just configuration plus RPC to query whether nodes are up and running), but the majority of interactions should go through the embedded node.
- If we plan to run in the browser (my preference) we are going to need a WebSocket interface.
- The idea is basically to run a separate typeberry node in your terminal that exposes a custom WebSocket interface (a rough sketch of the message flow follows after this list).
- That WebSocket interface should be super simple and possibly even based on the JAMNP protocol handlers.
- Handshake should involve making sure we have the same genesis state.
- Upon connection we need to learn about the difference in blocks between "BridgeNode" and "BrowserNode".
- Then the bridge node should simply send all of the blocks to the browser node.
- When the browser node receives them, it should end up with the same state.
- In the future we would probably rather warp-sync the browser node from the bridge node (to avoid excessive CPU usage in the browser); however, for small deployments this should be good enough.
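To make the flow above more concrete, here is a minimal TypeScript sketch of the browser-node side of the sync loop. Everything in it is an assumption: the message shapes (BridgeMessage, request-blocks) and the importBlock call are hypothetical, not an existing typeberry or jammin API.
// Rough sketch only - message shapes and APIs below are hypothetical.
type BridgeMessage =
  | { kind: "handshake"; genesisHash: string }
  | { kind: "head"; blockNumber: number }
  | { kind: "blocks"; blocks: string[] }; // hex-encoded blocks

function syncFromBridge(ws: WebSocket, genesisHash: string, localHead: number) {
  // 1. Handshake: both sides confirm they share the same genesis state.
  ws.send(JSON.stringify({ kind: "handshake", genesisHash }));
  ws.onmessage = (event) => {
    const msg = JSON.parse(event.data as string) as BridgeMessage;
    switch (msg.kind) {
      case "handshake":
        if (msg.genesisHash !== genesisHash) throw new Error("genesis mismatch");
        break;
      case "head":
        // 2. Learn the block difference and request everything we are missing.
        ws.send(JSON.stringify({ kind: "request-blocks", from: localHead + 1 }));
        break;
      case "blocks":
        // 3. Import blocks into the embedded typeberry node so it ends up with
        //    the same state (importBlock is a placeholder, not a real API).
        // msg.blocks.forEach((block) => embeddedNode.importBlock(block));
        break;
    }
  };
}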
jammin API/config proposal
Open problems
How should services reference each other?
- do we hardcode service ids in service code?
- do we pass service ids during deployment (deployment order issues?) - one possible shape is sketched below
- do we initialize services after they are deployed (authorization issues?)
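As an illustration of the second option, the deploy step could write the assigned service ids to a small manifest that tests and tooling read back. Nothing here exists yet - the file name, path, and shape are all hypothetical:
// Hypothetical manifest written by the deploy step - purely illustrative.
import { readFile } from "node:fs/promises";

interface DeploymentManifest {
  network: string;
  // service name (from jammin.build.yml) -> service id assigned at deployment
  services: Record<string, number>;
}

export async function loadServiceIds(
  path = "./.jammin/deployment.json",
): Promise<DeploymentManifest> {
  return JSON.parse(await readFile(path, "utf8")) as DeploymentManifest;
}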
Directory structure
.
├── services/
│ ├── service a
│ └── service b
├── tests/
│ └── a-b-interaction.ts
├── types/
│ └── service-a-types.ts
├── jammin.build.yml
├── jammin.networks.yml
├── package.json
└── README.md
jammin.build.yml
services:
# we simply list paths to all services and declare which sdk they're using
- path: ./services/service a
name: a
# built-in sdk
sdk: jam-sdk-0.1.26
- path: ./services/service b
name: serviceB
sdk: custom
sdks:
# custom sdks need to provide docker image to pull and build & test commands
custom:
image: customservice/buildimage
build: build
test: test
deployment:
# name of the network to spawn?
spawn: local
# bootstrap-service or genesis
# NOTE that we cannot upgrade when doing genesis
# for bootstrap-service or upgrade we probably expect the network to be already running?
deploy_with: bootstrap-service
upgrade: true
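For reference, a possible TypeScript view of the same file, just to make the expected fields explicit (the schema is still a proposal and may change):
// Illustrative typing of jammin.build.yml - not an existing jammin API.
interface JamminBuildConfig {
  services: Array<{
    path: string; // location of the service sources
    name: string; // name used by deployment and tests
    sdk: string;  // built-in SDK id (e.g. "jam-sdk-0.1.26") or "custom"
  }>;
  // custom SDKs provide a docker image plus build & test commands
  sdks?: Record<string, { image: string; build: string; test: string }>;
  deployment: {
    spawn: string; // network name from jammin.networks.yml
    deploy_with: "genesis" | "bootstrap-service"; // genesis cannot be upgraded
    upgrade?: boolean;
  };
}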
jammin.networks.yml
This looks a lot like a docker-compose wrapper - we will be running multiple containers and interconnecting them, but we also need to pass some configuration details.
Perhaps it would be better to provide templates of docker compose files?
networks:
# possibility to specify multiple different network configurations in one file
local:
# spawn two instances of typeberry dev nodes.
# TODO: figure out how to pass config (with bootnodes) and indexes?
- image: typeberry-0.4.1
args: dev
instances: 2
- image: polkajam
instances: 2
# perhaps this should just point to docker compose files?
other:
compose: ./docker-compose-other.yml
# Do we want to allow running the nodes locally as well?
# could be useful to have one node running locally (for easier debugging)
# and connect to the docker-composed network?
types/service-a-types.ts
import { codec } from "@typeberry/lib";
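// Shape of the work-item payload that service A's refine entry point expects.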
export const RefineInputParams = codec.object({
slot: codec.u32,
myHash: codec.bytes(32),
myData: codec.blob,
});
tests/a-b-interaction.ts
import { describe, it } from 'node:test';
import assert from 'node:assert';
import { encode, u32, bytes, blob, client, query, services } from 'jammin-sdk';
import { RefineInputParams } from '../types/service-a-types.js';
describe('A - B interaction', () => {
// TODO [ToDr] Are the services already deployed?
it('should send tokens from A to B', async () => {
const encoded = encode(RefineInputParams, {
slot: u32(5),
myHash: bytes.parse(32, "0x1234...ffff"),
myData: blob.text("abc")
});
const item1 = await client.refine(
services.a, // name coming from config
encoded
);
const item2 = await client.refine(
services.a,
encoded
);
const accountInfoBefore = await query.serviceInfo(services.serviceB)
const result = await client.accumulate(
client.package(item1, item2),
);
assert.strictEqual(result, true);
const accountInfoAfter = await query.serviceInfo(services.serviceB);
// the balance of service B should have changed by now
assert.notDeepEqual(accountInfoBefore, accountInfoAfter);
});
});
Execution
Work mode
- Two-week sprints.
- Sync meetings every Tuesday and Thursday (10:00 am)
- Fast pace - focus on delivering features, not necessarily perfect code.
- Pair-work. For each feature/project we have someone who is responsible and someone who is helping (reviewing / being a rubber duck, etc). It can be fine-grained (a specific task) but also coarse-grained (the entire project). These two people work closely with each other. It's NOT REQUIRED for all changes to be discussed/reviewed by me; however, I'm always happy to provide feedback when needed. The person writing code is encouraged to discuss the code design with the second person prior to starting.
Tasks
typeberry
- Full node browser support
- WebSocket interface client<>host
- Block authorship
- Networking with other impls (polkajam - priority)
- Node API to fetch all required data easily
jammin cli
- Init
- Define configuration files and propose project structure (package.json?)
- Template for starting new projects.
- Agent instructions!
- Build
- Prepare docker images to build services (JAM-SDK). Ideally we would only need bun + docker to work on services.
- Deployment
- Starting test networks based on configuration.
- Preparation of genesis files
- Some basic monitoring to make sure the nodes are up and connected (stdout based or rather some health endpoint or metrics)
- Interacting with bootstrap service or creating our own bootstrap service to deploy & upgrade contracts.
- Unit testing
- Running docker containers for each service to invoke test command.
- Integration testing
- jammin-sdk framework:
- Create type definitions (needed to pass as work items)
- Testing setup. We want to be able to write tests that interact with deployed services, so we need to know their ids from the deployment step (some configuration needs to be written to a local file, or we can have that as a storage entry in the bootstrap service).
- Running tests should be possible multiple times. That means that either we deploy a fresh set of contracts every time or we have a way to roll back some blocks.
- We should be able to create some work items, refine them and then test accumulation.
- AI-agent workflows first. Focus on providing some AI instructions so that the types can be auto-generated by AI based on Rust types, for instance.
- The point of integration testing is not just to focus on a single service, but rather on service<>service interactions (so we are interested in balance transfers, etc).
- jammin-sdk framework:
- Interacting
- CLI-level interaction is low priority. We can just have "give arguments/work item as hex" to submit it and get the result back. I'd rather have users use the GUI for now.
jammin studio
- Figure out if it’s suitable as vscode extension or better as standalone electron app.
- Probably the last thing to build.
jammin inspect
- separate web app, but most likely using components heavily from our state viewer / codec, etc.
- Initially we should just redirect to these other tools or run them in an iframe, unless it’s going to be extremely difficult.
- state-viewer could simply read the state from a running node over RPC (or our custom WebSocket protocol). Alternatively, the state can be exchanged using postMessage and an iframe.
- Run an embedded node and display the current block number.
- Use type definitions from jammin-sdk (testing) to create a generic UI. There should be a place for the user to create a custom components that would be embedded into jammin inspect.
- We focus on higher-level information, not on raw bytes (which can be seen in state-viewer, etc).
inspirations
jammin borrows tricks from other chains and frameworks. Studying their wins (and pain) keeps us from repeating old mistakes.
- Truffle reference shows the classic migrations-first layout.
- Hardhat reference covers the TypeScript-heavy, plugin-friendly world.
- Chopsticks reference explains the Polkadot fork-and-script workflow.
- Foundry reference highlights the fast Rust-style CLI approach for Solidity.
- Anchor reference documents the Solana/Rust structure with IDLs and macros.
- Sui reference shows how Move packages and the sui CLI manage builds, tests, and localnets.
lessons from the field
| Tool | Pros | Cons |
|---|---|---|
| Truffle | ✅ Easy onboarding, built-in migrations, Ganache pairing, lots of tutorials. | 🚫 Slow compile/test cycles, weak TS support, aging deps, fragile migrations on big repos. |
| Hardhat | ✅ TS-first config, rich tasks, good errors, mainnet forking, huge plugin list. | 🚫 Setup can be heavy, plugins drift, flexible structure confuses new folks. |
| Chopsticks | ✅ Fast forked chains, TypeScript scenarios, easy access to live storage. | 🚫 Eats RAM/CPU on large forks, sparse docs, breaks when runtime metadata changes. |
| Foundry | ✅ Blazing forge test, fuzzing, Solidity-native scripts. | 🚫 Multiple CLIs to juggle, remapping issues, docs assume Solidity veterans. |
| Anchor | ✅ Rust macros reduce boilerplate, auto IDLs, tight validator integration. | 🚫 Macros hide errors, Solana tooling drifts, TS tests need dependency babysitting. |
| Sui | ✅ Move tests are quick, sui client starts localnet fast, manifests keep deps explicit. | 🚫 Move borrow rules are steep, dependency revisions churn, localnet wipes state without snapshots. |
truffle example project
Truffle still shows up in a lot of audits and legacy repos, so it’s a useful baseline. This is what you usually get right after truffle init, plus a couple of files teams add in real life.
directory layout
.
├── contracts/
│ ├── Migrations.sol
│ └── WorkRegistry.sol
├── migrations/
│ ├── 1_initial_migration.js
│ └── 2_deploy_work_registry.js
├── test/
│ ├── workRegistry.test.js
│ └── helpers/time.js
├── scripts/
│ └── mint-work-items.js
├── truffle-config.js
├── package.json
└── README.md
what lives where
- contracts/ – Solidity sources. Migrations.sol is required; WorkRegistry.sol stands in for your real code.
- migrations/ – numbered JS files that deploy contracts one after another.
- scripts/ – ad-hoc helpers (seed accounts, mint tokens).
- test/ – mocha tests in JS/TS, sometimes helpers in subfolders.
- truffle-config.js – networks, compilers, plugins.
sample config
require("dotenv").config();
const HDWalletProvider = require("@truffle/hdwallet-provider");
module.exports = {
networks: {
development: { host: "127.0.0.1", port: 8545, network_id: "*" },
typeberryDev: {
provider: () =>
new HDWalletProvider(process.env.DEPLOYER_KEY, "http://127.0.0.1:9944"),
network_id: 42,
skipDryRun: true
}
},
compilers: {
solc: {
version: "0.8.21",
settings: { optimizer: { enabled: true, runs: 200 } }
}
},
plugins: ["truffle-plugin-verify"]
};
sample migration
const WorkRegistry = artifacts.require("WorkRegistry");
module.exports = async function (deployer, network, accounts) {
const curator = accounts[0];
await deployer.deploy(WorkRegistry, curator);
const registry = await WorkRegistry.deployed();
console.log(`Registry deployed on ${network} at ${registry.address}`);
};
Takeaways for jammin: migrations need to be simple, config must live in version control, and we should leave hook points for scripts/plugins instead of forcing hand-written glue.
hardhat example project
Hardhat feels more modern than Truffle: TypeScript by default, flexible tasks, big plugin ecosystem. Here’s a trimmed layout from npx hardhat plus the files teams usually add.
directory layout
.
├── contracts/
│ └── WorkRegistry.sol
├── scripts/
│ └── deploy.ts
├── test/
│ └── workRegistry.spec.ts
├── hardhat.config.ts
├── package.json
├── tsconfig.json
└── .env
what lives where
- contracts/ – Solidity sources; Hardhat writes build artifacts into artifacts/ and cache/.
- hardhat.config.ts – main brain. Networks, optimizer, plugins, custom tasks.
- scripts/ – run via npx hardhat run scripts/deploy.ts --network ....
- test/ – mocha + chai, usually with ethers.js fixtures and typechain types.
- .env – secrets for RPC providers or explorer APIs.
sample config
import { HardhatUserConfig } from "hardhat/types";
import "@nomicfoundation/hardhat-toolbox";
import * as dotenv from "dotenv";
dotenv.config();
const config: HardhatUserConfig = {
solidity: {
version: "0.8.21",
settings: { optimizer: { enabled: true, runs: 200 } }
},
networks: {
hardhat: process.env.TYPEBERRY_RPC
? { forking: { url: process.env.TYPEBERRY_RPC } }
: {},
typeberryDev: {
url: "http://127.0.0.1:9944",
accounts: process.env.DEPLOYER_KEY ? [process.env.DEPLOYER_KEY] : []
}
},
etherscan: { apiKey: process.env.EXPLORER_KEY }
};
export default config;
sample deploy script
import { ethers } from "hardhat";
async function main() {
const curator = (await ethers.getSigners())[0];
const WorkRegistry = await ethers.getContractFactory("WorkRegistry");
const registry = await WorkRegistry.deploy(curator.address);
await registry.deployed();
console.log("Registry deployed to:", registry.address);
}
main().catch((error) => {
console.error(error);
process.exitCode = 1;
});
Lessons for jammin: keep configs in code, make every task scriptable, and provide good TypeScript types for plugins and tests. Also, mainnet forking plus task automation should be first-class features, not bolted on later.
chopsticks example setup
Chopsticks lets you fork a Polkadot chain, script it with TypeScript, and replay bugs without heavy infra. It is a good model for jammin’s integration flows.
directory layout
.
├── chopsticks.config.ts
├── scenarios/
│ ├── bootstrap.ts
│ └── ai-workflow.ts
├── fixtures/typeberry-state.json
├── scripts/seed-balances.ts
├── package.json
└── README.md
- chopsticks.config.ts – which chain to fork, at which block, and where to store the temp DB.
- scenarios/ – TypeScript scripts that hit the API, send extrinsics, rewind, or snapshot.
- fixtures/ – genesis overrides or saved storage dumps.
- scripts/ – optional helper commands (seeding wallets, etc.).
sample config
import { defineConfig } from "@acala-network/chopsticks";
export default defineConfig({
chain: "wss://rpc.polkajam.dev",
block: 12_345_678,
db: "./.chopsticks/typeberry.db",
endpoints: { typeberry: "ws://127.0.0.1:9944" },
runtimeLog: true,
scenarios: ["./scenarios/ai-workflow.ts"]
});
sample scenario
import { scenario } from "@acala-network/chopsticks";
export default scenario(async ({ api, log }) => {
const alice = "5Fh...";
await api.tx.balances
.transferKeepAlive(alice, 1_000_000_000)
.signAndSend(alice);
log.info("Seeded AI workflow account");
});
notes from the field
- What people like – quick forked chains, familiar TypeScript, easy way to tweak chain state or run scripts as if they were live.
- Rough edges – eats RAM/CPU on large forks, docs are thin outside of examples, runtime metadata changes upstream can break older scenarios.
jammin should copy the good parts (fast forks, simple scripts) but keep resources in check and document breaking changes clearly.
foundry example project
Foundry (forge/cast/anvil) is the go-to toolkit for Solidity folks who want fast tests and rust-style CLIs. Here is a small layout pulled from forge init with a few extras that show up in day-to-day repos.
directory layout
.
├── contracts/
│ └── WorkRegistry.sol
├── script/
│ └── Deploy.s.sol
├── test/
│ └── WorkRegistry.t.sol
├── foundry.toml
├── lib/
│ └── forge-std/
├── .env
└── README.md
what lives where
- contracts/ – Solidity sources.
- script/ – deployment or maintenance scripts written in Solidity; run via forge script.
- test/ – Solidity tests (or via forge's ffi mode). forge-std helpers live under lib/.
- foundry.toml – compiler version, optimizer, remappings, RPC endpoints, fuzz settings.
sample foundry.toml
[profile.default]
src = "contracts"
out = "out"
libs = ["lib"]
solc_version = "0.8.21"
optimizer = true
optimizer_runs = 200
ffi = true
[rpc_endpoints]
typeberryDev = "http://127.0.0.1:9944"
sample script
// script/Deploy.s.sol
pragma solidity 0.8.21;
import "forge-std/Script.sol";
import "../contracts/WorkRegistry.sol";
contract Deploy is Script {
function run() external {
vm.startBroadcast();
WorkRegistry registry = new WorkRegistry(msg.sender);
console.log("Registry deployed at", address(registry));
vm.stopBroadcast();
}
}
takeaways
- Speed: forge test is very fast, even with fuzzing.
- Scripts stay close to contracts (Solidity), which keeps logic consistent.
- .env plus the cast CLI make it easy to poke live networks.
- Downsides: new users juggle many CLIs (forge/cast/anvil) and remapping errors still bite people. jammin should keep config simple, provide fast tests, and avoid surprise tool juggling.
anchor example project
Anchor is the Rust-based framework most Solana teams use. Its structure helps us think about how jammin should mix declarative accounts, IDLs, and scripting.
directory layout
.
├── programs/
│ └── work-registry/
│ ├── Cargo.toml
│ └── src/lib.rs
├── tests/
│ └── work-registry.ts
├── Anchor.toml
├── Cargo.toml
├── tsconfig.json
└── package.json
what lives where
- programs/<name>/ – Rust crate for the on-chain program.
- tests/ – TypeScript tests that use Anchor's client plus mocha.
- Anchor.toml – cluster RPC URLs, program IDs, workspace config.
- Root Cargo.toml wires all programs together.
sample Anchor.toml
[programs.localnet]
work_registry = "WorkReg1stry111111111111111111111111111111"
[provider]
cluster = "Localnet"
wallet = "~/.config/solana/id.json"
[scripts]
test = "npm run test"
sample rust entry
use anchor_lang::prelude::*;
declare_id!("WorkReg1stry111111111111111111111111111111");
#[program]
pub mod work_registry {
use super::*;
pub fn register_work(ctx: Context<RegisterWork>, payload: Vec<u8>) -> Result<()> {
let account = &mut ctx.accounts.work;
account.owner = ctx.accounts.owner.key();
account.payload = payload;
Ok(())
}
}
notes
- Anchor’s macros keep boilerplate low but require you to stick to its patterns.
- Tests run against solana-test-validator; they often spin up quickly but can be flaky when IDs drift.
- IDL generation is automatic, which downstream clients love.
- For jammin: aim for strong IDL/codegen, keep tests close to the runtime, and lean on scripts for repeatable deployments.
sui example project
Sui projects use Move packages plus the sui CLI for builds, tests, and localnet work. This snapshot shows what you get after sui move new plus a few files teams add right away.
directory layout
.
├── Move.toml
├── sources/
│ └── work_registry.move
├── tests/
│ └── work_registry_tests.move
├── scripts/
│ └── publish.sh
├── sui.client.yaml
└── README.md
what lives where
- Move.toml – package manifest listing named addresses, dependencies, build profile.
- sources/ – Move modules.
- tests/ – Move unit tests (run with sui move test).
- scripts/ – helper shell/JS files to publish packages or call entry functions via sui client.
- sui.client.yaml – CLI profile (RPC endpoints, active address, keystore path).
sample Move.toml
[package]
name = "work_registry"
version = "0.0.1"
[dependencies]
Sui = { git = "https://github.com/MystenLabs/sui.git", subdir = "crates/sui-framework/packages/sui-framework", rev = "mainnet" }
[addresses]
work_registry = "0x0"
sample module
module work_registry::work {
use sui::transfer;
struct Work has key {
id: UID,
owner: address,
payload: vector<u8>,
}
public entry fun register(payload: vector<u8>, ctx: &mut TxContext) {
let work = Work {
id: object::new(ctx),
owner: tx_context::sender(ctx),
payload,
};
transfer::share_object(work);
}
}
notes
- Pros – strong object model, sui client can spin up a localnet quickly, Move unit tests run fast, CLI profiles keep RPC/auth tidy.
- Cons – Move borrow rules trip up newcomers, dependency pins drift often, localnet resets wipe state, and tooling is still moving fast.
Design hint for jammin: keep manifests simple, make local devnets one command away, and document breaking runtime changes so SDKs do not drift.