Sharded Merkle Trees
Phases 3 and 4 of the scaling roadmap introduce horizontal scaling through multiple parallel Merkle trees and a decentralized runner network to manage them. The ShardedTreeRegistry manages independent commitment tree shards, while the RunnerRegistry handles operator staking, shard assignment, and slashing.
The Scaling Bottleneck
A single CommitmentTree at Merkle depth 20 supports approximately 1 million commitments (2^20 = 1,048,576 leaves). As the protocol grows, a single tree becomes a bottleneck:
- Root updates serialize all commits through one operator
- Proof generation requires the full tree state
- The 1M leaf capacity is a hard limit
Sharding solves this by deploying multiple independent trees, each handling a subset of commitments.
ShardedTreeRegistry
The ShardedTreeRegistry contract manages multiple CommitmentTree + NullifierRegistry pairs:
Shard Structure
Each shard consists of:
| Component | Description |
|---|---|
| commitmentTree | Pre-deployed CommitmentTree contract for this shard |
| nullifierRegistry | Pre-deployed NullifierRegistry contract for this shard |
| vault | The vault authorized to write to this shard (typically a BatchCommitRevealVault) |
| active | Whether the shard accepts new commitments |
Shards reuse the existing CommitmentTree and NullifierRegistry contracts with zero code changes -- they are simply deployed N times. Each shard gets its own root updater, its own vault, and its own independent state.
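As a rough illustration, the shard record could be modeled as below. The exact struct layout inside ShardedTreeRegistry is an assumption; the field names follow the component table above.

```solidity
// Sketch of a shard record mirroring the component table above.
// The actual storage layout in ShardedTreeRegistry may differ.
struct Shard {
    address commitmentTree;    // pre-deployed CommitmentTree for this shard
    address nullifierRegistry; // pre-deployed NullifierRegistry for this shard
    address vault;             // vault authorized to write to this shard
    bool active;               // whether the shard accepts new commitments
}

mapping(uint256 => Shard) public shards; // shardId => shard record
uint256 public shardCount;               // total shards ever created (append-only)
```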
Capacity Scaling
| Shards | Commitment Capacity | Description |
|---|---|---|
| 1 | ~1 million | Base vault (single tree, depth 20) |
| 4 | ~4 million | 4 parallel trees |
| 16 | ~16 million | 16 parallel trees |
| 64 | ~64 million | 64 parallel trees |
Capacity scales linearly with the number of shards. Each shard handles approximately 1 million commitments at depth 20.
Deterministic Shard Assignment
Commitments are assigned to shards deterministically based on the first byte of the commitment hash:
```solidity
function getShardForCommitment(bytes32 commitment) external view returns (uint256) {
    if (shardCount == 0) return 0;
    return uint256(uint8(commitment[0])) % shardCount;
}
```
This ensures that off-chain aggregators and proof generators know which shard a commitment belongs to without on-chain coordination. The assignment is deterministic and consistent -- given a commitment hash and a shard count, the shard is always the same.
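For illustration, a vault or aggregator could use this lookup to route a commitment to the right tree. Everything here other than getShardForCommitment itself (the registry reference, the auto-generated shards getter, and routeCommit) is a hypothetical name:

```solidity
// Hypothetical routing helper. Only getShardForCommitment is documented;
// registry is an assumed reference to the ShardedTreeRegistry, and
// shards(...) is its assumed auto-generated getter.
function routeCommit(bytes32 commitment) internal view returns (address tree) {
    uint256 shardId = registry.getShardForCommitment(commitment);
    (address commitmentTree, , , bool active) = registry.shards(shardId);
    require(active, "shard not accepting commits");
    return commitmentTree;
}
```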
Global Nullifier Deduplication
Each shard has its own local NullifierRegistry, but the ShardedTreeRegistry maintains a global nullifier mapping to prevent cross-shard double-reveals:
```solidity
// 0 = not spent, shardId+1 = spent in that shard
mapping(bytes32 => uint256) public nullifierShardOrigin;
```
When a shard's vault records a nullifier, it also calls recordGlobalNullifier(nullifier, shardId) on the registry. This global check ensures that a nullifier spent in Shard 0 cannot be spent again in Shard 5. The +1 offset avoids ambiguity between "not spent" (0) and "spent in shard 0" (stored as 1).
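A minimal sketch of that recording path, assuming the registry accepts calls only from the shard's authorized vault (the access check and revert strings are assumptions; the function name and arguments are from the docs):

```solidity
// Sketch of global nullifier recording. The documented call is
// recordGlobalNullifier(nullifier, shardId); the body is illustrative.
function recordGlobalNullifier(bytes32 nullifier, uint256 shardId) external {
    require(msg.sender == shards[shardId].vault, "not shard vault");
    require(nullifierShardOrigin[nullifier] == 0, "nullifier already spent");
    nullifierShardOrigin[nullifier] = shardId + 1; // +1: shard 0 stored as 1, 0 means unspent
}
```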
Immutable Shard Configuration
Shard configuration follows an append-only model:
- New shards can be added at any time by the owner
- Active shards can be deactivated (no new commits, but existing reveals still work)
- Deactivated shards cannot be reactivated or modified
- Shard-to-tree mapping is set once and cannot be changed
This immutability ensures that commitments made in a shard will always be revealable through that shard, regardless of future configuration changes.
Shard Management Functions
| Function | Description |
|---|---|
| createShard(tree, registry) | Register a pre-deployed tree + registry pair as a new shard |
| setShardVault(shardId, vault) | Set the authorized vault for a shard (one-time) |
| setShardRootOperator(shardId, operator) | Set the root operator for a shard's tree |
| deactivateShard(shardId) | Freeze a shard (no new commits) |
| transferShardTreeOwnership(shardId, newOwner) | Emergency ownership transfer of the shard's tree |
| transferShardRegistryOwnership(shardId, newOwner) | Emergency ownership transfer of the shard's registry |
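Tying the table to the append-only rules above, a sketch of how these functions might enforce immutability. The bodies are illustrative; only the function names and the stated invariants come from this page.

```solidity
// Illustrative bodies for the append-only shard lifecycle.
function createShard(address tree, address registry_) external onlyOwner returns (uint256 shardId) {
    shardId = shardCount++;                                  // shards are only ever appended
    shards[shardId] = Shard(tree, registry_, address(0), true);
}

function setShardVault(uint256 shardId, address vault) external onlyOwner {
    require(shards[shardId].vault == address(0), "vault already set"); // one-time binding
    shards[shardId].vault = vault;
}

function deactivateShard(uint256 shardId) external onlyOwner {
    shards[shardId].active = false; // freeze commits; existing reveals still work
}
```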
RunnerRegistry (Phase 4)
The RunnerRegistry decentralizes shard management by introducing bonded operators (runners) who manage shards independently:
Runner Registration
Operators register as runners by staking GHOST:
```solidity
function registerRunner(string calldata endpoint) external payable returns (uint256 runnerId)
```
| Constant | Value | Description |
|---|---|---|
| MIN_STAKE | 10,000 GHOST | Minimum stake to register as a runner |
| MAX_SHARDS_PER_RUNNER | 8 | Maximum shards a single runner can manage |
| UNBONDING_PERIOD | 604,800 blocks (~7 days) | Cooldown before stake withdrawal |
| SLASH_PERCENT | 50% | Percentage of stake forfeited on slashing |
Each runner provides an intentSequencerEndpoint -- a WebSocket URL that agents connect to for submitting intents and receiving batch results.
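A sketch of registration, assuming GHOST is the chain's native token (the documented signature is payable). The Runner struct and status enum are illustrative; EXITING, SLASHED, and INACTIVE appear later on this page, while ACTIVE is assumed.

```solidity
// Illustrative runner bookkeeping. Names are assumptions except where
// they echo this page (intentSequencerEndpoint, MIN_STAKE, states).
enum RunnerStatus { INACTIVE, ACTIVE, EXITING, SLASHED }

struct Runner {
    address operator;
    uint256 stake;
    string intentSequencerEndpoint; // WebSocket URL agents connect to
    RunnerStatus status;
    uint256 assignedShards;         // count of shards currently managed
    uint256 exitRequestedAt;        // block number when exit began (0 if none)
}

mapping(uint256 => Runner) public runners;
uint256 public runnerCount;
uint256 public constant MIN_STAKE = 10_000 ether; // 10,000 GHOST, assuming 18 decimals

function registerRunner(string calldata endpoint) external payable returns (uint256 runnerId) {
    require(msg.value >= MIN_STAKE, "stake below minimum");
    runnerId = runnerCount++;
    runners[runnerId] = Runner(msg.sender, msg.value, endpoint, RunnerStatus.ACTIVE, 0, 0);
}
```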
Shard Assignment
Shards are assigned to runners by the registry owner:
```solidity
function assignShard(uint256 runnerId, uint256 shardId) external onlyOwner
```
Each shard can be assigned to exactly one runner. Each runner can manage up to MAX_SHARDS_PER_RUNNER (8) shards. When a shard is assigned, the runner becomes responsible for:
- Running the root updater (computing and publishing Merkle roots)
- Running the proof farm (generating Groth16 proofs for users)
- Running the intent sequencer (receiving and ordering user operations)
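A sketch of assignment enforcing the two documented invariants (one runner per shard, at most eight shards per runner). The shardRunner mapping and its +1 encoding are assumptions, borrowed from the nullifier-origin trick above:

```solidity
uint256 public constant MAX_SHARDS_PER_RUNNER = 8;
mapping(uint256 => uint256) public shardRunner; // shardId => runnerId + 1 (0 = unassigned)

function assignShard(uint256 runnerId, uint256 shardId) external onlyOwner {
    require(shardRunner[shardId] == 0, "shard already assigned");     // one runner per shard
    Runner storage r = runners[runnerId];
    require(r.status == RunnerStatus.ACTIVE, "runner not active");
    require(r.assignedShards < MAX_SHARDS_PER_RUNNER, "runner at capacity");
    shardRunner[shardId] = runnerId + 1;
    r.assignedShards += 1;
}
```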
Runner Responsibilities
| Responsibility | Description |
|---|---|
| Root updater | Compute Merkle root updates for assigned shards and submit them on-chain |
| Proof farm | Generate Groth16 proofs for users who need to reveal commitments in the runner's shards |
| Intent sequencer | Accept user intents (commit/reveal requests) via WebSocket, batch them, and submit on-chain |
| Liveness | Maintain uptime; extended downtime may result in shard reassignment |
Slashing
Runners can be slashed for misbehavior:
```solidity
function slashRunner(uint256 runnerId, bytes calldata reason) external onlyOwner
```
On slashing:
- 50% of the runner's stake is sent to the slashRecipient (treasury)
- The runner's state transitions to SLASHED
- All assigned shards are unassigned (available for reassignment to other runners)
- The runner retains the remaining 50% of their stake
In the current implementation, slashing is owner-initiated with a reason parameter. Future versions will implement on-chain fraud proof verification, where anyone can submit a proof of misbehavior to trigger slashing.
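A sketch of the current owner-initiated path, matching the 50% split above. The _unassignAllShards helper and the RunnerSlashed event are assumptions:

```solidity
uint256 public constant SLASH_PERCENT = 50;
address public slashRecipient; // treasury address receiving slashed stake

event RunnerSlashed(uint256 indexed runnerId, uint256 penalty, bytes reason); // assumed event

function slashRunner(uint256 runnerId, bytes calldata reason) external onlyOwner {
    Runner storage r = runners[runnerId];
    uint256 penalty = (r.stake * SLASH_PERCENT) / 100;
    r.stake -= penalty;                  // runner keeps the remaining 50%
    r.status = RunnerStatus.SLASHED;
    _unassignAllShards(runnerId);        // assumed helper: frees shards for reassignment
    (bool ok, ) = slashRecipient.call{value: penalty}("");
    require(ok, "slash transfer failed");
    emit RunnerSlashed(runnerId, penalty, reason);
}
```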
Exit Lifecycle
Runners exit through a two-step process:
1. requestExit() -- Runner requests to exit. State transitions to EXITING, all assigned shards are unassigned, and the unbonding clock starts.
2. completeExit() -- After the unbonding period (604,800 blocks, ~7 days), the runner withdraws their stake. State transitions to INACTIVE.
The unbonding period ensures that any misbehavior during the runner's active period can still be detected and slashed before the stake is returned.
Pull-Based Withdrawal
Stake returns use a pull-based pattern: the runner must call completeExit() to receive their funds. This avoids the reentrancy risks of push-based withdrawals and ensures the runner explicitly acknowledges the exit.
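Putting both steps together, a sketch of the exit path. The runnerIdOf reverse lookup is an assumption; the checks-effects-interactions ordering reflects the pull-based pattern described above:

```solidity
uint256 public constant UNBONDING_PERIOD = 604_800; // blocks, ~7 days
mapping(address => uint256) public runnerIdOf;      // assumed reverse lookup

function requestExit() external {
    uint256 runnerId = runnerIdOf[msg.sender];
    Runner storage r = runners[runnerId];
    require(r.operator == msg.sender && r.status == RunnerStatus.ACTIVE, "not an active runner");
    r.status = RunnerStatus.EXITING;
    r.exitRequestedAt = block.number;   // start the unbonding clock
    _unassignAllShards(runnerId);       // assumed helper, as in slashing
}

function completeExit() external {
    Runner storage r = runners[runnerIdOf[msg.sender]];
    require(r.status == RunnerStatus.EXITING, "exit not requested");
    require(block.number >= r.exitRequestedAt + UNBONDING_PERIOD, "still unbonding");
    uint256 amount = r.stake;
    r.stake = 0;                        // zero state before transfer (reentrancy-safe)
    r.status = RunnerStatus.INACTIVE;
    (bool ok, ) = msg.sender.call{value: amount}(""); // pull: runner initiates withdrawal
    require(ok, "withdrawal failed");
}
```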
Self-Scaling Economics
The runner network creates a self-scaling system:
- Increased demand leads to more commitments and reveals, generating more fees
- Higher fees make running a shard more profitable
- More operators stake GHOST and register as runners to capture fee revenue
- More runners mean more shards can be managed, increasing total throughput
- Increased throughput handles the growing demand
The chain remains the settlement backbone. Runners handle the heavy lifting of Merkle root computation, proof generation, and intent sequencing. The protocol scales by adding more runners rather than increasing block size or reducing block time.