# Makechain > A realtime decentralized protocol for ordering and storing git-like messages ## Architecture Makechain uses a layered architecture with single-chain Simplex BFT consensus, parallel per-project execution, and a separate data availability layer. ### System Overview ``` ┌─────────────────────────┐ │ Clients │ grpc-web / gRPC │ (Browser, CLI, SDK) │ └───────────┬─────────────┘ │ ┌───────────▼──────────────────────────────────────┐ │ Validator Node │ │ │ │ ┌──────────┐ ┌──────────────┐ ┌────────────┐ │ │ │ gRPC API │→ │ Mempool │→ │ Execution │ │ │ │ (tonic) │ │ │ │ Engine │ │ │ └──────────┘ └──────┬───────┘ └─────┬──────┘ │ │ │ │ │ │ ┌───────▼────────────────▼───────┐ │ │ │ Simplex BFT (single chain) │ │ │ │ ~200ms blocks, ~300ms finality │ │ │ └───────────────┬────────────────┘ │ │ │ │ │ ┌───────────────▼────────────────┐ │ │ │ State Engine (MemoryStore) │ │ │ │ Prefix-namespaced key-value │ │ │ └────────────────────────────────┘ │ └───────────────────────────────────────────────────┘ ``` ### Layers #### Message Layer Every message is a self-authenticating envelope containing a BLAKE3 hash, Ed25519 signature, and the signer's public key. Messages are structurally validated before entering the mempool. #### Consensus Layer A single Simplex BFT consensus chain orders all messages. The leader proposes blocks by draining the mempool, and the execution engine processes them in two phases: 1. **Account pre-pass** — KEY\_ADD, KEY\_REMOVE, ACCOUNT\_DATA, VERIFICATION\_ADD/REMOVE, PROJECT\_CREATE, PROJECT\_REMOVE, and FORK are applied serially (they modify shared account state like project count) 2. **Parallel project execution** — Remaining project-scoped messages are grouped by `project_id` and executed in parallel via rayon, each with its own copy-on-write overlay store This single-chain model with parallel execution achieves high throughput without the complexity of cross-shard coordination. 
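The two-phase split can be sketched as a grouping pass over a block's messages. This is a minimal model with stand-in message types (the enum and function names here are illustrative, not the crate's API; the real engine operates on the proto messages and runs the groups via rayon):

```rust
use std::collections::BTreeMap;

// Simplified stand-ins for the real message types (illustrative only).
#[derive(Debug)]
enum Msg {
    // Account-level: applied serially in the pre-pass.
    KeyAdd { mid: u64 },
    ProjectCreate { mid: u64 },
    // Project-scoped: grouped by project_id and executed in parallel.
    RefUpdate { project_id: [u8; 32] },
    CommitBundle { project_id: [u8; 32] },
}

/// Split a block's messages into the serial account pre-pass and
/// per-project groups for parallel execution.
fn plan_block(msgs: Vec<Msg>) -> (Vec<Msg>, BTreeMap<[u8; 32], Vec<Msg>>) {
    let mut pre_pass = Vec::new();
    let mut groups: BTreeMap<[u8; 32], Vec<Msg>> = BTreeMap::new();
    for m in msgs {
        match &m {
            Msg::KeyAdd { .. } | Msg::ProjectCreate { .. } => pre_pass.push(m),
            Msg::RefUpdate { project_id } | Msg::CommitBundle { project_id } => {
                let pid = *project_id;
                groups.entry(pid).or_default().push(m);
            }
        }
    }
    (pre_pass, groups)
}
```

Because each group touches only its own project's keys, the groups can run on separate threads without coordination.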
#### State Layer State is stored in a prefix-namespaced key-value store with lexicographic ordering for range scans: | Prefix | Namespace | | ------ | -------------------------------- | | `0x01` | Project state | | `0x02` | Project metadata | | `0x03` | Refs | | `0x04` | Commits | | `0x05` | Collaborators | | `0x06` | Account state | | `0x07` | Account metadata | | `0x08` | Key entries | | `0x09` | Verifications | | `0x0A` | Project name index | | `0x0B` | Key reverse index (pubkey → mid) | Currently backed by an in-memory BTreeMap (`MemoryStore`). The `StateStore` trait is designed for future migration to QMDB (Queryable Merkle Database) for persistence and merkle proofs. #### Data Availability Layer The consensus layer stores only message metadata (\~100-500 bytes). File content (blobs, trees) lives in a separate DA layer, referenced by `da_reference` in commit bundles. ### Commonware Primitives Makechain builds on the [Commonware Library](https://commonware.xyz): | Primitive | Usage | | ------------------------- | ---------------------------------------- | | `commonware-consensus` | Simplex BFT consensus engine | | `commonware-p2p` | Authenticated peer connections | | `commonware-parallel` | Execution strategies (Sequential, Rayon) | | `commonware-runtime` | Async task execution (tokio backend) | | `commonware-cryptography` | Ed25519 signing, BLAKE3 digests | | `commonware-codec` | Binary serialization | ### gRPC API The node exposes a gRPC service on port 50051 (configurable) with: * **grpc-web support** — browser clients via HTTP/1.1 (tonic-web) * **CORS** — configured for cross-origin grpc-web requests * **Server reflection** — runtime service discovery (grpc reflection v1) * **Message streaming** — `SubscribeMessages` with type and project\_id filters ## Getting Started Makechain is a Rust crate implementing the core protocol with a node binary and CLI client. This guide walks you through building, running a node, and submitting your first message. 
### Prerequisites

* **Rust nightly 1.93+** (edition 2024)
* **protoc** (protobuf compiler) on your PATH

### Build

```bash
git clone https://github.com/officialunofficial/makechain.git
cd makechain
cargo build
```

### Run a Node

Start a single-validator development node:

```bash
cargo run --bin node -- --grpc-addr 127.0.0.1:50051 --p2p-addr 127.0.0.1:50052
```

The node starts Simplex BFT consensus and a gRPC server with grpc-web support.

#### Node Flags

| Flag | Default | Description |
| ---------------------- | --------------- | ------------------------------------------------------- |
| `--grpc-addr` | `0.0.0.0:50051` | gRPC listen address |
| `--p2p-addr` | `0.0.0.0:50052` | P2P listen address |
| `--seed` | `0` | Validator key seed (deterministic derivation) |
| `--data-dir` | `.makechain` | Data directory for snapshots |
| `--snapshot-interval` | `100` | Save state snapshot every N blocks (0 = disabled) |
| `--metrics-addr` | `0.0.0.0:9090` | Prometheus metrics endpoint |
| `--network` | `devnet` | Network: devnet, testnet, or mainnet |
| `--validators` | | Additional validator public keys (hex, comma-separated) |
| `--bootstrappers` | | Bootstrap peers: `pubkey@host:port` (comma-separated) |
| `--rate-limit-burst` | `100` | Max burst tokens per account (0 = disabled) |
| `--rate-limit-rate` | `10.0` | Tokens replenished per second per account |
| `--mempool-capacity` | `100000` | Maximum pending messages in mempool |
| `--max-block-messages` | `10000` | Maximum messages per block |

### Use the CLI

#### Generate a keypair

```bash
cargo run --bin cli -- keygen
# Output:
# secret: <hex-encoded secret key>
# public: <hex-encoded public key>
```

#### Register your key on-chain

```bash
cargo run --bin cli -- register-key --secret <secret> --mid 1
# Output: accepted: hash=<message-hash>
```

#### Create a project

```bash
cargo run --bin cli -- create-project --secret <secret> --mid 1 --name "my-project"
# Output: accepted: project_id=<project-id>
```

#### Manage projects

```bash
# Set project metadata
cargo run --bin cli -- set-project-metadata --secret <secret> --mid
1 \
  --project-id <project-id> --description "updated desc"

# Add a collaborator
cargo run --bin cli -- add-collaborator --secret <secret> --mid 1 \
  --project-id <project-id> --collaborator-mid 2

# Fork a project
cargo run --bin cli -- fork-project --secret <secret> --mid 1 \
  --source-project-id <project-id> --source-commit-hash <commit-hash> --name "my-fork"
```

#### Query state

```bash
# Get account info
cargo run --bin cli -- get-account --mid 1

# Get project info
cargo run --bin cli -- get-project --id <project-id>

# Look up project by name
cargo run --bin cli -- get-project-by-name --mid 1 --name "my-project"

# List all projects
cargo run --bin cli -- list-projects

# List refs, commits, collaborators
cargo run --bin cli -- list-refs --project-id <project-id>
cargo run --bin cli -- list-commits --project-id <project-id>
cargo run --bin cli -- list-collaborators --project-id <project-id>

# List account keys
cargo run --bin cli -- list-keys --mid 1

# Subscribe to live message stream
cargo run --bin cli -- subscribe
```

### Using as a Library

You can also use makechain as a Rust library for building and verifying messages:

```rust
use makechain::message::{build_message, verify_message};
use makechain::proto::{self, message_data::Body, MessageType, Network};
use ed25519_dalek::SigningKey;
use rand::rngs::OsRng;

// Generate a signing key
let signing_key = SigningKey::generate(&mut OsRng);

// Create a PROJECT_CREATE message
let data = proto::MessageData {
    r#type: MessageType::ProjectCreate as i32,
    mid: 1,
    timestamp: 1000,
    network: Network::Devnet as i32,
    body: Some(Body::ProjectCreate(proto::ProjectCreateBody {
        name: "my-project".to_string(),
        visibility: proto::Visibility::Public as i32,
        description: "A new project".to_string(),
        license: "MIT".to_string(),
    })),
};

// Sign and wrap the message
let message = build_message(data, &signing_key).unwrap();

// The message hash IS the project_id (content-addressed)
let project_id = &message.hash;

// Verify the message
assert!(verify_message(&message).is_ok());
```

### Run Tests

```bash
cargo test              # Run all 490 tests
cargo test <test_name>  # Run a specific
test
```

### Next Steps

* [Protocol Overview](/protocol/overview) — understand message semantics
* [Message Types](/protocol/messages) — all available operations
* [Architecture](/architecture) — system design and execution model
* [API Reference](/api/overview) — gRPC endpoints

## Consensus

### Engine

Makechain uses **Simplex BFT** via the [commonware-consensus](https://commonware.xyz) primitive. A single consensus chain orders all messages with parallel per-project execution within each block.

| Property | Value |
| --------------- | ------------------------------------------------ |
| Block time | \~200ms target |
| Finality | \~300ms (2-chain rule) |
| Fault tolerance | Byzantine fault tolerant up to 1/3 of validators |

The validator set is initially permissioned, with a path to permissionless staking.

### Block Lifecycle

1. **Propose** — The round leader drains the mempool, executes messages in two phases (account pre-pass + parallel project execution), and produces a state root digest
2. **Verify** — Other validators re-execute the messages and verify the state root matches
3. **Notarize** — Validators vote to notarize the block (2/3 threshold)
4. **Finalize** — When two consecutive blocks are notarized, the first is finalized (2-chain rule)
5.
**Commit** — State diffs are applied to the base store and committed messages are broadcast to subscribers ### Networking * **Transport:** `commonware-p2p::authenticated` — encrypted connections between peers identified by Ed25519 public keys * **Channels:** Three Simplex network channels — votes, certificates, and resolver (catch-up) * **Mempool:** Messages submitted to any validator are propagated to the leader's mempool * **Sync:** New nodes download periodic snapshots, then sync blocks from the snapshot height ### Configuration Key consensus parameters (configurable via `ConsensusConfig`): | Parameter | Default | Description | | -------------------------- | ------- | ------------------------------------------- | | `leader_timeout` | 200ms | Time to wait for a leader proposal | | `notarization_timeout` | 500ms | Time to wait for notarization | | `max_block_messages` | 10,000 | Maximum messages per block | | `max_project_messages` | 500 | Maximum messages per project per block | | `mempool_capacity` | 100,000 | Maximum pending messages | | `max_timestamp_age_secs` | 600 | Reject messages older than 10 minutes | | `max_timestamp_drift_secs` | 30 | Reject messages more than 30s in the future | ## Data Availability The consensus layer stores only message metadata (\~100–500 bytes per message). Actual file content — blobs, trees, and full commit messages — lives in a separate data availability (DA) layer. ### Architecture ``` Developer Makechain Consensus DA Layer │ │ │ ├─ Upload blobs ───────────────────────────────────► │ │ │ │ ├─ COMMIT_BUNDLE ──────────►│ │ │ (da_reference = hash) │ │ │ ├─ DA sampling ─────────►│ │ │ (confirm availability)│ │ │◄──────────────────────┘│ │ │ │ │ ├─ Include in block │ │ │ │ Consumer │ │ │ │ │ ├─ Read commit metadata ◄──┤ │ ├─ Fetch blobs ───────────────────────────────────► │ │ │ │ ``` ### DA Reference Each `COMMIT_BUNDLE` includes a `da_reference` — a 32-byte hash identifying the erasure-coded blob data in the DA layer. 
This hash is opaque to the consensus layer; its interpretation depends on the DA backend. The DA reference links consensus-layer metadata to the full data: | Consensus Layer (validators) | DA Layer (storage) | | ----------------------------------- | -------------------------------- | | Commit hash, title, author, parents | Full commit message text | | Tree root hash | Tree objects (directory listing) | | DA reference | Blob objects (file content) | ### Blob Lifecycle 1. **Upload**: Developer uploads tree and blob data to the DA layer, receiving a `da_reference` hash 2. **Submit**: Developer submits a `COMMIT_BUNDLE` with the `da_reference` and commit metadata 3. **Validate**: Validators confirm the data is available via DA sampling before including the bundle in a block 4. **Store**: Consensus stores only the metadata; the DA layer retains the full data 5. **Retrieve**: Consumers read metadata from consensus and fetch full data from the DA layer ### Recovery from Pruning When consensus-layer commit metadata is pruned (see [Storage Limits](/protocol/storage-limits)), the full commit data remains recoverable from the DA layer: * Pruned `CommitMeta` entries lose their hash, title, and parent links from validator state * The DA layer retains the complete blob data indefinitely (subject to DA-layer retention policies) * A node syncing from scratch can reconstruct pruned commit history by walking the DA layer This separation ensures that storage limits on validators don't cause permanent data loss. ### DA Sampling Validators use DA sampling to confirm that blob data is actually available before including a `COMMIT_BUNDLE` in a block. This prevents a scenario where a developer submits metadata referencing data that doesn't exist. The sampling mechanism integrates with the `CertifiableAutomaton` trait from commonware-consensus: ``` certify(digest) → bool ``` The default implementation returns `true` (no sampling). 
When DA sampling is enabled, `certify()` will check that all `da_reference` values in the proposed block are available in the DA layer before voting to finalize. ### DA Backend Options The DA backend is pluggable. Options under consideration: | Backend | Tradeoffs | | ----------------------------------- | ------------------------------------------------------------- | | **Commonware `coding`** | Erasure coding with availability sampling, native integration | | **Validator-operated storage** | Content-addressed blob store run by the validator set | | **External DA (Celestia, EigenDA)** | External trust assumption, may add latency | | **IPFS** | Decentralized, but no availability guarantees | The initial implementation uses a simple content-addressed blob store. The `da_reference` is the BLAKE3 hash of the uploaded data. ## Identity ### Accounts An **account** is identified by a unique Make ID (`mid`, uint64) assigned by an onchain registry contract. The registry maps MIDs to Ed25519 owner keys. ### Keys All keys are **Ed25519**. Each account has one or more registered keys with explicit scopes: | Scope | Capabilities | | --------- | ----------------------------------------------------------------------- | | `OWNER` | Full account control: manage keys, transfer projects, delete account | | `SIGNING` | Push commits, update refs, manage collaborators on authorized projects | | `AGENT` | Automated actions (CI/CD, AI agents) — scoped to specific projects/refs | Keys are registered onchain and relayed into the consensus layer as `KEY_ADD` / `KEY_REMOVE` messages (2P set) so validators can verify signatures without querying the chain. 
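The scope check during signature verification can be sketched as a lookup over the account's registered keys. This is a simplified model: the `Scope` ordering (OWNER above SIGNING above AGENT) and the type names are assumptions for illustration, and it omits AGENT keys' per-project/ref restrictions:

```rust
/// Key scopes from the table above, ordered by assumed capability
/// (AGENT < SIGNING < OWNER); derive(Ord) uses declaration order.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Scope {
    Agent,
    Signing,
    Owner,
}

/// A registered key entry (illustrative shape, not the crate's struct).
struct KeyEntry {
    pubkey: [u8; 32],
    scope: Scope,
}

/// True if any registered key for the account matches the message's
/// signer and carries at least the scope required by the message type.
fn is_authorized(keys: &[KeyEntry], signer: &[u8; 32], required: Scope) -> bool {
    keys.iter().any(|k| &k.pubkey == signer && k.scope >= required)
}
```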
### Signature Scheme

* **Ed25519** — fast verification (\~60k verifications/sec on commodity hardware), compact signatures (64 bytes), deterministic signing (no nonce reuse risk)
* **BLAKE3** — 32-byte digests for message hashing, commit hashing, and merkle tree construction

### External Address Verification

Accounts can prove ownership of external blockchain addresses via `VERIFICATION_ADD` / `VERIFICATION_REMOVE` messages (2P set). Each verification requires a `claim_signature` proving the external key signed a deterministic challenge message.

#### Challenge Message

The message to sign is:

```
makechain:verify:<mid>
```

Where `<mid>` is the decimal string representation of the account's Make ID. For example, account `42` signs the UTF-8 bytes of `makechain:verify:42`.

#### Ethereum (ETH\_ADDRESS)

Sign the challenge using [EIP-191 personal\_sign](https://eips.ethereum.org/EIPS/eip-191):

```
keccak256("\x19Ethereum Signed Message:\n" + len(message) + message)
```

The `claim_signature` is 65 bytes: `r (32) || s (32) || v (1)` where `v` is the recovery ID (0 or 1). The `address` field is the 20-byte Ethereum address. The protocol recovers the public key from the signature, derives the address via `keccak256(pubkey)[12..]`, and verifies it matches.

#### Solana (SOL\_ADDRESS)

Sign the challenge using standard Ed25519:

```
ed25519_sign(keypair, "makechain:verify:<mid>")
```

The `claim_signature` is 64 bytes (standard Ed25519 signature). The `address` field is the 32-byte Solana public key. The protocol verifies the signature directly against the address.

## Message Types

All message types and their semantics.

### 2P: Project Set

| Type | Description | Required Scope |
| ---------------- | ----------------------------------------------------- | -------------- |
| `PROJECT_CREATE` | Create a new project with name and visibility | SIGNING |
| `PROJECT_REMOVE` | Remove a project (hides refs, commits, collaborators) | OWNER |

Conflict key: `(project_id)`.
A removed project retains its data — a subsequent `PROJECT_CREATE` referencing the same project ID restores it. ### 1P: Singleton | Type | Description | Required Scope | | ------ | --------------------------------------------- | -------------- | | `FORK` | Fork an existing project at a specific commit | SIGNING | Includes `source_commit_hash` anchoring the fork to a precise point. The forked project's ID is the BLAKE3 hash of the FORK message. ### 1P: LWW Register | Type | Conflict Key | Required Scope | | ------------------ | --------------------- | -------------- | | `PROJECT_METADATA` | `(project_id, field)` | SIGNING | | `ACCOUNT_DATA` | `(mid, field)` | SIGNING | ### 1P: Append-only | Type | Description | Required Scope | | --------------- | ----------------------------------------------------- | -------------- | | `COMMIT_BUNDLE` | Declare a batch of new commit metadata + DA reference | AGENT | Commits are ordered parent-first within a bundle. Each commit includes: hash, parent hashes, tree root hash, author MID, title, and message hash. ### 1P: State Transition | Type | Description | Required Scope | | ----------------- | ---------------------- | -------------- | | `PROJECT_ARCHIVE` | Make project read-only | OWNER | ### 2P: Ref Set (CAS-ordered) | Type | Description | Required Scope | | ------------ | ------------------------------- | -------------- | | `REF_UPDATE` | Move a ref to a new commit hash | AGENT | | `REF_DELETE` | Remove a ref | AGENT | `REF_UPDATE` uses compare-and-swap: includes expected current hash (`old_hash`). If the ref has moved, the update is rejected. Updates must be fast-forward (the new commit must be a descendant of the current ref target) unless `force = true`. 
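The compare-and-swap rule can be sketched as follows. This is a simplified model of the check, assuming an in-memory ref map; `is_descendant` stands in for the real ancestry walk over commit parent links:

```rust
use std::collections::HashMap;

/// Illustrative ref store: ref name -> commit hash.
type Refs = HashMap<String, [u8; 32]>;

/// Apply a REF_UPDATE with compare-and-swap semantics: `old_hash` must
/// match the current target, and an existing ref may only move
/// fast-forward unless `force` is set.
fn ref_update(
    refs: &mut Refs,
    name: &str,
    old_hash: Option<[u8; 32]>,
    new_hash: [u8; 32],
    force: bool,
    is_descendant: impl Fn([u8; 32], [u8; 32]) -> bool,
) -> Result<(), &'static str> {
    let current = refs.get(name).copied();
    if current != old_hash {
        return Err("CAS mismatch: ref has moved");
    }
    if let Some(cur) = current {
        if !force && !is_descendant(new_hash, cur) {
            return Err("non-fast-forward update rejected");
        }
    }
    refs.insert(name.to_string(), new_hash);
    Ok(())
}
```

A concurrent update that lands first changes the ref's hash, so the loser's `old_hash` no longer matches and its update is rejected rather than silently clobbering the winner.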
### 2P: Collaborator Set

| Type | Description | Required Scope |
| --------------------- | ------------------------------------ | --------------- |
| `COLLABORATOR_ADD` | Grant an account access to a project | SIGNING (admin) |
| `COLLABORATOR_REMOVE` | Revoke access | SIGNING (admin) |

Permissions: `READ`, `WRITE`, `ADMIN`, `OWNER`.

### 2P: Key Set

| Type | Description | Required Scope |
| ------------ | ------------------------------------ | -------------- |
| `KEY_ADD` | Register an Ed25519 key with a scope | OWNER |
| `KEY_REMOVE` | Revoke a key | OWNER |

Relayed from onchain registry events.

### 2P: Verification Set

| Type | Description | Required Scope |
| --------------------- | -------------------------------------- | -------------- |
| `VERIFICATION_ADD` | Prove ownership of an external address | SIGNING |
| `VERIFICATION_REMOVE` | Revoke a verification | SIGNING |

Supported types: `ETH_ADDRESS` (Ethereum EOA), `SOL_ADDRESS` (Solana). The `claim_signature` must be a valid signature over the challenge message `makechain:verify:<mid>`. See [Identity](/protocol/identity#external-address-verification) for signing details.

## Protocol Overview

Makechain is a realtime decentralized protocol for ordering and storing git-like messages — project creation, commits, ref updates, access control — with permissionless publishing and cryptographic attribution.

### Design Goals

1. **High throughput** — 10,000+ messages per second with sub-second finality
2. **Permissionless publishing** — anyone can create projects and push code
3. **Self-authenticating messages** — every message verifiable without external lookups
4.
**Thin consensus** — consensus orders metadata and ref pointers; file blobs live in a separate DA layer ### Message Envelope Every message on the network is wrapped in a self-authenticating envelope: ``` Message { data: MessageData // The operation hash: bytes(32) // BLAKE3(data) signature: bytes(64) // Ed25519 signature over hash signer: bytes(32) // Ed25519 public key } ``` Verification: check that `signer` is a registered key for `data.mid` with sufficient scope for the message type. ### Message Semantics Every message type follows one of two paradigms: #### 1P (One-Phase) The message creates or updates state unilaterally. No paired "undo" message exists. | Sub-type | Behavior | Examples | | -------------------- | ------------------------------------ | ---------------------------------- | | **Singleton** | Creates a new resource, irreversible | `FORK` | | **LWW Register** | Last-write-wins per conflict key | `PROJECT_METADATA`, `ACCOUNT_DATA` | | **Append-only** | Adds entries to a growing set | `COMMIT_BUNDLE` | | **State transition** | Moves resource to terminal state | `PROJECT_ARCHIVE` | #### 2P (Two-Phase) Add and Remove pairs operating on a set. On a timestamp tie, **remove wins**. | Sub-type | Behavior | Examples | | --------------- | ------------------------------------ | --------------------------------------------- | | **Set** | Standard add/remove with remove-wins | Project, Collaborator, Key, Verification sets | | **CAS-ordered** | Compare-and-swap for sequencing | `REF_UPDATE` / `REF_DELETE` | ### Content-Addressed IDs Project IDs are content-addressed — the `project_id` is the BLAKE3 hash of the `PROJECT_CREATE` message itself (i.e., `Message.hash`). Forked project IDs are the hash of the `FORK` message. This means two projects with the same name get different IDs because the hash includes MID, timestamp, etc. 
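The 2P tie-break above can be sketched as a comparison on `(timestamp, operation)`. This is a simplified model of the conflict resolution, not the crate's exact code:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Op {
    Add,
    Remove,
}

/// Decide which of two conflicting 2P-set messages wins for the same
/// conflict key: the later timestamp wins; on a tie, remove wins.
fn two_phase_winner(a: (u64, Op), b: (u64, Op)) -> (u64, Op) {
    match a.0.cmp(&b.0) {
        std::cmp::Ordering::Greater => a,
        std::cmp::Ordering::Less => b,
        std::cmp::Ordering::Equal => {
            if a.1 == Op::Remove {
                a
            } else {
                b
            }
        }
    }
}
```

Remove-wins biases ties toward revocation, which is the safer default for access-control sets like keys and collaborators.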
## Parallel Execution

Makechain uses a single consensus chain with parallel per-project execution within each block, rather than separate shard chains.

### Execution Model

Within each block, messages are processed in two phases:

#### Phase 1: Account Pre-pass (Serial)

Account-level messages are applied serially because they modify shared account state (key registrations, project counts):

* `KEY_ADD` / `KEY_REMOVE`
* `ACCOUNT_DATA`
* `VERIFICATION_ADD` / `VERIFICATION_REMOVE`
* `PROJECT_CREATE` / `PROJECT_REMOVE` / `FORK` (modify account `project_count`)

#### Phase 2: Project Execution (Parallel)

Project-scoped messages are grouped by `project_id` and each group is executed in parallel using rayon:

* `PROJECT_ARCHIVE` / `PROJECT_METADATA`
* `REF_UPDATE` / `REF_DELETE`
* `COMMIT_BUNDLE`
* `COLLABORATOR_ADD` / `COLLABORATOR_REMOVE`

Each project group operates on its own copy-on-write overlay store (`OverlayStore`) that can read the base state plus any account changes from Phase 1 (via `SnapshotStore`), ensuring isolation between projects.

### State Root Computation

After execution, state diffs from all projects are combined into a global state root:

1. Each project's diffs produce a per-project merkle root (BLAKE3 of sorted key-value pairs)
2. All project roots are sorted and combined into a global root

This is deterministic regardless of parallel execution order.

### Future: Sharding

The protocol spec reserves the possibility of sharding by `project_id` for horizontal scaling:

```rust
// Illustrative: interpret the first four bytes of the project ID as a u32
// (byte order here is an assumption) and reduce modulo the shard count.
let shard_index = u32::from_be_bytes(project_id[0..4].try_into().unwrap()) % num_shards;
```

The current parallel execution model is a stepping stone — the per-project isolation already provides the separation needed for future sharding without cross-shard coordination (except for `FORK`, which includes a state proof from the source project).
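The determinism claim for the state root can be sketched directly: sorting both the per-project diffs and the collected project roots removes any dependence on which parallel group finished first. A minimal sketch, using the standard library's `DefaultHasher` as a stand-in for BLAKE3 (the real implementation hashes with BLAKE3 and 32-byte digests):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Per-project root: stand-in digest over the project's sorted
/// key-value diffs (BLAKE3 in the real implementation).
fn project_root(mut diffs: Vec<(Vec<u8>, Vec<u8>)>) -> u64 {
    diffs.sort();
    let mut h = DefaultHasher::new();
    diffs.hash(&mut h);
    h.finish()
}

/// Global root: combine per-project roots in sorted order, so the
/// result is independent of parallel completion order.
fn global_root(mut project_roots: Vec<u64>) -> u64 {
    project_roots.sort_unstable();
    let mut h = DefaultHasher::new();
    for r in &project_roots {
        r.hash(&mut h);
    }
    h.finish()
}
```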
## State Model ### Projects A project's state consists of: * **Metadata** — name, description, visibility, license * **Refs** — map of ref names to commit hashes * **Known commits** — set of registered commit hashes + metadata * **Collaborators** — map of MIDs to permission levels * **Owner** — Make ID of the project owner ### Accounts An account's state consists of: * **Registered keys** — set of public keys with scopes * **Account metadata** — username, avatar, bio * **Verified addresses** — set of external addresses with claim proofs * **Storage units** — capacity ### Merkle State Project state is authenticated via per-project merkle roots: ``` Global State Root ├── Project A Root (BLAKE3 of sorted key-value diffs) ├── Project B Root ├── Project C Root └── ... ``` Each project has a `project_root` (BLAKE3 hash of its sorted key-value state diffs). The global state root combines all per-project roots in sorted order, producing a deterministic root regardless of parallel execution order. ### Storage Limits Per storage unit (yearly): | Resource | Limit | | --------------------------- | ------ | | Projects | 10 | | Commit metadata per project | 10,000 | | Refs per project | 200 | | Collaborators per project | 50 | | Keys per account | 50 | | Verifications per account | 50 | | DA storage | 1 GB | ### Pruning When a project exceeds its commit metadata limit, the oldest entries are pruned from consensus state: * Head commits for every branch/tag are always retained * Intermediate commits on active branches are retained up to the limit * Commits on deleted branches with no remaining ref are pruned first * Full commit history remains recoverable from the DA layer ## Storage Limits Makechain enforces per-account storage limits to prevent unbounded state growth. Each account has **storage units** (default: 1 for free tier), and limits scale with units. 
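Project capacity scales linearly with an account's storage units. A minimal sketch of the check run by `PROJECT_CREATE` and `FORK` (the constant and helper names are illustrative, not the crate's API):

```rust
/// Base project limit per storage unit (from the limits table).
const PROJECTS_PER_UNIT: u64 = 10;

/// Whether an account may create one more project given its current
/// project count and purchased storage units.
fn can_create_project(project_count: u64, storage_units: u64) -> bool {
    project_count < storage_units * PROJECTS_PER_UNIT
}
```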
### Per Storage Unit (Yearly) | Resource | Limit | | --------------------------- | ------ | | Projects | 10 | | Commit metadata per project | 10,000 | | Refs per project | 200 | | Collaborators per project | 50 | | Keys per account | 50 | | Verifications per account | 50 | | DA storage | 1 GB | ### Enforcement Limits are enforced at state transition time: * **Projects:** `PROJECT_CREATE` and `FORK` check `project_count < storage_units * 10` before incrementing. `PROJECT_REMOVE` decrements the count, freeing capacity for new projects. * **Refs:** `REF_UPDATE` checks `ref_count < 200` when creating a new ref. Updating an existing ref doesn't change the count. `REF_DELETE` decrements. * **Collaborators:** `COLLABORATOR_ADD` checks `collaborator_count < 50` when adding a new collaborator. Permission updates don't change the count. * **Commits:** `COMMIT_BUNDLE` always appends commits, then triggers auto-pruning if the count exceeds 10,000. * **Keys:** `KEY_ADD` checks `key_count < 50` when adding a new key. `KEY_REMOVE` decrements. * **Verifications:** `VERIFICATION_ADD` checks `verification_count < 50` when adding a new verification. `VERIFICATION_REMOVE` decrements. ### Commit Pruning When a project exceeds its commit metadata limit, the oldest unprotected commits are pruned from consensus state: **A commit referenced by any active ref is never pruned.** The ref's head commit and its entire parent chain (reachable via parent links) are protected. #### Pruning Algorithm 1. Build the **protected set**: BFS from all active ref heads through parent links 2. Enumerate all commits; collect those not in the protected set 3. Sort unprotected commits by `indexed_at` ascending (oldest first) 4. 
Delete oldest unprotected commits until at or below the limit #### What This Means in Practice * Head commits for every branch/tag are always retained * Intermediate commits on active branches are retained * Commits on deleted branches with no remaining ref are pruned first * Full commit history remains recoverable from the DA layer — pruning only removes `CommitMeta` from validator state ### Error Types | Error | Trigger | | --------------------------- | ------------------------- | | `StorageLimitExceeded` | Project count at capacity | | `RefLimitExceeded` | Ref count at 200 | | `CollaboratorLimitExceeded` | Collaborator count at 50 | | `KeyLimitExceeded` | Key count at 50 | | `VerificationLimitExceeded` | Verification count at 50 | ## Message Submit Pipeline Every message goes through a multi-stage validation pipeline before being included in a block. ### Pipeline Stages ``` Client → gRPC SubmitMessage │ ├─ 1. Verify hash + signature │ BLAKE3(data) == hash │ Ed25519.verify(signature, hash, signer) │ ├─ 2. Structural validation │ Field sizes, non-empty constraints, enum validity │ (no state lookups) │ ├─ 3. Signer authorization pre-check │ Verify signer is a registered key for data.mid │ (skipped for KEY_ADD/KEY_REMOVE — relayed from registry) │ ├─ 4. Network validation │ Message network matches the node's configured network │ ├─ 5. Mempool admission │ Deduplication (by message hash) │ Capacity check (default: 100,000) │ Timestamp window (10 min past, 30 sec future) │ ├─ [Mempool] ──── Consensus proposes block ──── │ ├─ 6. Block execution │ Serial account pre-pass (KEY_ADD, KEY_REMOVE, ACCOUNT_DATA, │ VERIFICATION_ADD/REMOVE, PROJECT_CREATE, PROJECT_REMOVE, FORK) │ Parallel project execution (grouped by project_id) │ Full state validation (authorization, CAS checks, etc.) │ └─ 7. Finalization State diffs applied to base store Block built and stored Messages broadcast to subscribers ``` ### Rejection Points Messages can be rejected at any stage. 
Each stage returns a specific error: | Stage | Example Errors | | --------------- | -------------------------------------------------------- | | 1. Verification | Hash mismatch, invalid signature | | 2. Structural | Missing body, invalid field length | | 3. Pre-check | Signer not registered for MID | | 4. Network | Wrong network (e.g., testnet message to devnet node) | | 5. Mempool | Duplicate message, mempool full, timestamp out of window | | 6. Execution | Unauthorized, CAS mismatch, project not found | Stages 1-5 happen synchronously on the submit RPC. Stage 6 happens asynchronously during block execution — if a message fails execution, it's silently dropped (the block proceeds without it). ### Subscriber Notifications Messages are broadcast to `SubscribeMessages` subscribers **only after consensus finalization** (stage 7), not on submit. This ensures subscribers see the canonical committed order and never see messages that fail execution. ### Timestamp Validation The mempool enforces a timestamp window: * **Maximum age:** 10 minutes in the past (configurable via `max_timestamp_age_secs`) * **Maximum drift:** 30 seconds in the future (configurable via `max_timestamp_drift_secs`) This prevents replay of old messages and rejects messages with clock skew beyond the tolerance window. ## Brand ### Logo The Makechain wordmark uses Inter SemiBold at tight letter-spacing, with the five brand shapes arranged below. #### Dark background
#### Light background
#### Clear space Maintain at least 1x the height of the shapes row as clear space around the logo.
### Brand Shapes The five primary brand shapes appear in the logo and serve as visual anchors throughout documentation.
Square #00EEBE
Circle #7A3BF7
Triangle #FA7CFA
Star #FAD030
Heart #FE0302
### Usage in Headings

Shapes are placed inline to the left of section headings using an inline shape component in MDX:

```mdx
## Section Title
```

Cycle through the five brand shapes per page. Assign shapes consistently within a page but vary across pages.

### Principles
Monochrome base
Black and white only. No grays in primary surfaces. The absence of color makes the shapes hit harder.
Vibrant accents
Color only comes from the shapes. Every accent is saturated and distinct — no pastels, no gradients.
Geometric precision
Clean edges, integer coordinates, no anti-aliasing artifacts. Shapes are math, not illustration.
Tight spacing
Dense information, minimal whitespace. Every pixel earns its place. Content over chrome.
## Colors ### Theme The base theme is pure monochrome. Background and text use black/white with graduated neutral layers for depth. #### Backgrounds
Background #000000
Background Dark #0a0a0a
Background 2 #111111
Background 3 #191919
Background 4 #1e1e1e
Background 5 #252525
#### Text
Text #ffffff
Text 2 #cccccc
Text 3 #999999
Text 4 #666666
#### Borders
Border #252525
Border 2 #404040
***

### Accent Palette

All color in the system comes from the shape accents. No color is used for text, backgrounds, or UI chrome — only for these geometric marks.

#### Brand (primary 5)
#00EEBE
#7A3BF7
#FA7CFA
#FAD030
#FE0302
#### Extended
#FF6B35
#FF3366
#EC4899
#F59E0B
#84CC16
#22C55E
#14B8A6
#06B6D4
#0096FF
#3B82F6
#6366F1
#8B5CF6
#A855F7
***

### Contrast

All accent colors are tested against the `#000000` background.

| Color   | Hex       | Ratio  | WCAG AA      |
| ------- | --------- | ------ | ------------ |
| Green   | `#00EEBE` | 12.8:1 | Pass         |
| Purple  | `#7A3BF7` | 4.0:1  | Pass (large) |
| Pink    | `#FA7CFA` | 8.0:1  | Pass         |
| Yellow  | `#FAD030` | 11.4:1 | Pass         |
| Red     | `#FE0302` | 4.6:1  | Pass (large) |
| Orange  | `#FF6B35` | 6.5:1  | Pass         |
| Blue    | `#3B82F6` | 5.3:1  | Pass         |
| Cyan    | `#06B6D4` | 8.1:1  | Pass         |
| Emerald | `#22C55E` | 8.3:1  | Pass         |

***

### Light Mode

The system inverts cleanly. All theme tokens have light-mode counterparts:

| Token        | Dark      | Light     |
| ------------ | --------- | --------- |
| Background   | `#000000` | `#ffffff` |
| Background 2 | `#111111` | `#f5f5f5` |
| Background 3 | `#191919` | `#eeeeee` |
| Text         | `#ffffff` | `#000000` |
| Text 2       | `#cccccc` | `#333333` |
| Text 3       | `#999999` | `#666666` |
| Border       | `#252525` | `#e0e0e0` |
| Border 2     | `#404040` | `#cccccc` |

Accent colors are identical in both modes — they're vivid enough to work on black or white.

## Components

Patterns for composing content elements across the docs.

### Section Headings

Every H2 gets a shape prefix. The shape is an inline `` at 14px, vertically centered.
Getting Started
Key Features
Architecture
Configuration
Community
***

### Feature Cards

Grid of cards with shape accent, title, and description. Used for overviews and principle lists.
Fast Finality
Sub-second block finality via Simplex BFT. No waiting for confirmations.
Cryptographic Auth
Every message is self-authenticating with Ed25519 signatures and BLAKE3 hashes.
Parallel Execution
Projects execute in parallel within each block via rayon thread pool.
***

### Stat Blocks

Horizontal row of key metrics. Shape serves as a bullet marker.
\~200ms
Block time
\~300ms
Finality
10k+
Messages/sec
32 bytes
Content-addressed IDs
***

### Status Row

Inline shapes as status indicators.
Consensus — operational
gRPC API — operational
DA Layer — syncing
Sharding — planned
***

### Code Blocks

Fenced code blocks use the `#111111` background with monospace font.

```rust
// Content-addressed project ID
let project_id = blake3::hash(&message_bytes);
```

```bash
cargo run --bin node -- --port 50051 --p2p-port 50052
```

```
Global State Root
├── Project A Root (BLAKE3 of sorted key-value diffs)
├── Project B Root
└── ...
```

***

### Tables

Standard markdown tables for structured data. Borders and backgrounds come from theme tokens.

| Message Type       | Phase    | Scope   |
| ------------------ | -------- | ------- |
| `PROJECT_CREATE`   | Serial   | SIGNING |
| `KEY_ADD`          | Serial   | OWNER   |
| `COMMIT_BUNDLE`    | Parallel | AGENT   |
| `REF_UPDATE`       | Parallel | AGENT   |
| `COLLABORATOR_ADD` | Parallel | SIGNING |

***

### Lists with Shapes

Use shapes as custom bullet markers for feature lists.
Permissionless — anyone can create projects and push code without gatekeepers
Content-addressed — project IDs are BLAKE3 hashes of creation messages
CRDT semantics — deterministic conflict resolution with LWW, remove-wins, and CAS
Merkle-authenticated — every state entry is provable via per-project roots
***

### Callout Boxes

Bordered containers for important information, keyed by shape.
Note
The consensus layer stores only message metadata (\~100-500 bytes). File content lives in a separate DA layer.
Important
REF\_UPDATE uses compare-and-swap. If the ref has moved since your read, the update is rejected.
Tip
Use cargo test test\_name to run a single test by name for fast iteration.
***

### Architecture Diagrams

ASCII diagrams in fenced code blocks, referenced by surrounding shapes.

```
┌─────────────────────────┐
│         Clients         │  grpc-web / gRPC
│   (Browser, CLI, SDK)   │
└───────────┬─────────────┘
            │
┌───────────▼─────────────┐
│     Validator Node      │
│ ┌────────┐  ┌─────────┐ │
│ │  gRPC  │→ │ Mempool │ │
│ └────────┘  └────┬────┘ │
│             ┌────▼────┐ │
│             │ Simplex │ │
│             │   BFT   │ │
│             └────┬────┘ │
│             ┌────▼────┐ │
│             │  State  │ │
│             └─────────┘ │
└─────────────────────────┘
```

***

### Shape Pairing Guide

When writing docs pages, assign shapes to H2s consistently within a page. The recommended cycle:

| Position | Shape    | Color     | Typical meaning         |
| -------- | -------- | --------- | ----------------------- |
| 1st H2   | square   | `#00EEBE` | Primary / main concept  |
| 2nd H2   | circle   | `#7A3BF7` | Secondary / supporting  |
| 3rd H2   | triangle | `#FA7CFA` | Technical detail        |
| 4th H2   | star     | `#FAD030` | Configuration / options |
| 5th H2   | heart    | `#FE0302` | Community / coda        |

For pages with more than 5 sections, pull from the extended shape set: diamond, hexagon, bolt, shield, sparkle, leaf, flame, etc.

## Shapes

55 vector shapes for use as visual anchors throughout the docs. Use inline in MDX headings:

```mdx
## Section Title
```

***

### Geometric
square
rounded-square
circle
oval
triangle
caret
diamond
pentagon
hexagon
heptagon
octagon
capsule
semicircle
parallelogram
trapezoid
### Symbols
star
sparkle
starburst
heart
cross
x-mark
asterisk
bolt
shield
flag
target
eye
infinity
hourglass
ribbon
### Nature
sun
moon
crescent
leaf
flower
droplet
flame
wave
### 3D
cube
pyramid
### Outlines & Rings
ring
donut
spiral
arc
### Directional
arrow-right
arrow-up
chevron-right
chevron-down
### Decorative
grid
dots
dash
slash
zigzag
stripe
bracket
### Colors

| Swatch | Hex       | Used by                                        |
| ------ | --------- | ---------------------------------------------- |
|        | `#00EEBE` | square, stripe                                 |
|        | `#7A3BF7` | circle                                         |
|        | `#FA7CFA` | triangle, flower                               |
|        | `#FAD030` | star, sparkle, bolt, flame, sun                |
|        | `#FE0302` | heart, x-mark, target, flag                    |
|        | `#FF6B35` | diamond, semicircle, starburst, flame, pyramid |
|        | `#0096FF` | pentagon, parallelogram, rounded-square, wave  |
|        | `#06B6D4` | hexagon, eye, slash                            |
|        | `#14B8A6` | octagon, cube, bracket                         |
|        | `#3B82F6` | cross, droplet, grid                           |
|        | `#84CC16` | arrow-right, heptagon, dash                    |
|        | `#F59E0B` | arrow-up, crescent, hourglass                  |
|        | `#EC4899` | trapezoid, spiral, ribbon                      |
|        | `#8B5CF6` | chevron-down, moon                             |
|        | `#FF3366` | ring, asterisk, zigzag, caret                  |
|        | `#6366F1` | donut, shield                                  |
|        | `#A855F7` | oval, infinity, arc                            |
|        | `#22C55E` | capsule, leaf                                  |
|        | `#EC4899` | chevron-right, flower                          |

## Typography

### Typeface

The system uses the default Vocs font stack — system sans-serif for body text and monospace for code.
BODY
Inter, -apple-system, BlinkMacSystemFont, "Segoe UI", Helvetica, Arial, sans-serif
CODE
ui-monospace, SFMono-Regular, "SF Mono", Menlo, Consolas, monospace
***

### Scale
H1
Page Title
H2
Section Heading
H3
Subsection Heading
BODY
Every operation is a cryptographically signed, self-authenticating message — verifiable without external lookups. Messages are ordered by Simplex BFT consensus with sub-second finality.
SMALL / CAPTION
Supplementary text for labels, metadata, and annotations.
CODE
shard\_index = project\_id\[0..4] as u32 % num\_shards
***

### Hierarchy Rules

1. **One H1 per page** — the page title. No shape prefix.
2. **H2 with shape** — major sections. Every H2 gets a shape to its left at `width="14"`.
3. **H3 plain** — subsections within an H2. No shape, no decoration.
4. **Body at 0.85 opacity** — slightly softened white for comfortable reading on black.
5. **Code in monospace** — inline `code` and fenced blocks use the monospace stack on `#111111` background.

***

### Inline Code

Use backtick-wrapped `inline code` for:

- Field names: `project_id`, `old_hash`, `da_reference`
- Message types: `COMMIT_BUNDLE`, `REF_UPDATE`
- Hex values: `0x01`, `0x0A`
- CLI commands: `cargo test`, `bun run build`

***

### Tables

Tables use the default Vocs styling — borders from the theme, alternating row contrast via background layers.

| Weight | Usage           | Example                    |
| ------ | --------------- | -------------------------- |
| 700    | H1 page title   | `font-weight: bold`        |
| 600    | H2, H3 headings | `font-weight: semibold`    |
| 400    | Body text       | `font-weight: normal`      |
| 400    | Code blocks     | Monospace at normal weight |

## Writing Guide

The Makechain documentation is the canonical reference for the protocol, its APIs, and tooling. This guide provides editorial standards for writing clear, consistent, and accurate documentation.

This page covers:

- [Writing general documentation](#general-documentation)
- [Writing protocol documentation](#protocol-documentation)
- [Writing API documentation](#api-documentation)

***

### General Documentation

#### Voice and tone

Write in a technical, direct voice. Assume the reader is a developer who understands cryptography, distributed systems, and version control. Do not over-explain fundamentals — link to external references when background is needed.

**Be precise, not verbose.** Every sentence should convey information. Cut filler words, hedging phrases ("it should be noted that"), and unnecessary qualifiers.
- Correct: "Messages are ordered by Simplex BFT consensus with sub-second finality."
- Incorrect: "It's worth noting that messages are typically ordered by what we call Simplex BFT consensus, which generally provides sub-second finality."

#### Second person

Write in the second person. Use "you" when addressing the reader directly.

- Correct: "You submit messages via the gRPC `SubmitMessage` endpoint."
- Incorrect: "We submit messages via the gRPC `SubmitMessage` endpoint."

Reserve "we" for statements where the Makechain team is the explicit subject: "We plan to add P-256/WebAuthn as a secondary signature scheme."

#### Present tense

Use present tense to describe how the system works. Use future tense only for features that do not exist yet.

- Correct: "The execution engine processes messages in two phases."
- Incorrect: "The execution engine will process messages in two phases."

#### Active voice

Use active voice. Passive voice obscures the subject and adds unnecessary words.

- Correct: "The leader proposes blocks by draining the mempool."
- Incorrect: "Blocks are proposed by the leader by draining the mempool."

#### Short sentences

One idea per sentence. If a sentence has more than one comma, split it. Follow a long sentence with a short one.

- Correct: "Each project group operates on its own copy-on-write overlay store. This ensures isolation between projects."
- Incorrect: "Each project group operates on its own copy-on-write overlay store that can read the base state plus any account changes from Phase 1 via the snapshot store, which ensures isolation between projects."

#### Gender-neutral language

Use "they" as a singular pronoun. Address groups as "developers," "users," or "validators."

#### No emojis

Do not use emojis in documentation. Color and visual interest come from the [shape system](/design/shapes), not emoji.
***

### Spelling and Terminology

#### Makechain-specific terms

Use these terms consistently:

| Term | Usage | Not |
| ------------ | --- | --- |
| Make ID | The account identifier. Abbreviate as `mid` in code contexts. | MakeID, make-id |
| message | Lowercase when referring to the concept. | Message (unless starting a sentence) |
| message type | Refer to specific types in `SCREAMING_SNAKE_CASE` with backticks: `PROJECT_CREATE` | ProjectCreate, project\_create |
| project ID | Lowercase "ID." Always note it is content-addressed (BLAKE3 hash of the creation message). | Project Id, projectId |
| ref | A branch or tag pointer. Plural: "refs." | reference, branch (unless clarifying) |
| scope | Key permission level. Three scopes: OWNER, SIGNING, AGENT. Show in ALL CAPS without backticks when used as a label. | scope level, permission |
| DA layer | Data availability layer. Spell out on first use per page, then abbreviate. | data layer, blob store |
| state root | The BLAKE3 merkle root of all state. No hyphen. | stateroot, state-root |
| mempool | One word, lowercase. | mem-pool, memory pool |
| consensus | Lowercase unless starting a sentence. Refer to the specific algorithm as "Simplex BFT." | Consensus |

#### External product casing

Match the canonical casing of external tools and protocols:

- Ed25519 (not ed25519 or ED25519)
- BLAKE3 (not blake3 or Blake3)
- gRPC (not GRPC or Grpc)
- grpc-web (lowercase with hyphen)
- protobuf (lowercase)
- Rust (capitalized)
- rayon (lowercase — it's a crate name)
- Cloudflare (capitalized)
- Ethereum (capitalized), but `ETH_ADDRESS` in code
- Solana (capitalized), but `SOL_ADDRESS` in code

#### Abbreviations

Spell out abbreviations on first use per page, followed by the abbreviation in parentheses:

- "data availability (DA) layer"
- "Byzantine Fault Tolerant (BFT) consensus"
- "compare-and-swap (CAS)"

These abbreviations are acceptable without expansion: HTTP, gRPC, URL, API, CLI, SDK, CI/CD, hex.

Do not use Latin abbreviations. Write "for example" instead of "e.g." and "that is" instead of "i.e."

#### Numbers and units

- Byte counts are explicit: "32 bytes," "64 bytes"
- Hash sizes: "BLAKE3 (32 bytes)" on first mention per page
- Time: use "ms" for milliseconds, "s" for seconds — "\~200ms block time"
- Throughput: "10,000+ messages per second"
- Storage: use "GB" for gigabytes, "KB" for kilobytes
- Hex values: lowercase, no `0x` prefix unless referencing a state key prefix — "prefix `0x01`"

***

### Formatting

#### Headings

One H1 per page — the page title. No shape prefix on H1.

All H2 headings get a shape prefix using an inline ``:

```mdx
## Section Title
```

H3 headings are plain text — no shape, no decoration. Do not skip heading levels (H2 → H4).

Use sentence case for all headings:

- Correct: `## State root computation`
- Incorrect: `## State Root Computation`

Exception: capitalize product names in headings — "Configuring Simplex BFT."

#### Shape assignment

Cycle through the five brand shapes for H2s within a page:

1. square (`#00EEBE`) — primary concept
2. circle (`#7A3BF7`) — secondary / supporting
3. triangle (`#FA7CFA`) — technical detail
4. star (`#FAD030`) — configuration / options
5. heart (`#FE0302`) — supplementary / coda

For pages with more than 5 sections, pull from the [extended shape set](/design/shapes): diamond, hexagon, bolt, shield, sparkle, leaf, flame.

#### Inline code

Use backticks for:

- Message types: `PROJECT_CREATE`, `COMMIT_BUNDLE`
- Field names: `project_id`, `old_hash`, `da_reference`
- Hex prefixes: `0x01`, `0x0A`
- CLI commands: `cargo test`, `bun run build`
- RPC methods: `SubmitMessage`, `GetProject`
- Rust types and crate names: `MemoryStore`, `commonware-consensus`

Do not use backticks for:

- Product names: Makechain, Simplex BFT, Commonware
- Scope labels: OWNER, SIGNING, AGENT (use ALL CAPS plain text)
- File names and directories — use **bold** instead: **app.json**, **src/state/**

#### File and directory names

Use **bold** for file names, directory names, and file extensions in prose:

- Correct: "Your protocol buffer definition is in **proto/makechain.proto**."
- Incorrect: "Your protocol buffer definition is in `proto/makechain.proto`."

#### Code blocks

Always specify the language for fenced code blocks:

````
```rust
let project_id = blake3::hash(&message_bytes);
```
````

Use `bash` for shell commands, `rust` for Rust code, `json` for JSON, and plain triple backticks (no language) for ASCII diagrams and pseudocode.

#### Tables

Use markdown tables for structured reference data. Tables are the primary format for:

- Message type lists with descriptions and scopes
- Configuration parameters with defaults
- State key prefixes and namespaces
- Storage limits
- Error types with triggers

Always include a header row with separator:

```markdown
| Type | Description | Scope |
|------|-------------|-------|
| `PROJECT_CREATE` | Create a new project | SIGNING |
```

#### Lists

Use dashes (`-`) for unordered lists, not asterisks. Start numbered lists at `1`.
Use **bold** for the lead term in definition-style lists:

```markdown
- **Permissionless** — anyone can create projects and push code
- **Content-addressed** — project IDs are BLAKE3 hashes of creation messages
```

Use em dashes (—) to separate the term from its definition, not colons or hyphens.

#### Links

Link descriptive text, not "here" or "this page":

- Correct: "See the [storage limits](/protocol/storage-limits) for per-account capacity."
- Incorrect: "See storage limits [here](/protocol/storage-limits)."

Use relative paths for internal links: `/protocol/overview`, not `https://makechain.pages.dev/protocol/overview`.

#### ASCII diagrams

Use box-drawing characters for architecture diagrams in plain fenced code blocks:

```
┌─────────────┐
│  Component  │
└──────┬──────┘
       │
┌──────▼──────┐
│ Next Layer  │
└─────────────┘
```

Diagrams should be self-contained and readable without surrounding text.

***

### Protocol Documentation

Protocol pages document the specification. They are reference material — precise, complete, and authoritative.

#### Describe behavior, not implementation

Protocol docs describe what the system does, not how the Rust code implements it. Reference implementation details (crate names, function names) belong in code comments and CLAUDE.md, not in user-facing docs.

- Correct: "Account-level messages are applied serially because they modify shared account state."
- Incorrect: "Account-level messages are applied serially using the `apply_account_messages` function in `execution.rs`."

#### Document the envelope

When introducing a message type, always specify:

1. The message type name in `SCREAMING_SNAKE_CASE`
2. The required key scope (OWNER, SIGNING, or AGENT)
3. The conflict key or ordering mechanism (CAS, LWW, append-only)
4. The semantics category (1P or 2P)

#### Show the state change

For each message type, describe:

- **Preconditions** — what must be true for the message to be accepted
- **Effect** — what state changes when the message is applied
- **Failure modes** — what errors are returned and when

#### Use tables for message type reference

The canonical format for listing message types:

```markdown
| Type | Description | Required Scope |
|------|-------------|---------------|
| `PROJECT_CREATE` | Create a new project with name and visibility | SIGNING |
| `PROJECT_REMOVE` | Remove a project (hides refs, commits, collaborators) | OWNER |
```

#### Conflict resolution rules

Always state the conflict resolution rule explicitly:

- "On a timestamp tie, remove wins."
- "Last-write-wins per conflict key `(project_id, field)`."
- "Compare-and-swap: includes expected current hash. If the ref has moved, the update is rejected."

***

### API Documentation

API pages document the gRPC service. They are functional reference — developers look things up here while coding.

#### RPC method format

Document each RPC with:

1. Method name in backticks: `GetProject`
2. Request fields as a table
3. Response fields as a table
4. A curl/grpcurl example when useful
5. Error conditions

#### Field descriptions

Write useful descriptions. Teach the developer something beyond what the type signature shows:

- Correct: "`project_id` — the BLAKE3 hash of the original `PROJECT_CREATE` message (32 bytes, hex-encoded)"
- Incorrect: "`project_id` — the project ID"

#### Pagination

All list endpoints use cursor-based pagination. Document the pattern once and reference it:

- `cursor` — opaque string from a previous response. Omit for the first page.
- `limit` — maximum items to return. Default 50, maximum 200.
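The client-side loop implied by this contract can be sketched in Rust. This is a toy in-memory stand-in for a list RPC — the struct and function names, and the offset-as-string cursor encoding, are illustrative assumptions, not the real API:

```rust
// Hypothetical sketch of the cursor-pagination pattern: request pages,
// passing back the cursor from the previous response, until the server
// stops returning one.
struct Page {
    items: Vec<u32>,
    next_cursor: Option<String>, // absent on the final page
}

// Stand-in for a list RPC; the opaque cursor here is just a stringified offset.
fn fetch_page(data: &[u32], cursor: Option<&str>, limit: usize) -> Page {
    let start: usize = cursor.and_then(|c| c.parse().ok()).unwrap_or(0);
    let end = (start + limit).min(data.len());
    Page {
        items: data[start..end].to_vec(),
        next_cursor: (end < data.len()).then(|| end.to_string()),
    }
}

// Clients loop until no cursor comes back.
fn fetch_all(data: &[u32], limit: usize) -> Vec<u32> {
    let mut all = Vec::new();
    let mut cursor: Option<String> = None; // omit the cursor on the first page
    loop {
        let page = fetch_page(data, cursor.as_deref(), limit);
        all.extend(page.items);
        match page.next_cursor {
            Some(next) => cursor = Some(next),
            None => return all,
        }
    }
}
```

With the default `limit` of 50, a 120-item collection comes back in three pages.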
#### Streaming endpoints

For streaming RPCs (`SubscribeMessages`, `SubscribeBlocks`), document:

- The filter parameters
- What triggers a message on the stream
- Whether the stream replays historical data or is live-only

***

### Page Structure

Every documentation page follows this structure:

```
# Page Title            ← H1, no shape

Introductory paragraph. ← 1-2 sentences establishing context

## First Section        ← H2 with shape

Content...

### Subsection          ← H3, plain

Content...

## Second Section       ← H2 with shape

Content...
```

#### Opening paragraph

Start every page with 1-2 sentences that tell the reader what this page covers and why it matters. No preamble, no "In this section we will discuss..."

- Correct: "Makechain enforces per-account storage limits to prevent unbounded state growth."
- Incorrect: "This page describes the storage limits system. Storage limits are an important part of the protocol."

#### One concept per page

Each page covers one topic. If you find yourself writing "see also" to another section on the same page, consider whether the content should be its own page.

#### End with edges

Close pages with edge cases, error types, or future considerations. The reader who reaches the bottom is looking for details.

***

### Punctuation

#### Oxford commas

Use Oxford commas: "projects, commits, and refs" — not "projects, commits and refs."

#### Em dashes

Use em dashes (—) to set off parenthetical clauses, with a space on each side:

- Correct: "Every operation is a cryptographically signed message — verifiable without external lookups."
- Incorrect: "Every operation is a cryptographically signed message - verifiable without external lookups."

In MDX, write `—` directly (Unicode em dash). The `&mdash;` entity also works.

#### Double quotes

Use double quotes in prose. Reserve single quotes for nested quotation or code contexts:

- Correct: Set the field named "id" to your project's ID.
- Incorrect: Set the field named 'id' to your project's ID.
#### Possessives

Singular possessive: add **'s** regardless of final consonant — "BLAKE3's digest," "the process's state."

Plural possessive ending in **s**: add just the apostrophe — "the validators' signatures."

#### Slashes

No spaces around slashes: "client/server," "Android/iOS."

***

### Glossary

Core terms used throughout Makechain documentation.

#### Protocol

| Term | Definition |
| --- | --- |
| Message | A signed, self-authenticating operation envelope containing a BLAKE3 hash, Ed25519 signature, signer public key, and operation payload |
| Message type | The specific operation: `PROJECT_CREATE`, `COMMIT_BUNDLE`, `REF_UPDATE`, etc. |
| 1P (one-phase) | Unilateral state change with no paired undo message. Categories: Singleton, LWW Register, Append-only, State transition |
| 2P (two-phase) | Add/Remove pairs operating on a set. Remove wins on timestamp tie |
| CAS | Compare-and-swap — optimistic locking where an update includes the expected current value |
| LWW | Last-write-wins — the most recent message by consensus order overwrites prior state |
| Remove-wins | On a timestamp tie between add and remove, the remove takes precedence |
| Conflict key | The tuple that identifies which state slot a message targets, for example `(project_id, field)` |

#### Identity

| Term | Definition |
| --- | --- |
| Make ID (MID) | Unique account identifier (uint64) assigned by the onchain registry |
| Scope | Permission level for a registered key: OWNER (full control), SIGNING (push, manage), AGENT (automated actions) |
| Claim signature | Cryptographic proof linking an external address to a Make ID. Message format: `makechain:verify:` |

#### Consensus

| Term | Definition |
| --- | --- |
| Simplex BFT | Single-chain Byzantine Fault Tolerant consensus protocol from the Commonware library |
| Block | A batch of messages ordered by consensus. \~200ms block time |
| Finality | A block is final after two consecutive blocks are notarized (2-chain rule). \~300ms |
| Notarization | A 2/3+ validator vote to accept a proposed block |
| Mempool | Queue of validated messages waiting to be included in a block |

#### Execution

| Term | Definition |
| --- | --- |
| Account pre-pass | Phase 1: serial execution of account-level messages that modify shared state |
| Project execution | Phase 2: parallel execution of project-scoped messages grouped by `project_id` |
| Overlay store | Copy-on-write state store providing isolation between parallel project groups |
| Snapshot store | Read-only view of base state plus account pre-pass diffs, used as the base for overlay stores |
| State root | BLAKE3 merkle root combining all per-project roots in sorted order |

#### Storage

| Term | Definition |
| --- | --- |
| Storage unit | Yearly capacity allocation for an account. Default: 1 (free tier) |
| DA layer | Data availability layer — separate storage for file content (blobs, trees), referenced by `da_reference` in commit bundles |
| Pruning | Automatic removal of oldest unprotected commit metadata when a project exceeds its limit. Commits referenced by active refs are never pruned |
| Ref | A named pointer (branch or tag) to a commit hash |
| Fast-forward | A ref update where the new commit is a descendant of the current ref target |

#### Infrastructure

| Term | Definition |
| --- | --- |
| Commonware | The library of distributed systems primitives that Makechain builds on |
| tonic | Rust gRPC framework used for the API layer |
| rayon | Rust data-parallelism library used for parallel project execution |
| QMDB | Queryable Merkle Database — planned persistent state backend |

import * as Demo from '../../components/Demo.tsx'

## Create a Project

Generate an Ed25519 keypair, register it with your Make ID, and create a new project. The project ID is the BLAKE3 hash of the `PROJECT_CREATE` message itself — content-addressed from the moment of creation.

### Demo

Network: devnet
Finality: ~300ms
### CLI equivalent

```bash
# Generate a keypair
makechain keygen

# Register the key (relayed from onchain registry)
makechain register-key --scope signing

# Create a project
makechain create-project --name my-first-repo --visibility public
```

### What happened

1. **Key generation** — An Ed25519 keypair is generated locally. The private key never leaves your machine. The public key is 32 bytes.
2. **Key registration** — A `KEY_ADD` message is submitted with your public key and the SIGNING scope. This message is relayed from the onchain registry into the consensus layer so validators can verify your future signatures without querying the chain.
3. **Project creation** — A `PROJECT_CREATE` message is constructed with your chosen name and visibility. The message is BLAKE3-hashed, Ed25519-signed with your private key, and submitted via gRPC.
4. **Consensus** — The message enters the mempool, passes structural validation, and is included in the next block by the current leader. After 2/3+ validators notarize the block and one more block is built on top (2-chain rule), your project is final.

The project ID is the BLAKE3 hash of the `PROJECT_CREATE` message envelope — it is deterministic, content-addressed, and globally unique.

import * as Demo from '../../components/Demo.tsx'

## Fork a Project

Fork an existing project at a specific commit. The forked project gets a new content-addressed ID (the BLAKE3 hash of the `FORK` message) and inherits the source project's refs and commit history at that point.

### Demo

Source: alice/web-framework
Semantics: 1P Singleton
FORK is a 1P Singleton — once created, it cannot be undone. The new project ID is the BLAKE3 hash of this message.
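The content-addressing idea can be sketched in a few lines of Rust. This is purely illustrative: the protocol uses BLAKE3 over the full signed envelope, but BLAKE3 is not in the standard library, so `DefaultHasher` stands in here and the function name is hypothetical:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative only — std's DefaultHasher stands in for BLAKE3. The point:
// the ID is a pure function of the message bytes, so the same FORK message
// always yields the same project ID, and any change yields a different one.
fn project_id(message_bytes: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    message_bytes.hash(&mut hasher);
    hasher.finish()
}
```

Because the ID depends only on the message contents, no coordination is needed to allocate it — uniqueness falls out of hashing.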
### CLI equivalent

```bash
# Fork a project at a specific commit
makechain fork \
  --source e7f8a9b0c1d2... \
  --commit deadbeef0123... \
  --name my-web-framework \
  --visibility public
```

### What happened

1. **Read source** — You query the source project to find the commit you want to fork at. The source project must be accessible to you (public, or you are a collaborator).
2. **Fork point** — The `source_commit_hash` anchors the fork to a precise point in the source project's history. This is recorded permanently in the fork's metadata.
3. **FORK message** — A `FORK` message is submitted. This is a 1P Singleton — it creates a new resource irreversibly. The new project ID is the BLAKE3 hash of the `FORK` message itself (not the source project). This guarantees a globally unique, content-addressed ID.
4. **Account pre-pass** — `FORK` is processed in the serial account pre-pass (Phase 1) because it modifies shared account state (`project_count`). The protocol checks `project_count < storage_units * 10` before allowing the fork. If you are at capacity, the fork is rejected with `StorageLimitExceeded`.

#### Cross-shard note

In a future sharded architecture, `FORK` is the one operation that requires cross-shard coordination. The `FORK` message includes a state proof from the source project to verify the `source_commit_hash` exists without querying the source shard.

## Demos

Interactive walkthroughs of core Makechain operations. Each demo shows the full message lifecycle — from construction to consensus finality.

import * as Demo from '../../components/Demo.tsx'

## Manage Access

Add collaborators to a project, assign permission levels, and manage the access control list. Collaborators are a 2P set — add and remove pairs with remove-wins semantics.

### Demo

Project: my-first-repo
Semantics: 2P Set (remove-wins)
### Permission levels
OWNER
Full control — transfer, archive, delete, manage all collaborators
ADMIN
Manage collaborators, update project metadata, all write operations
WRITE
Push commits, update refs, create branches and tags
READ
View project state, refs, and commits (relevant for private projects)
### Update permissions
Updating an existing collaborator's permission reuses COLLABORATOR\_ADD. The count does not change — only new collaborators increment the count.
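This insert-or-update behavior can be sketched in Rust. The types and method names below are illustrative assumptions, not the real crate API:

```rust
use std::collections::HashMap;

// Hypothetical sketch: COLLABORATOR_ADD is an insert-or-update, so
// re-adding an existing collaborator changes the permission level
// without growing the set.
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum Permission { Read, Write, Admin, Owner }

#[derive(Default)]
pub struct Collaborators(HashMap<u64, Permission>); // Make ID -> permission

impl Collaborators {
    pub fn add(&mut self, mid: u64, level: Permission) {
        self.0.insert(mid, level); // overwrites if the mid already exists
    }
    pub fn permission(&self, mid: u64) -> Option<Permission> {
        self.0.get(&mid).copied()
    }
    pub fn count(&self) -> usize {
        self.0.len() // only new collaborators increment the count
    }
}
```

Modeling the set as a map keyed by Make ID makes the rule fall out naturally: a second `add` for the same key replaces the value and leaves the length unchanged.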
### Remove a collaborator
### How access control works

- **COLLABORATOR\_ADD** requires the signer to have SIGNING scope and at least ADMIN permission on the project
- **COLLABORATOR\_REMOVE** has the same requirements
- The collaborator set is a **2P set** — on a timestamp tie between add and remove for the same collaborator, remove wins
- Permission updates (re-adding with a different level) do not change the collaborator count
- Each project supports up to **50 collaborators** per storage unit

import * as Demo from '../../components/Demo.tsx'

## Push Commits

Bundle commit metadata, upload content to the DA layer, and update refs — all in a single atomic flow. Consensus orders the operations and the ref update uses compare-and-swap to prevent conflicts.

### Demo

Project: my-first-repo
Ref: refs/heads/main
### CLI equivalent

```bash
# Push content (bundles commits + updates ref in one operation)
makechain push --project a1b2c3d4... --ref refs/heads/main
```

### What happened

1. **DA upload** — File content (blobs and tree structures) is uploaded to the data availability layer. The consensus layer never sees the raw content — only a `da_reference` pointing to it.
2. **Commit bundle** — A `COMMIT_BUNDLE` message declares the new commit metadata: hash, parent hashes, tree root, author, and title. Commits are ordered parent-first within the bundle. The required scope is AGENT, allowing CI/CD systems and automated tooling to push on behalf of users.
3. **Ref update** — A `REF_UPDATE` message moves `refs/heads/main` to the new commit. It includes the expected current hash (`old_hash`) for compare-and-swap. If another push landed between your read and write, the CAS check fails and the update is rejected — no silent overwrites. The update must be fast-forward (the new commit descends from the old) unless `force: true`.
4. **Parallel execution** — Both messages are grouped by `project_id` and executed together in the project's parallel execution group. The overlay store provides copy-on-write isolation from other projects in the same block.

import * as Demo from '../../components/Demo.tsx'

## Register a Make ID

Every identity on Makechain starts with a Make ID — a unique uint64 assigned by the onchain registry. You generate an Ed25519 keypair, register it onchain, and the registry relays a `KEY_ADD` message into the consensus layer. From that point, validators can verify your signatures without querying the chain.

### Demo

Registry: onchain contract
Relay: KEY_ADD into consensus
### Add more keys

Once your account is active, you can register additional keys with different scopes.
AGENT keys can push commits and update refs but cannot manage collaborators or account settings. Ideal for CI/CD pipelines and AI agents.
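A hypothetical look-up table makes the scope rules concrete. The `REQUIRED_SCOPE` mapping below is assembled from scope descriptions scattered through this guide; the numeric ranking and the names are assumptions for the sketch, not the protocol's actual encoding.

```python
# Illustrative scope check: does a key's scope meet the minimum scope
# a message type requires? Ranking and table are assumptions.

SCOPE_RANK = {"AGENT": 0, "SIGNING": 1, "OWNER": 2}

REQUIRED_SCOPE = {
    "COMMIT_BUNDLE": "AGENT",       # CI/CD and agents may push commits
    "REF_UPDATE": "AGENT",          # ...and move refs
    "COLLABORATOR_ADD": "SIGNING",  # collaborator management needs SIGNING
    "ACCOUNT_DATA": "SIGNING",      # profile metadata needs SIGNING
    "KEY_ADD": "OWNER",             # only OWNER keys manage other keys
}

def key_may_send(key_scope: str, msg_type: str) -> bool:
    """True when the key's scope meets the message type's minimum scope."""
    return SCOPE_RANK[key_scope] >= SCOPE_RANK[REQUIRED_SCOPE[msg_type]]
```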
### Set your profile
ACCOUNT\_DATA uses LWW Register semantics — the most recent message by consensus order wins per (mid, field) conflict key.
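A minimal sketch of an LWW register keyed by `(mid, field)`, with consensus order modeled as an increasing sequence number (the class name and shape are illustrative, not the node's internal representation):

```python
# Last-write-wins register: the latest message in consensus order wins
# per (mid, field) conflict key.

class LwwRegister:
    def __init__(self):
        self.state = {}  # (mid, field) -> (seq, value)

    def apply(self, mid: int, field: str, value: str, seq: int) -> None:
        key = (mid, field)
        prev = self.state.get(key)
        if prev is None or seq > prev[0]:  # later in consensus order wins
            self.state[key] = (seq, value)

    def get(self, mid: int, field: str):
        entry = self.state.get((mid, field))
        return entry[1] if entry else None
```

Note that a stale update (lower sequence number) is simply dropped; there is no merge.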
### CLI equivalent

```bash
# Generate a keypair
makechain keygen

# Register on the onchain registry (assigns MID)
makechain register

# Add a SIGNING key (requires OWNER key to sign)
makechain register-key --scope signing

# Add an AGENT key
makechain register-key --scope agent

# Set profile metadata
makechain set-account --field username --value alice
makechain set-account --field bio --value "Building the future..."
```

### What happened

1. **Key generation** — An Ed25519 keypair is generated locally. The public key is 32 bytes, and the private key never leaves your machine. Ed25519 uses deterministic signing, so there is no nonce reuse risk.
2. **Onchain registration** — You submit a transaction to the Makechain registry contract with your public key. The registry assigns a unique Make ID (uint64) and emits an event.
3. **Relay into consensus** — The registry event is picked up and relayed into the Makechain consensus layer as a `KEY_ADD` message with OWNER scope. This is processed in the account pre-pass (Phase 1, serial) because it modifies shared account state.
4. **Account live** — After finalization (\~300ms), your account exists in consensus state. You can now create projects, push commits, add collaborators, and verify external addresses.

#### Key scopes

| Scope   | What it can do                                                | Typical use                  |
| ------- | ------------------------------------------------------------- | ---------------------------- |
| OWNER   | Everything — manage keys, transfer projects, delete account   | Your primary key             |
| SIGNING | Push commits, update refs, manage collaborators, set metadata | Day-to-day development       |
| AGENT   | Push commits and update refs only                             | CI/CD, AI agents, automation |

Each account can have up to **50 keys**. Keys are a 2P set — use `KEY_REMOVE` to revoke a compromised key. On a timestamp tie, remove wins.
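The 2P-set semantics used for keys, and equally for collaborators and verifications, can be sketched as a pair of timestamp maps where remove wins ties (illustrative only, not the node's internal representation):

```python
# 2P set with remove-wins tie-breaking, as used for keys, collaborators,
# and verifications. Re-adding with a later timestamp reinstates membership.

class TwoPhaseSet:
    def __init__(self):
        self.adds = {}     # element -> latest add timestamp
        self.removes = {}  # element -> latest remove timestamp

    def add(self, elem, ts):
        if elem not in self.adds or ts > self.adds[elem]:
            self.adds[elem] = ts

    def remove(self, elem, ts):
        if elem not in self.removes or ts > self.removes[elem]:
            self.removes[elem] = ts

    def contains(self, elem):
        if elem not in self.adds:
            return False
        if elem not in self.removes:
            return True
        # Strict inequality: on a timestamp tie, remove wins.
        return self.adds[elem] > self.removes[elem]
```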
## Verify Identity

Link an external address (Ethereum or Solana) to your Make ID by signing a deterministic challenge message. The claim is verified on-chain and stored in consensus state.
The challenge is deterministic — same MID always produces the same message. No nonce, no expiry.
### How verification works

#### Ethereum (ETH\_ADDRESS)

1. You sign the challenge `makechain:verify:` using [EIP-191](https://eips.ethereum.org/EIPS/eip-191) `personal_sign`
2. The validator recovers the signer address from the signature using secp256k1 + keccak256
3. If the recovered address matches the `address` field, the verification is accepted

#### Solana (SOL\_ADDRESS)

1. You sign the challenge `makechain:verify:` with your Solana keypair
2. The validator verifies the Ed25519 signature directly — the Solana address is the public key
3. If the signature is valid, the verification is accepted

#### Removal

Verifications are a 2P set. Submit `VERIFICATION_REMOVE` to unlink an address. On a timestamp tie between add and remove, remove wins.

## API Reference

Makechain exposes a single gRPC service (`MakechainService`) for reading and writing state. The service supports grpc-web for browser clients and server reflection for runtime discovery.

### Write Operations

Submit signed messages for inclusion in the consensus pipeline.

| RPC                   | Description                                                 |
| --------------------- | ----------------------------------------------------------- |
| `SubmitMessage`       | Submit a single signed message (verify, validate, mempool)  |
| `BatchSubmitMessages` | Submit multiple signed messages atomically                  |
| `DryRunMessage`       | Validate a message against current state without submitting |

### Read Operations

Query the current state of projects, accounts, refs, and commits. All list operations support cursor-based pagination (max 200 items per page).
#### Projects

| RPC                  | Description                                             |
| -------------------- | ------------------------------------------------------- |
| `GetProject`         | Get project metadata and status by project ID           |
| `GetProjectByName`   | Look up a project by owner MID and project name         |
| `SearchProjects`     | Search projects by name prefix with pagination          |
| `ListProjects`       | List projects with optional owner filter and pagination |
| `GetProjectActivity` | Recent messages for a specific project                  |

#### Git Objects

| RPC                  | Description                                       |
| -------------------- | ------------------------------------------------- |
| `GetRef`             | Get a single ref by project ID and ref name       |
| `ListRefs`           | List all refs in a project with pagination        |
| `GetRefLog`          | Get the update history of a ref                   |
| `GetCommit`          | Get commit metadata by project ID and commit hash |
| `ListCommits`        | List commits in a project with pagination         |
| `GetCommitAncestors` | Walk the commit graph and return ancestor chain   |
| `ListCollaborators`  | List project collaborators with pagination        |

#### Accounts

| RPC                  | Description                                                                 |
| -------------------- | --------------------------------------------------------------------------- |
| `GetAccount`         | Get account metadata, keys, storage units, project count, and verifications |
| `GetAccountByKey`    | Look up an account by its Ed25519 public key                                |
| `GetAccountActivity` | Recent messages for a specific account                                      |
| `GetKey`             | Inspect a single key entry (scope, status, allowed projects)                |
| `ListKeys`           | List all keys registered to an account with pagination                      |
| `ListVerifications`  | List verified external addresses for an account                             |

#### Blocks & Messages

| RPC            | Description                                                         |
| -------------- | ------------------------------------------------------------------- |
| `GetBlock`     | Get a committed block by block number (includes transaction chunks) |
| `ListBlocks`   | List recent committed blocks (newest first)                         |
| `GetMessage`   | Look up a committed message by its BLAKE3 hash                      |
| `ListMessages` | List committed messages across a range of blocks                    |

### Node Operations

| RPC               | Description                                                                      |
| ----------------- | -------------------------------------------------------------------------------- |
| `GetNodeStatus`   | Current block height, mempool size, pending blocks, network, version, and uptime |
| `GetHealth`       | Liveness and readiness probe for load balancers                                  |
| `GetChainStats`   | Cumulative chain analytics (total messages, projects, accounts, blocks)          |
| `GetSnapshotInfo` | Current snapshot status (block number, entry count, state root)                  |
| `GetMempoolInfo`  | Mempool size and per-type message counts                                         |

### Streaming

| RPC                 | Description                                   |
| ------------------- | --------------------------------------------- |
| `SubscribeMessages` | Server-streaming RPC for live message updates |
| `SubscribeBlocks`   | Server-streaming RPC for live block updates   |

`SubscribeMessages` supports filtering by:

* **`project_id`** — only receive messages for a specific project
* **`types`** — only receive specific message types (e.g., only `COMMIT_BUNDLE`)

### Connection

The default gRPC endpoint is `localhost:50051`. Use `--grpc-addr` to configure.

```bash
# gRPC (native clients)
grpcurl -plaintext localhost:50051 list

# CLI client
cargo run --bin cli -- --endpoint http://localhost:50051 get-account --mid 1
```

### grpc-web

Browser clients can connect via grpc-web (HTTP/1.1). The node accepts HTTP/1.1 requests and translates them to gRPC internally via `tonic-web`. CORS headers are configured to allow cross-origin requests.

## RPC Reference

Complete reference for all `MakechainService` gRPC methods. All byte fields use raw bytes in gRPC and hex encoding in the CLI.

### Pagination

List operations support cursor-based pagination. Pass `limit` (max 200) and receive a `next_cursor` in the response. Pass `next_cursor` as `cursor` in the next request to fetch the next page. An empty `next_cursor` means no more results.
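The cursor loop looks the same for every list operation. A sketch, where `fetch_page` stands in for any `List*` RPC call (the helper name and return shape are invented for illustration):

```python
# Drain a cursor-paginated list RPC. fetch_page(cursor, limit) is assumed
# to return (page_items, next_cursor), mirroring the request/response
# fields described above; limit is capped at 200 server-side.

def list_all(fetch_page, limit=200):
    items, cursor = [], ""
    while True:
        page, next_cursor = fetch_page(cursor=cursor, limit=limit)
        items.extend(page)
        if not next_cursor:  # empty next_cursor means no more results
            return items
        cursor = next_cursor
```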
### Error Handling

All RPCs return standard gRPC status codes:

| Code                 | Meaning                                       |
| -------------------- | --------------------------------------------- |
| `NOT_FOUND`          | Requested resource doesn't exist              |
| `INVALID_ARGUMENT`   | Malformed request (e.g., invalid hash length) |
| `RESOURCE_EXHAUSTED` | Rate limit exceeded                           |
| `INTERNAL`           | Server-side error (state lock poisoned, etc.) |

***

### Write Operations

#### SubmitMessage

Submit a single signed message for consensus inclusion.

```
rpc SubmitMessage(SubmitMessageRequest) returns (SubmitMessageResponse)
```

**Request:** A fully signed `Message` (with `hash`, `signature`, `signer`, and `data` fields populated).

**Response:**

* `hash` (bytes) — BLAKE3 hash of the accepted message
* `accepted` (bool) — whether the message was added to the mempool
* `error` (string) — error description if rejected

**Rejection reasons:** invalid signature, failed structural validation, failed state pre-check (unknown key, wrong scope), rate limited, mempool full, duplicate hash.

#### BatchSubmitMessages

Submit up to 100 signed messages atomically. Each message is validated independently.

```
rpc BatchSubmitMessages(BatchSubmitRequest) returns (BatchSubmitResponse)
```

**Request:** `messages` — array of signed `Message` objects (max 100).

**Response:**

* `results` — per-message result (hash, accepted, error)
* `accepted_count` / `rejected_count` — summary counters

#### DryRunMessage

Validate a message against current state without adding it to the mempool.

```
rpc DryRunMessage(DryRunMessageRequest) returns (DryRunMessageResponse)
```

**Request:** A signed `Message`.

**Response:**

* `valid` (bool) — would this message be accepted?
* `error` (string) — validation error if invalid

***

### Projects

#### GetProject

Get project metadata by project ID.
```
rpc GetProject(GetProjectRequest) returns (GetProjectResponse)
```

**Request:** `project_id` (32 bytes)

**Response:** Project metadata including `name`, `description`, `license`, `visibility`, `owner_mid`, `status` ("active"/"archived"/"removed"), `fork_source`, `ref_count`, `collaborator_count`, `commit_count`, and per-project limits (`max_refs`, `max_collaborators`, `max_commits`).

#### GetProjectByName

Look up a project by owner MID and project name.

```
rpc GetProjectByName(GetProjectByNameRequest) returns (GetProjectResponse)
```

**Request:**

* `owner_mid` (uint64)
* `name` (string)

**Response:** Same as `GetProject`.

#### SearchProjects

Search projects by name prefix.

```
rpc SearchProjects(SearchProjectsRequest) returns (ListProjectsResponse)
```

**Request:**

* `query` (string) — name prefix to match
* `owner_mid` (uint64) — filter by owner (0 = all owners)
* `limit` (uint32) — max results (default 50, max 200)

**Response:** `projects` array + `next_cursor` for pagination.

#### ListProjects

List all projects with optional owner filter.

```
rpc ListProjects(ListProjectsRequest) returns (ListProjectsResponse)
```

**Request:**

* `owner_mid` (uint64) — filter by owner (0 = all)
* `limit` (uint32)
* `cursor` (bytes) — pagination cursor

#### GetProjectActivity

Get recent messages for a project.

```
rpc GetProjectActivity(GetProjectActivityRequest) returns (GetProjectActivityResponse)
```

**Request:**

* `project_id` (32 bytes)
* `limit` (uint32) — max messages (default 50)
* `types` (repeated MessageType) — filter by type (empty = all)

**Response:** `messages` — array of `MessageEntry` (hash, type, timestamp, mid, signer).

***

### Refs

#### GetRef

Get a single ref by project and ref name.

```
rpc GetRef(GetRefRequest) returns (GetRefResponse)
```

**Request:**

* `project_id` (32 bytes)
* `ref_name` (bytes) — e.g., `refs/heads/main`

**Response:** `project_id`, `ref_name`, `ref_type` (BRANCH/TAG), `hash` (current commit), `nonce`.
#### ListRefs

List all refs in a project.

```
rpc ListRefs(ListRefsRequest) returns (ListRefsResponse)
```

**Request:** `project_id`, `limit`, `cursor`

**Response:** `refs` array + `next_cursor`.

#### GetRefLog

Get the update history of a ref (like `git reflog`).

```
rpc GetRefLog(GetRefLogRequest) returns (GetRefLogResponse)
```

**Request:**

* `project_id` (32 bytes)
* `ref_name` (bytes)
* `limit` (uint32)

**Response:** `entries` — array of `RefLogEntry` (nonce, old\_hash, new\_hash, timestamp, mid).

***

### Commits

#### GetCommit

Get commit metadata by hash.

```
rpc GetCommit(GetCommitRequest) returns (GetCommitResponse)
```

**Request:** `project_id` (32 bytes), `commit_hash` (32 bytes)

**Response:** `CommitMeta` with `hash`, `parents`, `tree_root`, `author_mid`, `author_timestamp`, `title`, `message_hash`.

#### ListCommits

List commits in a project.

```
rpc ListCommits(ListCommitsRequest) returns (ListCommitsResponse)
```

**Request:** `project_id`, `limit`, `cursor`

**Response:** `commits` array + `next_cursor`.

#### GetCommitAncestors

Walk the first-parent commit ancestry chain.

```
rpc GetCommitAncestors(GetCommitAncestorsRequest) returns (GetCommitAncestorsResponse)
```

**Request:**

* `project_id` (32 bytes)
* `commit_hash` (32 bytes) — starting commit
* `limit` (uint32) — max ancestors to return

**Response:** `ancestors` — array of `CommitMeta` in reverse chronological order.

***

### Collaborators

#### ListCollaborators

List collaborators for a project.

```
rpc ListCollaborators(ListCollaboratorsRequest) returns (ListCollaboratorsResponse)
```

**Request:** `project_id`, `limit`, `cursor`

**Response:** `collaborators` — array of `CollaboratorEntry` (mid, permission, added\_at) + `next_cursor`.

***

### Accounts

#### GetAccount

Get account metadata and summary.
```
rpc GetAccount(GetAccountRequest) returns (GetAccountResponse)
```

**Request:** `mid` (uint64)

**Response:** `mid`, `storage_units`, `project_count`, `key_count`, `verification_count`, `username`, `bio`, `avatar`, `website`.

#### GetAccountByKey

Look up an account by its Ed25519 public key.

```
rpc GetAccountByKey(GetAccountByKeyRequest) returns (GetAccountResponse)
```

**Request:** `key` (32 bytes — Ed25519 public key)

**Response:** Same as `GetAccount`.

#### GetAccountActivity

Get recent messages authored by an account.

```
rpc GetAccountActivity(GetAccountActivityRequest) returns (GetAccountActivityResponse)
```

**Request:**

* `mid` (uint64)
* `limit` (uint32)

**Response:** `messages` — array of `MessageEntry`.

***

### Keys

#### GetKey

Inspect a single key entry.

```
rpc GetKey(GetKeyRequest) returns (GetKeyResponse)
```

**Request:** `mid` (uint64), `key` (32 bytes — public key)

**Response:** `mid`, `key`, `scope` (OWNER/SIGNING/AGENT), `added_at`, `allowed_projects` (for AGENT-scoped keys).

#### ListKeys

List all keys registered to an account.

```
rpc ListKeys(ListKeysRequest) returns (ListKeysResponse)
```

**Request:** `mid`, `limit`, `cursor`

**Response:** `keys` array + `next_cursor`.

***

### Verifications

#### ListVerifications

List verified external addresses for an account.

```
rpc ListVerifications(ListVerificationsRequest) returns (ListVerificationsResponse)
```

**Request:** `mid`, `limit`, `cursor`

**Response:** `verifications` — array of `VerificationEntry` (address, type, chain\_id, added\_at) + `next_cursor`.

***

### Blocks & Messages

#### GetBlock

Get a committed block by number.

```
rpc GetBlock(GetBlockRequest) returns (GetBlockResponse)
```

**Request:** `block_number` (uint64)

**Response:** `block` (full Block with header, hash, chunks, transactions), `message_count`.

#### ListBlocks

List recent committed blocks (newest first).
```
rpc ListBlocks(ListBlocksRequest) returns (ListBlocksResponse)
```

**Request:**

* `start` (uint64) — starting block number (0 = latest)
* `limit` (uint32) — max blocks to return

**Response:** `blocks` array.

#### GetMessage

Look up a committed message by its BLAKE3 hash.

```
rpc GetMessage(GetMessageRequest) returns (GetMessageResponse)
```

**Request:** `hash` (32 bytes)

**Response:** `message` (full Message), `block_number` (which block it was committed in).

#### ListMessages

List committed messages across a range of blocks.

```
rpc ListMessages(ListMessagesRequest) returns (ListMessagesResponse)
```

**Request:**

* `start_block` (uint64) — start of range (0 = latest)
* `end_block` (uint64) — end of range (0 = same as start)
* `limit` (uint32) — max messages

**Response:** `messages` — array of `MessageEntry`.

***

### Node Operations

#### GetNodeStatus

Current node status.

```
rpc GetNodeStatus(GetNodeStatusRequest) returns (GetNodeStatusResponse)
```

**Response:** `block_height`, `mempool_size`, `pending_blocks`, `network` (devnet/testnet/mainnet), `version`, `uptime_seconds`.

#### GetChainStats

Cumulative chain analytics.

```
rpc GetChainStats(GetChainStatsRequest) returns (GetChainStatsResponse)
```

**Response:** `total_messages`, `total_projects`, `total_accounts`, `total_blocks`, `total_refs`, `total_commits`.

#### GetHealth

Liveness and readiness probe for load balancers.

```
rpc GetHealth(GetHealthRequest) returns (GetHealthResponse)
```

**Response:** `live` (bool), `ready` (bool), `block_height`.

#### GetSnapshotInfo

Current snapshot persistence status.

```
rpc GetSnapshotInfo(GetSnapshotInfoRequest) returns (GetSnapshotInfoResponse)
```

**Response:** `block_number`, `entry_count`, `state_root`, `estimated_size_bytes`.

#### GetMempoolInfo

Mempool size and per-type message breakdown.

```
rpc GetMempoolInfo(GetMempoolInfoRequest) returns (GetMempoolInfoResponse)
```

**Response:** `total_pending`, `by_type` (map of MessageType → count).
***

### Streaming

#### SubscribeMessages

Server-streaming RPC for live message updates. Receives every message as it's committed to a block.

```
rpc SubscribeMessages(SubscribeRequest) returns (stream Message)
```

**Request:**

* `project_id` (bytes) — filter to a specific project (empty = all)
* `types` (repeated MessageType) — filter by message type (empty = all)

**Stream:** Continuous stream of `Message` objects.

#### SubscribeBlocks

Server-streaming RPC for live block updates.

```
rpc SubscribeBlocks(SubscribeBlocksRequest) returns (stream GetBlockResponse)
```

**Stream:** Continuous stream of `GetBlockResponse` for each committed block.

***

### Rate Limiting

All write operations (`SubmitMessage`, `BatchSubmitMessages`) are rate-limited per account (MID). The default configuration allows 100 burst tokens with 10 tokens/second refill. When rate-limited, the RPC returns `RESOURCE_EXHAUSTED`.

Read operations and streaming RPCs are not rate-limited.
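The described defaults correspond to a classic token bucket. A sketch with the stated parameters (100 burst tokens, 10 tokens/second refill, one bucket per MID); the class names are illustrative, not the node's implementation:

```python
import time

# Token-bucket rate limiter matching the documented write-path defaults.

class TokenBucket:
    def __init__(self, burst=100, refill_per_sec=10):
        self.capacity = burst
        self.refill = refill_per_sec
        self.tokens = float(burst)
        self.last = time.monotonic()

    def try_acquire(self, now=None):
        now = time.monotonic() if now is None else now
        elapsed = now - self.last
        self.last = now
        # Refill proportionally to elapsed time, capped at burst capacity.
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill)
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller maps this to RESOURCE_EXHAUSTED

class RateLimiter:
    def __init__(self):
        self.buckets = {}  # mid -> TokenBucket

    def allow(self, mid, now=None):
        bucket = self.buckets.setdefault(mid, TokenBucket())
        return bucket.try_acquire(now)
```

With these defaults a client can burst 100 writes, then sustain about 10 writes per second per MID.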