QMDB

March 10, 2026

QMDB (Quick Merkle Database) is a verifiable database built by Commonware and LayerZero. It unifies key-value storage and Merkle tree maintenance into a single system, solving one of the oldest performance bottlenecks in blockchain infrastructure.

The core insight is an append-only twig design. Instead of mutating state in place (the way typical Merkle Patricia Trie implementations work), QMDB organizes data into fixed-size immutable subtrees called “twigs” — each holding exactly 2,048 leaf entries. New writes append; old entries get marked inactive. This means:

  • One SSD read per state access (the indexer gives you the file offset directly)
  • O(1) I/O for updates — new entries go to an in-memory twig, flushed to SSD in a single sequential write when full
  • Merkleization happens entirely in memory — only the tree’s right edge changes, so you never touch disk for hash updates
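The fill-and-flush mechanics can be sketched in a few lines. This is a toy model, not QMDB’s actual Rust implementation: `TwigStore`, the list standing in for the SSD file, and the id arithmetic are all illustrative.

```python
TWIG_SIZE = 2048  # leaves per twig, as in the paper

class TwigStore:
    """Toy model of append-only twig storage (hypothetical structure)."""
    def __init__(self):
        self.fresh = []  # in-memory twig currently being filled
        self.disk = []   # flushed twigs, standing in for the SSD file

    def append(self, entry):
        # O(1): new entries only ever go to the in-memory twig.
        self.fresh.append(entry)
        # A stable id doubles as a file offset for the indexer.
        entry_id = len(self.disk) * TWIG_SIZE + len(self.fresh) - 1
        if len(self.fresh) == TWIG_SIZE:
            # One sequential write flushes the whole twig at once.
            self.disk.append(self.fresh)
            self.fresh = []
        return entry_id

store = TwigStore()
ids = [store.append({"key": f"k{i}"}) for i in range(TWIG_SIZE + 1)]
# After 2,049 appends: one full twig on "disk", one entry in RAM.
assert len(store.disk) == 1 and len(store.fresh) == 1
```

Note that nothing is ever rewritten: a flushed twig is immutable, which is what makes the sequential-write pattern possible.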

Here’s how a state update actually works — say a token balance changes:

  sequenceDiagram
    participant App
    participant Indexer as Indexer (RAM)
    participant Twig as Fresh Twig (RAM)
    participant SSD

    App->>Indexer: Where is key K?
    Indexer->>SSD: Read at offset 0x3F…
    SSD-->>App: Old entry E
    App->>Twig: Append new entry E'
    Note over Twig: E'.OldId = E.Id
    App->>Indexer: K now points to E'
    Note over SSD: Old entry stays on disk,<br/>just marked inactive

No tree traversal on disk. The indexer (a B-tree in RAM) gives you the exact file offset — one SSD read. The new entry appends to the current twig in RAM. When that twig fills up (2,048 entries), it flushes to disk in a single sequential write. The Merkle tree is recomputed entirely in memory.
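The update path in the diagram reduces to three constant-time steps. A minimal sketch, with illustrative names (`QmdbSketch`, `old_id`) and a Python dict standing in for the real in-RAM indexer:

```python
class QmdbSketch:
    """Toy update path: one indexer lookup, one append, one repoint."""
    def __init__(self):
        self.entries = []  # append-only log standing in for twig files
        self.indexer = {}  # in-RAM map: key -> offset of latest entry

    def update(self, key, value):
        old_id = self.indexer.get(key)  # RAM lookup, no tree walk
        entry = {"id": len(self.entries), "key": key,
                 "value": value, "old_id": old_id}
        self.entries.append(entry)      # append to the fresh twig
        self.indexer[key] = entry["id"] # repoint the indexer
        # The superseded entry stays in the log, merely inactive.
        return entry

db = QmdbSketch()
db.update("alice", 10)
e = db.update("alice", 7)
assert e["old_id"] == 0 and db.indexer["alice"] == 1
```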

Traditional systems store tree nodes and data in separate key-value stores on disk, so every update triggers O((log N)^2) disk I/O. QMDB collapses that to O(1).

The numbers

On an AWS c7gd.metal (64 vCPUs, 256GB DRAM, 2 NVMe SSDs):

  • 6x throughput over RocksDB
  • 8x throughput over NOMT (the previous state-of-the-art verifiable database)
  • 601K updates/sec at 6 billion entries

On high-end hardware (i8g.metal-24xl): 2.28 million updates/sec, supporting over 1 million token transfers per second.

On a $540 mini PC (AMD R7-5825U, 64GB DDR4, 4TB NVMe): still hits 150K updates/sec up to 1 billion entries. Consumer-grade hardware running verifiable state at scale.

They tested with 15 billion entries — 10x Ethereum’s 2024 state size. Theoretical max is 280 billion entries on a single machine.

Memory efficiency

The memory footprint is remarkably small. For 1 billion entries, QMDB needs roughly 160MB of DRAM for twig roots and bitmaps. The in-memory indexer uses ~15.4 bytes per key. A hybrid SSD-optimized variant drops that to 2.3 bytes per key.

When a twig goes fully inactive (all entries overwritten), it gets pruned — the entire 2,048-entry subtree compresses down to a single 32-byte hash and a 256-byte bitmap. That’s 99.9% compression.

  graph LR
    A["Full Twig\n2,048 entries\n~256 KB"] -- "all entries overwritten\nby newer values" --> B["Pruned Twig\n32-byte hash\n+ 256-byte bitmap"]
    style A fill:#E84855,color:#fff
    style B fill:#3BB273,color:#fff
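The 99.9% figure checks out as back-of-envelope arithmetic. The ~128-byte entry size below is an assumption (the paper’s exact on-disk layout differs), chosen to match the ~256 KB twig in the diagram:

```python
# Pruning compression, assuming ~128 bytes per entry (illustrative).
ENTRY_BYTES = 128
full_twig = 2048 * ENTRY_BYTES  # ~256 KB of entry data
pruned = 32 + 256               # root hash + activity bitmap
ratio = 1 - pruned / full_twig
print(f"{full_twig} B -> {pruned} B ({ratio:.1%} saved)")
```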

Why it matters

Every blockchain stores state — account balances, contract storage, token ownership. Proving that state is correct (or that something doesn’t exist) requires Merkle proofs. The faster you can update state and compute proofs, the more transactions your chain can process.

QMDB also supports historical state queries at any block height and exclusion proofs (proving a key doesn’t exist), which are hard problems in most Merkle tree implementations.

Exclusion proofs work because every entry stores a NextKey pointer — the lexicographically next key in the database. To prove key K doesn’t exist, you just show the entry where the gap is:

  graph LR
    A["Entry: alice\nNextKey = charlie"] --> B["Entry: charlie\nNextKey = dave"]
    B --> C["Entry: dave\nNextKey = ..."]
    style A fill:#2A40FF,color:#fff

Want to prove “bob” doesn’t exist? Show alice’s entry — since alice.NextKey = charlie, there’s no room for bob. One Merkle proof, no scanning.
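The check itself is a one-liner once you have a Merkle proof for the straddling entry. A sketch (the function name is mine; in practice you would also verify the entry’s Merkle proof against the root):

```python
def proves_absent(entry_key, next_key, queried):
    """True if `queried` cannot exist, given a proven entry whose
    key range (entry_key, next_key) straddles it."""
    return entry_key < queried < next_key

assert proves_absent("alice", "charlie", "bob")       # bob is absent
assert not proves_absent("alice", "charlie", "dave")  # wrong gap
```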

Historical queries use a similar trick: every entry stores an OldId linking to the previous version of that key, so you can walk backward through time to reconstruct state at any block height.
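The backward walk is a simple pointer chase. A sketch with a hand-built version chain (entry ids, block heights, and the dict layout are all hypothetical):

```python
# Three versions of one key, newest first, linked by old_id.
entries = {
    2: {"value": 50,  "height": 300, "old_id": 1},
    1: {"value": 80,  "height": 200, "old_id": 0},
    0: {"value": 100, "height": 100, "old_id": None},
}

def value_at(latest_id, height):
    """Follow old_id pointers until an entry predates `height`."""
    eid = latest_id
    while eid is not None:
        e = entries[eid]
        if e["height"] <= height:
            return e["value"]
        eid = e["old_id"]
    return None  # key did not exist yet at that height

assert value_at(2, 250) == 80  # state as of block 250
```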

What about agents?

I keep thinking about what this unlocks for an agent graph protocol. As autonomous agents proliferate — executing transactions, delegating to sub-agents, managing permissions — the graph of who-can-do-what mutates constantly and needs to be verifiable without trusting a central registry. QMDB’s exclusion proofs could let any agent verify “this key has not been revoked” or “this delegation does not exist” without downloading the full graph. Historical queries mean you could prove what an agent’s permission set looked like at the time it took an action — critical for auditing autonomous decisions after the fact. And at 600K+ updates/sec on commodity hardware, you could Merkleize an entire agent permission graph in real time, even as thousands of agents spin up, delegate, and expire.

Further reading

The paper is thorough and the code is MIT/Apache-2 licensed. Commonware wrote a good companion blog post explaining the design decisions and how QMDB fits into their broader infrastructure work.
