Shockwave
Shockwave is the batching and propagation layer of Spicenet. One of its main goals is to increase performance and throughput while maintaining determinism. Shockwave constitutes the pipeline of batch creation, propagation to the network, and verification. It also includes SpiceQOS, a novel batch layout that preserves fairness while respecting priority fees, which we'll discuss in more depth later in the document.
First, let's get an overview of what Shockwave provides at a high level:
Transaction-level pre-confirmation granularity
Deterministic batch propagation for optimal bandwidth usage
Faster batch ingestion by validators
SpiceQOS: Novel batch layout preserving fairness while respecting priority fees.
Let's understand each component in depth.
Currently, all rollup users have to wait for their transaction to be included in a batch or block before receiving a confirmation. In other words, a batch or block has to be prepared before its constituent transactions can be confirmed. Spicenet disrupts this status quo by completely removing the concept of a block. While the smallest unit of organisation in many chains is a block, on Spicenet it is a transaction, which is far more granular. This also means pre-confirmations can be given at the transaction level. What does this mean for end-user UX?
This means that when a user sends a transaction, they can expect a confirmation back nearly instantly, because the protocol does not have to include the transaction in a larger block before confirming it. However, one must not mistake this mechanism for skipping batch inclusion altogether. It simply ensures that the user receives a confirmation the moment they send the transaction, rather than after it is included in a batch; the transaction is still included in a batch after the pre-confirmation is issued.
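To make the flow concrete, here is a minimal sketch of transaction-level pre-confirmation, assuming a hypothetical `Sequencer` with `submit` and `flush_batch` methods; none of these names come from Spicenet's actual API, they only illustrate the ordering of events.

```rust
// Minimal sketch: a node acknowledges a transaction the moment it accepts it,
// and batch inclusion happens afterwards. All names here are illustrative
// assumptions, not Spicenet's real interfaces.

#[derive(Debug)]
struct PreConfirmation {
    tx_id: u64,
    accepted_at: u64, // logical timestamp of acceptance
}

#[derive(Default)]
struct Sequencer {
    clock: u64,
    pending: Vec<u64>, // accepted-but-not-yet-batched transaction ids
}

impl Sequencer {
    /// Accept a transaction and return a pre-confirmation immediately,
    /// without waiting for a batch to be formed around it.
    fn submit(&mut self, tx_id: u64) -> PreConfirmation {
        self.clock += 1;
        self.pending.push(tx_id);
        PreConfirmation { tx_id, accepted_at: self.clock }
    }

    /// Later, drain the pending transactions into a batch. The pre-confirmation
    /// already promised inclusion; this step only fulfils that promise.
    fn flush_batch(&mut self) -> Vec<u64> {
        std::mem::take(&mut self.pending)
    }
}

fn main() {
    let mut sequencer = Sequencer::default();
    let receipt = sequencer.submit(42);
    println!("pre-confirmed tx {} at t={}", receipt.tx_id, receipt.accepted_at);

    // The transaction is still included in a batch, just after the user
    // has already received their confirmation.
    let batch = sequencer.flush_batch();
    assert!(batch.contains(&42));
    println!("batch contains {} txs", batch.len());
}
```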
Spicenet leverages a deterministic batch propagation mechanism to minimise retransmits and wasted bandwidth. Rather than having the current proposer propagate batches to the entire network on its own, Spicenet uses an epidemic tree structure to create a topology of nodes in which each parent node transmits batches to its child nodes, those children transmit batches to their own children, and so on until all nodes have received the batches. The root node is the proposer, which propagates batches initially. A node's position in the epidemic tree is determined by its stake: the higher the stake, the higher the position in the tree, and the faster the node receives batches from the proposer.
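The sketch below illustrates stake-weighted placement in such a tree, assuming a fixed fan-out and a level-ordered layout; the `Node` struct, the `FANOUT` value, and the indexing scheme are assumptions made for the sketch, not Spicenet's actual topology code.

```rust
// Minimal sketch of a stake-weighted epidemic (fan-out) tree.
// The proposer sits at the root; higher-stake nodes sit closer to the root
// and therefore receive batches earlier.

#[derive(Debug)]
struct Node {
    id: u32,
    stake: u64,
}

const FANOUT: usize = 2; // each parent relays batches to this many children (assumed)

/// Order validators by stake, descending, to place them level by level.
fn build_relay_order(mut validators: Vec<Node>) -> Vec<Node> {
    validators.sort_by(|a, b| b.stake.cmp(&a.stake));
    validators
}

/// In a level-ordered k-ary tree, the children of the node at index `i`
/// occupy indices k*i + 1 ..= k*i + k.
fn children_of(i: usize, len: usize) -> Vec<usize> {
    (1..=FANOUT)
        .map(|c| FANOUT * i + c)
        .filter(|&idx| idx < len)
        .collect()
}

fn main() {
    // The proposer is the root and propagates batches first.
    let proposer = Node { id: 0, stake: 1_000 };

    let validators = vec![
        Node { id: 1, stake: 50 },
        Node { id: 2, stake: 900 },
        Node { id: 3, stake: 300 },
        Node { id: 4, stake: 120 },
        Node { id: 5, stake: 700 },
    ];

    let mut tree = vec![proposer];
    tree.extend(build_relay_order(validators));

    for (i, node) in tree.iter().enumerate() {
        let kids: Vec<u32> = children_of(i, tree.len())
            .into_iter()
            .map(|idx| tree[idx].id)
            .collect();
        println!("node {} (stake {}) relays to {:?}", node.id, node.stake, kids);
    }
}
```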
Another property of current blockchains is that validators have to wait for a full block to be produced before ingesting it via P2P. For big blocks this often becomes an issue, as constructing larger blocks consumes more of the proposer's resources and time. Hence, Spicenet breaks batches down into "child batches". Each child batch references a single parent batch, and each parent batch is just an enumerated reference to its child batches. How does this help? Instead of propagating the parent batch after it is built, the proposer propagates the child batches as they are being built, essentially streaming "shreds" of the parent batch. This allows nodes to consume batches faster, and thereby replicate state faster, leading to faster settlement (remember that Spicenet is a Sovereign Rollup, so settlement is handled natively).
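The sketch below illustrates the idea: a proposer thread streams hypothetical `ChildBatch` values over a channel while a validator ingests them as they arrive, and the `ParentBatch` is finalised at the end as an enumerated list of child identifiers. The types, the channel-based "network", and the hash-based identifiers are assumptions for illustration, not Spicenet's wire format.

```rust
// Minimal sketch of streaming child batches ("shreds") before the parent
// batch is complete.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::sync::mpsc;
use std::thread;

#[derive(Debug, Clone, Hash)]
struct ChildBatch {
    index: u32,
    txs: Vec<String>, // opaque transaction payloads
}

#[derive(Debug)]
struct ParentBatch {
    child_ids: Vec<u64>, // enumerated references to the child batches
}

fn id_of(child: &ChildBatch) -> u64 {
    let mut h = DefaultHasher::new();
    child.hash(&mut h);
    h.finish()
}

fn main() {
    let (tx, rx) = mpsc::channel::<ChildBatch>();

    // Proposer: stream each child batch the moment it is sealed,
    // instead of waiting for the whole parent batch to be built.
    let proposer = thread::spawn(move || {
        let mut child_ids = Vec::new();
        for index in 0..3u32 {
            let child = ChildBatch {
                index,
                txs: vec![format!("tx-{index}-a"), format!("tx-{index}-b")],
            };
            child_ids.push(id_of(&child));
            tx.send(child).expect("validator disconnected");
        }
        // The parent batch is only an enumerated reference to its children.
        ParentBatch { child_ids }
    });

    // Validator: ingests child batches as they arrive, so it can start
    // replicating state before the parent reference is even published.
    let mut seen = Vec::new();
    for child in rx {
        println!("ingesting child batch {} with {} txs", child.index, child.txs.len());
        seen.push(id_of(&child));
    }

    let parent = proposer.join().unwrap();
    assert_eq!(parent.child_ids, seen); // validator saw exactly the referenced children
    println!("parent references {} child batches", parent.child_ids.len());
}
```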
One of the core problems that Spicenet aims to solve is assigning priority to transactions that are critical to the functioning of an exchange, such as oracle updates, liquidations, futures expiries and rollovers, and funding events. So how do we implement this in a permissionless blockchain? Enter SpiceQOS. SpiceQOS defines the batch layout for Spicenet such that a batch is dynamically split into a "top-of-batch" queue and a "bottom-of-batch" queue. Each has its own transaction ordering policy, targeting a separate audience of users. Ordering in the top-of-batch queue is determined by the priority fees paid by in-flight transactions; ordering in the bottom-of-batch queue is FIFO (first in, first out). Transactions are routed into one queue or the other based on the "split", a priority fee threshold calculated as a function of recent chain congestion. If a user sets their priority fee below the split, their transaction automatically goes to the bottom-of-batch queue and is ordered FIFO; if they set it above the split, their transaction goes to the top-of-batch queue and is ordered by the magnitude of its priority fee (the higher the priority fee, the higher the transaction's position in the queue).
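A minimal sketch of this layout is shown below, assuming the split is expressed as a concrete priority fee threshold and that fees exactly at the split land in the top-of-batch queue; the `Transaction` shape, the tie-breaking rule, and the example split value are placeholders rather than anything defined by the protocol.

```rust
// Minimal sketch of a SpiceQOS-style batch layout: fees at or above the split
// go to top-of-batch (ordered by fee), fees below the split go to
// bottom-of-batch (ordered FIFO). Illustrative only.

#[derive(Debug, Clone)]
struct Transaction {
    id: u32,
    priority_fee: u64, // fee offered by the sender
    arrival: u64,      // arrival order, used for FIFO ordering
}

/// Lay out a batch as (top-of-batch, bottom-of-batch).
fn layout_batch(pending: Vec<Transaction>, split: u64) -> (Vec<Transaction>, Vec<Transaction>) {
    let (mut top, mut bottom): (Vec<_>, Vec<_>) = pending
        .into_iter()
        .partition(|tx| tx.priority_fee >= split);
    top.sort_by(|a, b| b.priority_fee.cmp(&a.priority_fee)); // highest fee first
    bottom.sort_by_key(|tx| tx.arrival); // first in, first out
    (top, bottom)
}

fn main() {
    // Assume recent congestion put the split at 100 (illustrative value).
    let split = 100;
    let pending = vec![
        Transaction { id: 1, priority_fee: 20, arrival: 0 },
        Transaction { id: 2, priority_fee: 450, arrival: 1 },
        Transaction { id: 3, priority_fee: 100, arrival: 2 },
        Transaction { id: 4, priority_fee: 0, arrival: 3 },
    ];
    let (top, bottom) = layout_batch(pending, split);
    println!("top-of-batch:    {:?}", top.iter().map(|t| t.id).collect::<Vec<_>>());
    println!("bottom-of-batch: {:?}", bottom.iter().map(|t| t.id).collect::<Vec<_>>());
}
```

In this example, transactions 2 and 3 clear the split and are ordered by fee, while transactions 1 and 4 fall below it and keep their arrival order.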