* Design and document a multisig rotation flow

* Make Scanner::eventualities a HashMap so it's per-key

* Don't drop eventualities, always follow through on them

  Technical improvements made along the way.

* Start creating an isolated object to manage multisigs, which doesn't
  require being a signer

  Removes key from SubstrateBlock.

* Move Scanner/Scheduler under multisigs

* Move Batch construction into MultisigManager

* Clarify "should" in Multisig Rotation docs

* Add block_number to MultisigManager, as it controls the scanner

* Move sign_plans into MultisigManager

  Removes ThresholdKeys from prepare_send.

* Make SubstrateMutable an alias for MultisigManager

* Rewrite Multisig Rotation

  The prior scheme had a possible exploit where funds were sent to the old
  multisig, then burnt on Serai to send from the new multisig, locking
  liquidity for 6 hours. While a fee could be applied to stragglers to make
  this attack unprofitable, the newly described scheme avoids all this.

* Add mini

  mini is a miniature version of Serai, emphasizing Serai's nature as a
  collection of independent clocks. The intended use is to identify race
  conditions and prove protocols are comprehensive regarding when certain
  clocks tick.

  This uses loom, a prior candidate for evaluating the processor/coordinator
  as free of race conditions (#361).

* Use mini to prove a race condition in the current multisig rotation docs,
  and prove safety of alternatives

  Technically, the prior commit had mini prove the race condition. The docs
  currently say the activation block of the new multisig is the block after
  the next Batch's. If the two next Batches had already entered the mempool
  prior to set_keys being called, the second next Batch would be expected to
  contain the new key's data yet fail to, as the key wasn't public when the
  Batch was actually created.

  The naive solution is to create a Batch, publish it, wait until it's
  included, and only then scan the next block.
  This sets a bound of `Batch publication time < block time`. Optimistically,
  we can publish a Batch in 24s, while our shortest block time is 2m.
  Accordingly, we should be fine with the naive solution, which doesn't take
  advantage of throughput. #333 may significantly change latency however, and
  require an algorithm whose throughput exceeds the rate of blocks created.

  In order to re-introduce parallelization, enabling throughput, we need to
  define a safe range of blocks to scan without Serai ordering the first one.
  mini demonstrates safety of scanning n blocks Serai hasn't acknowledged, so
  long as the first is scanned before block n+1 is (shifting the n-block
  window).

  The docs will be updated next, to reflect this.

* Fix Multisig Rotation

  I believe this is finally good enough to be final.

  1) Fixes the race condition present in the prior document, as demonstrated
     by mini.

     `Batch`s for block `n` and `n+1` may have been in the mempool when a
     multisig's activation block was set to `n`. This would cause a
     potentially distinct `Batch` for `n+1`, despite `n+1` already having a
     signed `Batch`.

  2) Tightens when UIs should use the new multisig to prevent eclipse
     attacks, and adds protection against `Batch` publication delays.

  3) Removes liquidity fragmentation by tightening flow/handling of latency.

  4) Several clarifications and documentation of reasoning.

  5) Correction of "prior multisig" to "all prior multisigs" regarding
     historical verification, with explanation why.

* Clarify terminology in mini

  Synchronizes it from my original thoughts on potential schema to the design
  actually created.

* Remove most of processor's README for a reference to docs/processor

  This does drop some misc commentary, though none too beneficial. The
  section on scanning, deemed sufficiently beneficial, has been moved to a
  document and expanded on.
* Update scanner TODOs in line with new docs

* Correct documentation on Bitcoin::Block::time, and Block::time

* Make the scanner in MultisigManager no longer public

* Always send ConfirmKeyPair, regardless of if in-set

* Cargo.lock changes from a prior commit

* Add a policy document on defining a Canonical Chain

  I accidentally committed a version of this with a few headers earlier, and
  this is a proper version.

* Competent MultisigManager::new

* Update processor's comments

* Add mini to copied files

* Re-organize Scanner per multisig rotation document

* Add RUST_LOG trace targets to e2e tests

* Have the scanner wait once it gets too far ahead

  Also bug fixes.

* Add activation blocks to the scanner

* Split received outputs into existing/new in MultisigManager

* Select the proper scheduler

* Schedule multisig activation as detailed in documentation

* Have the Coordinator assert if multiple `Batch`s occur within a block

  While the processor used to have ack_up_to_block, enabling skips in the
  block acked, support for this was removed while reworking it for multiple
  multisigs. It should happen extremely infrequently.

  While it would still be beneficial to have if multiple `Batch`s could occur
  within a block (with the complexity here not being worth adding that ban as
  a policy), multiple `Batch`s were blocked for DoS reasons.

* Schedule payments to the proper multisig

* Correct >= to <

* Use the new multisig's key for change on schedule

* Don't report External TXs to prior multisig once deprecated

* Forward from the old multisig to the new one at all opportunities

* Move unfulfilled payments in queue from prior to new multisig

* Create MultisigsDb, splitting it out of MainDb

  Drops the call to finish_signing from the Signer. While this will cause
  endless re-attempts, the Signer will still consider them completed and drop
  them, making this an O(n) cost at boot even if we did nothing from here.
  The MultisigManager should call finish_signing once the Scanner completes
  the Eventuality.

* Don't check Scanner-emitted completions, trust they are completions

  Prevents needing to use async code to mark the completion and creates a
  fault-free model. The current model, on fault, would cause a lack of marked
  completion in the signer.

* Fix a possible panic in the processor

  A shorter-chain reorg could cause this assert to trip. It's fixed by
  de-duplicating the data, as the assertion checked consistency. Without the
  potential for inconsistency, it's unnecessary.

* Document why an existing TODO isn't valid

* Change when we drop payments for being to the change address

  The earlier timing prevents creating Plans solely to the branch address,
  causing the payments to be dropped, and the TX to become an effective
  aggregation TX.

* Extensively document solutions to Eventualities being potentially created
  after having already scanned their resolutions

* When closing, drop External/Branch outputs which don't cause progress

* Properly decide if Change outputs should be forwarded or not when closing

  This completes all code needed to make the old multisig have a finite
  lifetime.

* Commentary on forwarding schemes

* Provide a 1 block window, with liquidity fragmentation risks, due to
  latency

  On Bitcoin, this will be 10 minutes for the relevant Batch to be confirmed.
  On Monero, 2 minutes. On Ethereum, ~6 minutes.

  Also updates the Multisig Rotation document with the new forwarding plan.

* Implement transaction forwarding from old multisig to new multisig

  Identifies a fault where Branch outputs which shouldn't be dropped may be,
  if another output fulfills their next step. Locking Branch fulfillment down
  to only Branch outputs is not done in this commit, but will be in the next.
* Only let Branch outputs fulfill branches

* Update TODOs

* Move the location of handling signer events to avoid a race condition

* Avoid a deadlock by using a RwLock on a single txn instead of two txns

* Move Batch ID out of the Scanner

* Increase from one block of latency on new keys activation to two

  For Monero, this offered just two minutes when our latency to publish a
  Batch is around a minute already. This does increase the time our liquidity
  can be fragmented by, up to 20 minutes (Bitcoin), yet it's a stupid attack
  only possible once a week (when we rotate). Prioritizing normal users'
  transactions not being subject to forwarding is more important here.

  Ideally, we'd not do +2 blocks, yet instead add `time` (such as
  +10 minutes), making this agnostic of the underlying network's block
  scheduling. That's a complexity not worth it.

* Split MultisigManager::substrate_block into multiple functions

* Further tweaks to substrate_block

* Acquire a lock on all Scanner operations after calling ack_block

  Gives time to call register_eventuality and initiate signing.

* Merge sign_plans into substrate_block

  Also ensures the Scanner's lock isn't prematurely released.

* Use a HashMap to pass to-be-forwarded instructions, not the DB

* Successfully determine in ClosingExisting

* Move from 2 blocks of latency when rotating to 10 minutes

  Superior as noted in 6d07af92ce10cfd74c17eb3400368b0150eb36d7, now trivial
  to implement thanks to the prior commit.

* Add note justifying measuring time in blocks when rotating

* Implement delaying of outputs received early to the new multisig per
  specification

* Documentation on why Branch outputs don't have the race condition concerns
  Change outputs do

  Also ensures 6 hours is at least N::CONFIRMATIONS, for sanity purposes.

* Remove TODO re: sanity checking Eventualities

  We sanity check the Plan the Eventuality is derived from, and the
  Eventuality is handled moments later (in the same file, with a clear call
  path).
  There's no reason to add such APIs to Eventualities for a sanity check
  given that.

* Add TODO(now) for TODOs which must be done in this branch

  Also deprecates a pair of TODOs to TODO2, and accepts the flow of the
  Signer having the Eventuality.

* Correct errors in potential/future flow descriptions

* Accept having a single Plan Vec

  Per the following code consuming it, there's no benefit to bifurcating it
  by key.

* Only issue sign_transaction on boot for the proper signer

* Only set keys when participating in their construction

* Misc progress

  Only send SubstrateBlockAck when we have a signer, as it's only used to
  tell the Tributary what Plans are being signed in response to this block.

  Only immediately sets substrate_signer if the session is 0.

  On boot, doesn't panic if we don't have an active key (as we wouldn't if
  only joining the next multisig). Continues instead.

* Correctly detect and set the retirement block

  Modifies the retirement block from the first block meeting requirements to
  the block CONFIRMATIONS after. Adds an ack flow to the Scanner's Confirmed
  event and Block event to accomplish this, which may deadlock at this time
  (will be fixed shortly).

  Removes an invalid await (after a point declared unsafe to use await) from
  MultisigsManager::next_event.

* Remove deadlock in multisig_completed and document the alternative

  The alternative is simpler, albeit less efficient. There's no reason to
  adopt it now, yet perhaps it'd benefit modeling?

* Handle the final step of retirement, dropping the old key and setting new
  to existing

* Remove TODO about emitting a Block on every step

  If we emit on NewAsChange, we lose the purpose of the NewAsChange period.
  The only concern is if we reach ClosingExisting and nothing has happened;
  then all coins will still be in the old multisig until something finally
  does. This isn't a problem worth solving, as it's latency under exceptional
  dead time.
* Add TODO about potentially not emitting a Block event for the retirement
  block

* Restore accidentally deleted CI file

* Pair of slight tweaks

* Add missing if statement

* Disable an assertion when testing

  One of the test flows currently abuses the Scanner in a way that triggers
  it.
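The n-block window argued for above can be expressed as a simple invariant. This is an illustrative sketch only (mini models the actual race conditions with loom); the names here are hypothetical, not the processor's API:

```rust
// Hypothetical sketch of the n-block scanning window: the scanner may
// process up to `window` blocks beyond the last block Serai has
// acknowledged, and must otherwise wait for an acknowledgement (which
// shifts the window forward).
fn may_scan(block: u64, last_acknowledged: u64, window: u64) -> bool {
  block <= last_acknowledged + window
}

fn main() {
  // With a window of 5, block 10 may be scanned once block 5 is
  // acknowledged, yet block 11 may not.
  assert!(may_scan(10, 5, 5));
  assert!(!may_scan(11, 5, 5));
}
```

With `window = 1`, this degenerates into the naive publish-then-scan solution, which doesn't take advantage of throughput.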
# Multisig Rotation
Substrate is expected to determine when a new validator set instance will be created, and with it, a new multisig. Upon the successful creation of a new multisig, as determined by the new multisig setting their key pair on Substrate, rotation begins.
## Timeline
The following timeline is established:
1) The new multisig is created, and has its keys set on Serai. Once the next
   `Batch` with a new external network block is published, its block becomes
   the "queue block". The new multisig is set to activate at the "queue
   block", plus `CONFIRMATIONS` blocks (the "activation block").

   We don't use the last `Batch`'s external network block, as that `Batch`
   may be older than `CONFIRMATIONS` blocks. Any
   yet-to-be-included-and-finalized `Batch` will be within `CONFIRMATIONS`
   blocks of what any processor has scanned however, as it'll wait for
   inclusion and finalization before continuing scanning.
2) Once the "activation block" itself has been finalized on Serai, UIs should
   start exclusively using the new multisig. If the "activation block" isn't
   finalized within `2 * CONFIRMATIONS` blocks, UIs should stop making
   transactions to any multisig on that network.

   Waiting for Serai's finalization prevents a UI from using an unfinalized
   "activation block" before a re-organization to a shorter chain. If a
   transaction to Serai was carried from the unfinalized "activation block"
   to the shorter chain, it'd no longer be after the "activation block" and
   accordingly would be ignored.

   We could, instead of waiting for Serai to finalize the block, wait for
   the block to have `CONFIRMATIONS` confirmations. This would prevent
   needing to wait an indeterminate amount of time for Serai to finalize the
   "activation block", with the knowledge it should be finalized. Doing so
   would open UIs to eclipse attacks however, where they live on an
   alternate chain where a possible "activation block" is finalized, yet
   Serai finalizes a distinct "activation block". If the alternate chain was
   longer than the finalized chain, the above issue would be reopened.

   The reason for UIs stopping under abnormal behavior is as follows. Given
   a sufficiently delayed `Batch` for the "activation block", UIs will use
   the old multisig past the point it will be deprecated. Accordingly, UIs
   must realize when `Batch`s are so delayed that continued transactions are
   a risk. While `2 * CONFIRMATIONS` is presumably well within the 6 hour
   period (defined below), that period exists for low-fee transactions at
   times of congestion. It does not exist for UIs with old state, though it
   can be used to compensate for them (reducing the tolerance for inclusion
   delays). `2 * CONFIRMATIONS` is before the 6 hour period is enacted,
   preserving the tolerance for inclusion delays, yet should still only be
   reached under highly abnormal circumstances.

   In order to minimize the time it takes for the "activation block" to be
   finalized, a `Batch` will always be created for it, regardless of if it
   would otherwise have a `Batch` created.
3) The prior multisig continues handling `Batch`s and `Burn`s for
   `CONFIRMATIONS` blocks, plus 10 minutes, after the "activation block".

   The first `CONFIRMATIONS` blocks are due to the fact the new multisig
   shouldn't actually be sent coins during this period, making it
   irrelevant. If coins are prematurely sent to the new multisig, they're
   artificially delayed until the end of the `CONFIRMATIONS` blocks plus
   10 minutes period. This prevents an adversary from minting Serai tokens
   using coins in the new multisig, yet then burning them to drain the prior
   multisig, creating a lack of liquidity for several blocks.

   The reason for the 10 minutes is to provide grace to honest UIs. Since
   UIs will wait until Serai confirms the "activation block" for keys before
   sending to them, which will take `CONFIRMATIONS` blocks plus some
   latency, UIs would make transactions to the prior multisig past the end
   of this period if it was `CONFIRMATIONS` blocks alone. Since the next
   period is `CONFIRMATIONS` blocks, which is how long transactions take to
   confirm, transactions made past the end of this period would only be
   received after the next period. After the next period, the prior multisig
   adds fees and a delay to all received funds (as it forwards the funds
   from itself to the new multisig). The 10 minutes provides grace for
   latency.

   The 10 minutes is a delay on anyone who immediately transitions to the
   new multisig, in a no-latency environment, yet the delay is preferable to
   fees from forwarding. The effective delay should also be less than
   10 minutes thanks to various latencies.
4) The prior multisig continues handling `Batch`s and `Burn`s for another
   `CONFIRMATIONS` blocks.

   This is for two reasons:

   - Coins sent to the new multisig still need time to gain sufficient
     confirmations.
   - All outputs belonging to the prior multisig should become available
     within `CONFIRMATIONS` blocks.

   All `Burn`s handled during this period should use the new multisig for
   the change address. This should effect a transfer of most outputs.

   With the expected transfer of most outputs, and the new multisig
   receiving new external transactions, the new multisig takes the
   responsibility of signing all unhandled and newly emitted `Burn`s.
5) For the next 6 hours, all non-`Branch` outputs received are immediately
   forwarded to the new multisig. Only external transactions to the new
   multisig are included in `Batch`s.

   The new multisig infers the `InInstruction`, and refund address, for
   forwarded `External` outputs by reading what they were for the original
   `External` output.

   Alternatively, the `InInstruction`, with refund address explicitly
   included, could be included in the forwarding transaction. This may fail
   if the `InInstruction` omitted the refund address and is too large to fit
   in a transaction with one explicitly included. On such failure, the
   refund would be immediately issued instead.
6) Once the 6 hour period has expired, the prior multisig stops handling
   outputs it didn't itself create. Any remaining `Eventuality`s are
   completed, and any available/freshly available outputs are forwarded
   (creating new `Eventuality`s which also need to successfully resolve).

   Once the 6 hour period has expired, no `Eventuality`s remain, and all
   outputs are forwarded, the multisig publishes a final `Batch` of the
   first block, plus `CONFIRMATIONS`, which met these conditions, regardless
   of if it would've otherwise had a `Batch`. Then, it reports to Substrate
   that it has closed. No further actions by it, nor its validators, are
   expected (unless those validators remain present in the new multisig).
7) The new multisig confirms all transactions from all prior multisigs were
   made as expected, including the reported `Batch`s.

   Unfortunately, we cannot solely check the immediately prior multisig due
   to the ability for two sequential malicious multisigs to steal. If
   multisig `n - 2` only transfers a fraction of its coins to multisig
   `n - 1`, multisig `n - 1` can 'honestly' operate on the dishonest state
   it was given, laundering it. This would let multisig `n - 1` forward the
   results of its as-expected operations, from a dishonest starting point,
   to the new multisig, and multisig `n` would attest to multisig `n - 1`'s
   expected (and therefore presumed honest) operations, assuming liability.
   This would cause an honest multisig to face full liability for the
   invalid state, causing it to be fully slashed (as needed to reacquire any
   lost coins).

   This would appear short-circuitable if multisig `n - 1` transfers coins
   exceeding the relevant Serai tokens' supply. Serai never expects to
   operate in an over-solvent state, yet balance should trend upwards due to
   a flat fee applied to each received output (preventing a griefing
   attack). Any balance greater than the tokens' supply may have had funds
   skimmed off the top, yet it'd still guarantee the solvency of Serai
   without any additional fees passed to users. Unfortunately, due to the
   requirement to verify the `Batch`s published (as else the Serai tokens'
   supply may be manipulated), this cannot actually be achieved (at least,
   not without a ZK proof the published `Batch`s were correct).
8) The new multisig reports a successful close of the prior multisig, and
   becomes the sole multisig with full responsibilities.
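The period boundaries above can be summarized in a sketch. This is illustrative only, not the processor's actual API: the names and constants are assumptions (`CONFIRMATIONS` is network-specific, and the 10-minute and 6-hour periods are converted to block counts here purely for simplicity, e.g. one Bitcoin block per 10 minutes):

```rust
// Illustrative constants; real values depend on the external network.
const CONFIRMATIONS: u64 = 6;
const TEN_MINUTES_IN_BLOCKS: u64 = 1;
const SIX_HOURS_IN_BLOCKS: u64 = 36;

#[derive(Debug, PartialEq)]
enum Phase {
  // Steps 1-3: the prior multisig handles all `Batch`s and `Burn`s.
  PriorHandlesAll,
  // Step 4: the prior multisig still handles `Batch`s/`Burn`s, yet `Burn`s
  // use the new multisig for the change address.
  NewAsChange,
  // Step 5: non-`Branch` outputs to the prior multisig are forwarded.
  ForwardingExternal,
  // Step 6: the prior multisig only completes remaining `Eventuality`s.
  ClosingPrior,
}

// `block` is the external network block being handled; `activation` is the
// "activation block" (the "queue block" plus `CONFIRMATIONS`).
fn phase(block: u64, activation: u64) -> Phase {
  let grace_end = activation + CONFIRMATIONS + TEN_MINUTES_IN_BLOCKS;
  let change_end = grace_end + CONFIRMATIONS;
  let forward_end = change_end + SIX_HOURS_IN_BLOCKS;
  if block < grace_end {
    Phase::PriorHandlesAll
  } else if block < change_end {
    Phase::NewAsChange
  } else if block < forward_end {
    Phase::ForwardingExternal
  } else {
    Phase::ClosingPrior
  }
}

fn main() {
  // With an activation block of 100, the prior multisig handles everything
  // through block 106, uses the new multisig as change from 107, forwards
  // from 113, and closes from 149.
  assert_eq!(phase(106, 100), Phase::PriorHandlesAll);
  assert_eq!(phase(107, 100), Phase::NewAsChange);
  assert_eq!(phase(113, 100), Phase::ForwardingExternal);
  assert_eq!(phase(149, 100), Phase::ClosingPrior);
}
```

Measuring the 10-minute and 6-hour periods in blocks, as done here, mirrors the document's note that periods are ultimately evaluated against the external network's blocks.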
## Latency and Fees

Slightly before the end of step 3, the new multisig should start receiving new
external outputs. These won't be confirmed for another `CONFIRMATIONS` blocks,
and the new multisig won't start handling `Burn`s for another `CONFIRMATIONS`
blocks plus 10 minutes. Accordingly, the new multisig should only become
responsible for `Burn`s shortly after it has taken ownership of the stream of
newly received coins.

Before it takes responsibility, it also should've been transferred all internal
outputs under the standard scheduling flow. Any delayed outputs will be
immediately forwarded, and external stragglers are only reported to Serai once
sufficiently confirmed in the new multisig. Accordingly, liquidity should avoid
fragmentation during rotation. The only latency should be on the 10 minutes
present, and on delayed outputs, which should've been immediately usable,
having to wait another `CONFIRMATIONS` blocks to be confirmed once forwarded.
Immediate forwarding does unfortunately prevent batching inputs to reduce
fees. Since immediate forwarding only applies to latent outputs, which are
considered exceptional, and the protocol's fee handling ensures solvency, this
is accepted.
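The forwarding rule this trade-off concerns (step 5 of the timeline) can be sketched as follows. The types are hypothetical, not the processor's actual ones:

```rust
// Hypothetical output classification, mirroring the document's terminology.
#[derive(Debug, PartialEq)]
enum OutputKind {
  External, // an external transaction to the multisig
  Branch,   // an output fulfilling a scheduled branch
  Change,   // change back to the multisig
}

// During the 6-hour forwarding period, the prior multisig immediately
// forwards any non-`Branch` output it receives to the new multisig.
// `Branch` outputs instead continue fulfilling their scheduled branches
// (and only `Branch` outputs may fulfill branches).
fn forward_immediately(kind: &OutputKind, in_forwarding_period: bool) -> bool {
  in_forwarding_period && !matches!(kind, OutputKind::Branch)
}

fn main() {
  assert!(forward_immediately(&OutputKind::External, true));
  assert!(!forward_immediately(&OutputKind::Branch, true));
  assert!(!forward_immediately(&OutputKind::External, false));
}
```

Forwarding one output at a time, rather than batching, is exactly the fee cost the paragraph above accepts for exceptional, latent outputs.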