* Remove NetworkId from processor-messages
Because the intent binds to the sender/receiver, the NetworkId isn't needed as part of the intent.
The processor knows what the network is.
The coordinator knows which to use because it's sending this message to the
processor for that network.
Also removes the unused zeroize.
* ProcessorMessage::Completed use Session instead of key
* Move SubstrateSignId to Session
* Finish replacing key with session
* Add a function to deterministically decide which Serai blocks should be co-signed
Has a 5 minute latency between co-signs, also used as the maximal latency
before a co-sign is started.
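A minimal sketch of the idea, assuming a hypothetical 5-minute constant and access to block timestamps (the actual selection logic may differ):

```rust
// Hypothetical sketch: decide whether a block should be cosigned, given the
// timestamp of the last cosigned block. All names here are illustrative.
const COSIGN_DISTANCE_SECS: u64 = 5 * 60; // 5 minutes between cosigns

fn should_cosign(block_timestamp: u64, last_cosign_timestamp: u64) -> bool {
  // Deterministic: every node evaluating the same chain reaches the same answer.
  block_timestamp >= last_cosign_timestamp + COSIGN_DISTANCE_SECS
}
```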
* Get all active tributaries we're in at a specific block
* Add and route CosignSubstrateBlock, a new provided TX
* Split queued cosigns per network
* Rename BatchSignId to SubstrateSignId
* Add SubstrateSignableId, a meta-type for either Batch or Block, and modularize around it
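Roughly, the meta-type can be pictured as a small enum (illustrative only; the actual variants and field types may differ):

```rust
// Illustrative sketch of a meta-type over the two things signable on Substrate's
// behalf: a Batch or a Substrate block to cosign. Field types are assumptions.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum SubstrateSignableId {
  CosigningSubstrateBlock([u8; 32]), // hash of the block to cosign
  Batch(u32),                        // ID of the Batch to sign
}
```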
* Handle the CosignSubstrateBlock provided TX
* Revert substrate_signer.rs to develop (and patch to still work)
Due to SubstrateSigner moving when the prior multisig closes, yet cosigning
occurring with the most recent key, a single SubstrateSigner can be reused.
We could manage multiple SubstrateSigners, yet considering the much lower
specifications for cosigning, I'd rather treat it distinctly.
* Route cosigning through the processor
* Add note to rename SubstrateSigner post-PR
I don't want to do so now in order to preserve the diff's clarity.
* Implement cosign evaluation into the coordinator
* Get tests to compile
* Bug fixes, mark blocks without cosigners available as cosigned
* Correct the ID Batch preprocesses are saved under, add log statements
* Create a dedicated function to handle cosigns
* Correct the flow around Batch verification/queueing
Verifying `Batch`s could stall when a `Batch` was signed before its
predecessors/before the block it's contained in was cosigned (the latter being
inevitable as we can't sign a block containing a signed batch before signing
the batch).
Now, Batch verification happens on a distinct async task in order to not block
the handling of processor messages. This task is the sole caller of verify in
order to ensure last_verified_batch isn't unexpectedly mutated.
When the processor message handler needs to access it, or needs to queue a
Batch, it associates the DB TXN with a lock preventing the other task from
doing so.
This lock, as currently implemented, is a poor and inefficient design. It
should be modified to the pattern used for cosign management. Additionally, a
new primitive of a DB-backed channel may be immensely valuable.
Fixes a standing potential deadlock and a deadlock introduced with the
cosigning protocol.
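As a rough illustration of what such a DB-backed channel could look like (purely a sketch over an assumed key-value transaction API, not the repo's actual types):

```rust
// Sketch of a DB-backed channel: messages are persisted under ("msg", index)
// keys, with "head"/"tail" counters tracking the unread range, so the channel
// survives restarts. `KvTxn` is an assumed key-value abstraction.
trait KvTxn {
  fn get(&self, key: &[u8]) -> Option<Vec<u8>>;
  fn put(&mut self, key: &[u8], value: &[u8]);
}

fn counter(txn: &impl KvTxn, key: &[u8]) -> u64 {
  txn.get(key).map_or(0, |bytes| u64::from_le_bytes(bytes.try_into().unwrap()))
}

fn send(txn: &mut impl KvTxn, msg: &[u8]) {
  let tail = counter(&*txn, b"tail");
  txn.put(&[&b"msg"[..], &tail.to_le_bytes()[..]].concat(), msg);
  txn.put(b"tail", &(tail + 1).to_le_bytes());
}

fn recv(txn: &mut impl KvTxn) -> Option<Vec<u8>> {
  let head = counter(&*txn, b"head");
  if head == counter(&*txn, b"tail") {
    return None;
  }
  let msg = txn.get(&[&b"msg"[..], &head.to_le_bytes()[..]].concat());
  txn.put(b"head", &(head + 1).to_le_bytes());
  msg
}
```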
* Working full-stack tests
After the last commit, this only required extending a timeout.
* Replace "co-sign" with "cosign" to make finding text easier
* Update the coordinator tests to support cosigning
* Inline prior_batch calculation to prevent panic on rotation
Noticed when doing a final review of the branch.
The higher-level scanner code in multisigs/mod.rs now creates a series of plans
with limited context. These include forwarding and refunding plans, moving all
handling of forwarding flags onto the scanner's clock and therefore making it safe.
Also simplifies the refunding a decent bit.
The prior system spawned a new connection per request to enable parallelism,
yet kept hitting hyper::IncompleteMessages which I couldn't track down. This
attempts to resolve those by using a long-lived socket.
It halves the number of requests per authenticated RPC call, and accordingly is
likely still better overall.
I don't believe this is fully resolved yet, but it's still worth pushing.
* Update the coordinator to give key shares based on weight, not based on existence
Participants are now identified by their starting index. While this compiles,
the following is unimplemented:
1) A conversion for DKG `i` values. It assumes the threshold `i` values used
will be identical for the MuSig signature used to confirm the DKG.
2) Expansion from compressed values to full values before forwarding to the
processor.
* Add a fn to the DkgConfirmer to convert `i` values as needed
Also removes TODOs regarding Serai ensuring validator key uniqueness +
validity. The current infra achieves both.
* Have the Tributary DB track participation by shares, not by count
* Prevent a node from obtaining 34% of the maximum amount of key shares
This is actually mainly intended to set a bound on message sizes in the
coordinator. Message sizes are amplified by the amount of key shares held, so
setting an upper bound on said amount lets it determine constants. While that
upper bound could be 150, that'd be unreasonable and increase the potential for
DoS attacks.
* Correct the mechanism to detect if sufficient accumulation has occurred
It used to check if the latest accumulation hit the required threshold. Now,
accumulations may jump past the required threshold. The required mechanism is
to check the threshold wasn't prior met and is now met.
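In other words, the check is on crossing the threshold, not on the latest total alone. A minimal sketch with illustrative names:

```rust
// Sketch: sufficient accumulation is detected when the threshold is newly
// crossed, even if the latest accumulation jumps well past it.
fn newly_reached_threshold(prior_shares: u64, added_shares: u64, threshold: u64) -> bool {
  (prior_shares < threshold) && (prior_shares + added_shares >= threshold)
}
```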
* Finish updating the coordinator to handle a multiple key share per validator environment
* Adjust strategy re: preventing nonce reuse in DKG Confirmer
* Add TODOs regarding dropped transactions, add possible TODO fix
* Update tests/coordinator
This doesn't add new multi-key-share tests, it solely updates the existing
single key-share tests to compile and run, with the necessary fixes to the
coordinator.
* Update processor key_gen to handle generating multiple key shares at once
* Update SubstrateSigner
* Update signer, clippy
* Update processor tests
* Update processor docker tests
Implements most of #297 to the point I'm fine closing it. The solution
implemented is distinct from the one originally designed, yet much simpler.
Since we have a fully-linear view of created transactions, we don't have to
per-output track operating costs incurred by that output. We can track it
across the entire Serai system, without hooking into the Eventuality system.
Also updates documentation.
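A rough sketch of the system-wide accounting this enables (names are hypothetical; the actual bookkeeping lives alongside the scheduler and DB):

```rust
// Sketch: a single running tally of operating costs (fees Serai incurs),
// amortized over future payments, instead of tracking costs per-output.
struct OperatingCosts(u64);

impl OperatingCosts {
  // Fees incurred when creating a transaction add to the tally.
  fn incur(&mut self, fee: u64) {
    self.0 += fee;
  }
  // Each payment out can absorb a portion of the accumulated costs.
  fn amortize(&mut self, payment_amount: &mut u64, max_share: u64) {
    let taken = max_share.min(self.0).min(*payment_amount);
    *payment_amount -= taken;
    self.0 -= taken;
  }
}
```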
Replaces plan IDs with key + ID, letting the coordinator determine the sessions
for the plans.
Properly scopes which plan IDs are set on which tributaries, and ensures we
have the necessary tributaries at time of handling.
Renames Update to SignedBatch.
Checks Batch equality via a hash of the InInstructions. That prevents needing
to keep the Batch in node state or TX introspect.
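For illustration, equality can be checked against a digest of the encoded InInstructions rather than the full Batch (a sketch; the hash function and encoding actually used are assumptions here):

```rust
// Sketch: compare Batches by a hash of their encoded InInstructions, so the
// full Batch needn't be kept in node state. BLAKE2 and the length-prefixed
// encoding are assumptions, not necessarily what the repo uses.
use blake2::{Blake2b512, Digest};

fn in_instructions_hash(encoded_instructions: &[Vec<u8>]) -> Vec<u8> {
  let mut hasher = Blake2b512::new();
  for instruction in encoded_instructions {
    hasher.update((instruction.len() as u32).to_le_bytes());
    hasher.update(instruction);
  }
  hasher.finalize().to_vec()
}
```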
* Design and document a multisig rotation flow
* Make Scanner::eventualities a HashMap so it's per-key
* Don't drop eventualities, always follow through on them
Technical improvements made along the way.
* Start creating an isolated object to manage multisigs, which doesn't require being a signer
Removes key from SubstrateBlock.
* Move Scanner/Scheduler under multisigs
* Move Batch construction into MultisigManager
* Clarify "should" in Multisig Rotation docs
* Add block_number to MultisigManager, as it controls the scanner
* Move sign_plans into MultisigManager
Removes ThresholdKeys from prepare_send.
* Make SubstrateMutable an alias for MultisigManager
* Rewrite Multisig Rotation
The prior scheme had an exploit possible where funds were sent to the old
multisig, then burnt on Serai to send from the new multisig, locking liquidity
for 6 hours. While a fee could be applied to stragglers to make this attack
unprofitable, the newly described scheme avoids all of this.
* Add mini
mini is a miniature version of Serai, emphasizing Serai's nature as a
collection of independent clocks. The intended use is to identify race
conditions and prove protocols are comprehensive regarding when certain clocks
tick.
This uses loom, a prior candidate for evaluating the processor/coordinator as
free of race conditions (#361).
* Use mini to prove a race condition in the current multisig rotation docs, and prove safety of alternatives
Technically, the prior commit had mini prove the race condition.
The docs currently say the activation block of the new multisig is the block
after the next Batch's. If the next two Batches had already entered the
mempool prior to set_keys being called, the second of them would be expected to
contain the new key's data, yet would fail to, as the key wasn't public when
the Batch was actually created.
The naive solution is to create a Batch, publish it, wait until it's included,
and only then scan the next block. This sets a bound of
`Batch publication time < block time`. Optimistically, we can publish a Batch
in 24s while our shortest block time is 2m. Accordingly, we should be fine with
the naive solution which doesn't take advantage of throughput. #333 may
significantly change latency however and require an algorithm whose throughput
exceeds the rate of blocks created.
In order to re-introduce parallelization, enabling throughput, we need to
define a safe range of blocks to scan without Serai ordering the first one.
mini demonstrates safety of scanning n blocks Serai hasn't acknowledged, so
long as the first is scanned before block n+1 is (shifting the n-block window).
The docs will be updated next, to reflect this.
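A minimal sketch of the resulting window check (names illustrative):

```rust
// Sketch: block `candidate` may be scanned without Serai having acknowledged
// it, so long as it stays within `window` blocks of the last acknowledged one.
fn safe_to_scan(candidate: u64, last_acknowledged: u64, window: u64) -> bool {
  candidate <= last_acknowledged + window
}
```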
* Fix Multisig Rotation
I believe this is finally good enough to be final.
1) Fixes the race condition present in the prior document, as demonstrated by
mini.
`Batch`s for blocks `n` and `n+1` may have been in the mempool when a
multisig's activation block was set to `n`. This would cause a potentially
distinct `Batch` for `n+1`, despite `n+1` already having a signed `Batch`.
2) Tightens when UIs should use the new multisig to prevent eclipse attacks,
and protection against `Batch` publication delays.
3) Removes liquidity fragmentation by tightening flow/handling of latency.
4) Several clarifications and documentation of reasoning.
5) Correction of "prior multisig" to "all prior multisigs" regarding historical
verification, with explanation why.
* Clarify terminology in mini
Synchronizes it from my original thoughts on potential schema to the design
actually created.
* Remove most of processor's README for a reference to docs/processor
This does drop some misc commentary, though none too beneficial. The section on
scanning, deemed sufficiently beneficial, has been moved to a document and
expanded on.
* Update scanner TODOs in line with new docs
* Correct documentation on Bitcoin::Block::time, and Block::time
* Make the scanner in MultisigManager no longer public
* Always send ConfirmKeyPair, regardless of if in-set
* Cargo.lock changes from a prior commit
* Add a policy document on defining a Canonical Chain
I accidentally committed a version of this with a few headers earlier, and this
is a proper version.
* Competent MultisigManager::new
* Update processor's comments
* Add mini to copied files
* Re-organize Scanner per multisig rotation document
* Add RUST_LOG trace targets to e2e tests
* Have the scanner wait once it gets too far ahead
Also bug fixes.
* Add activation blocks to the scanner
* Split received outputs into existing/new in MultisigManager
* Select the proper scheduler
* Schedule multisig activation as detailed in documentation
* Have the Coordinator assert if multiple `Batch`s occur within a block
While the processor used to have ack_up_to_block, enabling skips in the block
acked, support for this was removed while reworking it for multiple multisigs.
It should happen extremely infrequently.
While it would still be beneficial to have if multiple `Batch`s could occur
within a block, multiple `Batch`s were blocked for DoS reasons (with the
complexity here not being worth adding that ban as a policy).
* Schedule payments to the proper multisig
* Correct >= to <
* Use the new multisig's key for change on schedule
* Don't report External TXs to prior multisig once deprecated
* Forward from the old multisig to the new one at all opportunities
* Move unfulfilled payments in queue from prior to new multisig
* Create MultisigsDb, splitting it out of MainDb
Drops the call to finish_signing from the Signer. While this will cause endless
re-attempts, the Signer will still consider them completed and drop them,
making this an O(n) cost at boot even if we did nothing from here.
The MultisigManager should call finish_signing once the Scanner completes the
Eventuality.
* Don't check Scanner-emitted completions, trust they are completions
Prevents needing to use async code to mark the completion and creates a
fault-free model. The current model, on fault, would cause a lack of marked
completion in the signer.
* Fix a possible panic in the processor
A shorter-chain reorg could cause this assert to trip. It's fixed by
de-duplicating the data, as the assertion checked consistency. Without the
potential for inconsistency, it's unnecessary.
* Document why an existing TODO isn't valid
* Change when we drop payments for being to the change address
The earlier timing prevents creating Plans solely to the branch address, which
would cause the payments to be dropped and the TX to become an effective
aggregation TX.
* Extensively document solutions to Eventualities being potentially created after having already scanned their resolutions
* When closing, drop External/Branch outputs which don't cause progress
* Properly decide if Change outputs should be forwarded or not when closing
This completes all code needed to make the old multisig have a finite lifetime.
* Commentary on forwarding schemes
* Provide a 1 block window, with liquidity fragmentation risks, due to latency
On Bitcoin, this will be 10 minutes for the relevant Batch to be confirmed. On
Monero, 2 minutes. On Ethereum, ~6 minutes.
Also updates the Multisig Rotation document with the new forwarding plan.
* Implement transaction forwarding from old multisig to new multisig
Identifies a fault where Branch outputs which shouldn't be dropped may be, if
another output fulfills their next step. Locking Branch fulfillment down to
only Branch outputs is not done in this commit, but will be in the next.
* Only let Branch outputs fulfill branches
* Update TODOs
* Move the location of handling signer events to avoid a race condition
* Avoid a deadlock by using a RwLock on a single txn instead of two txns
* Move Batch ID out of the Scanner
* Increase from one block of latency on new keys activation to two
For Monero, this offered just two minutes when our latency to publish a Batch
is around a minute already. This does increase the time our liquidity can be
fragmented by up to 20 minutes (Bitcoin), yet it's a stupid attack only
possible once a week (when we rotate). Prioritizing normal users' transactions
not being subject to forwarding is more important here.
Ideally, we'd not do +2 blocks but rather +`time`, such as +10 minutes, making
this agnostic of the underlying network's block scheduling. This is a
complexity not worth it.
* Split MultisigManager::substrate_block into multiple functions
* Further tweaks to substrate_block
* Acquire a lock on all Scanner operations after calling ack_block
Gives time to call register_eventuality and initiate signing.
* Merge sign_plans into substrate_block
Also ensure the Scanner's lock isn't prematurely released.
* Use a HashMap to pass to-be-forwarded instructions, not the DB
* Successfully determine in ClosingExisting
* Move from 2 blocks of latency when rotating to 10 minutes
Superior as noted in 6d07af92ce10cfd74c17eb3400368b0150eb36d7, now trivial to
implement thanks to the prior commit.
* Add note justifying measuring time in blocks when rotating
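As a rough sketch of how such a wall-clock window can be mapped onto a block count when time is measured in blocks (constants and rounding here are assumptions):

```rust
// Sketch: convert a wall-clock rotation window into a block count using the
// network's target block time. The constants here are illustrative.
const ROTATION_WINDOW_SECS: u64 = 10 * 60;

fn rotation_window_blocks(target_block_time_secs: u64) -> u64 {
  // Round up so the window never falls short of the intended duration.
  (ROTATION_WINDOW_SECS + target_block_time_secs - 1) / target_block_time_secs
}
```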
* Implement delaying of outputs received early to the new multisig per specification
* Documentation on why Branch outputs don't have the race condition concerns Change do
Also ensures 6 hours is at least N::CONFIRMATIONS, for sanity purposes.
* Remove TODO re: sanity checking Eventualities
We sanity check the Plan the Eventuality is derived from, and the Eventuality
is handled moments later (in the same file, with a clear call path). There's no
reason to add such APIs to Eventualities for a sanity check given that.
* Add TODO(now) for TODOs which must be done in this branch
Also deprecates a pair of TODOs to TODO2, and accepts the flow of the Signer
having the Eventuality.
* Correct errors in potential/future flow descriptions
* Accept having a single Plan Vec
Per the following code consuming it, there's no benefit to bifurcating it by
key.
* Only issue sign_transaction on boot for the proper signer
* Only set keys when participating in their construction
* Misc progress
Only send SubstrateBlockAck when we have a signer, as it's only used to tell
the Tributary of what Plans are being signed in response to this block.
Only immediately sets substrate_signer if session is 0.
On boot, doesn't panic if we don't have an active key (as we wouldn't if only
joining the next multisig). Continues.
* Correctly detect and set retirement block
Modifies the retirement block from the first block meeting requirements to the
block CONFIRMATIONS after.
Adds an ack flow to the Scanner's Confirmed event and Block event to accomplish
this, which may deadlock at this time (will be fixed shortly).
Removes an invalid await (after a point declared unsafe to use await) from
MultisigsManager::next_event.
* Remove deadlock in multisig_completed and document alternative
The alternative is simpler, albeit less efficient. There's no reason to adopt
it now, yet perhaps if it benefits modeling?
* Handle the final step of retirement, dropping the old key and setting new to existing
* Remove TODO about emitting a Block on every step
If we emit on NewAsChange, we lose the purpose of the NewAsChange period.
The only concern is if we reach ClosingExisting, and nothing has happened, then
all coins will still be in the old multisig until something finally does. This
isn't a problem worth solving, as it's latency under exceptional dead time.
* Add TODO about potentially not emitting a Block event for the retirement block
* Restore accidentally deleted CI file
* Pair of slight tweaks
* Add missing if statement
* Disable an assertion when testing
One of the test flows currently abuses the Scanner in a way triggering it.
A commit made while testing moved them from network-key-indexed to
Substrate-key-indexed. Since Substrate keys have a fixed length, fitting within
the Copy boundary, there's no reason not to use an array.
* Restrict Batch size to ~25 KB
* Add a Batch size check to the node
* Rate-limit Batches to 1 per Serai block
* Add support for multiple Batches per block
* Fix review comments
* Misc fixes
Doesn't yet update tests/processor until data flow is inspected.
* Move the block from SignId to ProcessorMessage::BatchPreprocesses
* Misc clean up
---------
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
By default, tokio-spawned worker panics will only kill the task, not the
program. Due to our extensive use of panicking on invariants, we should ensure
the program exits.
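A common way to achieve this, sketched below (not necessarily the exact mechanism used), is a panic hook which aborts the process after printing the usual message:

```rust
// Sketch: ensure any panic, including one inside a tokio-spawned task, takes
// down the entire process rather than just that task.
fn set_panic_abort_hook() {
  let default_hook = std::panic::take_hook();
  std::panic::set_hook(Box::new(move |panic_info| {
    // Print the standard panic message first, then abort the whole program.
    default_hook(panic_info);
    std::process::abort();
  }));
}
```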
The Processor's coins folder referred to the networks it could process, as did
its Coin trait. This, and other similar cases throughout the codebase, have now
been corrected.
Also corrects dated documentation on how a key pair is confirmed under the
validator-sets pallet.
This is a horrible impl which does a full ser of everything on every change.
It's just the minimal changes to resolve this TODO and enable testnet deployment.
It waited for CONFIRMATIONS + 1 confirmations, instead of CONFIRMATIONS
confirmations.
Also adds a lib interface to access the coin traits and their constants.
When we receive messages, we're provided with a message ID we can use to
prevent handling an item multiple times. That doesn't prevent us from *sending*
an item multiple times though. Thanks to the UID system, we can now not send if
already present.
Alternatively, we can remove the ordered message ID for just the UID, allowing
duplicates to be sent without issue, and handled on the receiving end.
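Conceptually (a sketch with assumed names, not the actual API), sending becomes conditional on the UID not having been recorded yet:

```rust
use std::collections::HashSet;

// Sketch: only send a message if its UID hasn't been sent before. In practice
// the sent-UID set would be persisted in the DB, not held in memory.
fn send_if_new(sent: &mut HashSet<[u8; 32]>, uid: [u8; 32], send: impl FnOnce()) {
  // `insert` returns true only the first time this UID is seen.
  if sent.insert(uid) {
    send();
  }
}
```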
When a Substrate block occurs, the coordinator is expected to emit
SubstrateBlock. This causes the processor to begin a variety of plans. The
processor now emits SubstrateBlockAck, explicitly listing all plan IDs, before
starting signing.
This lets the coordinator provide a SubstrateBlock transaction, and with it,
recognize all plan IDs as valid.
Prior, we would've had to have a spotty algorithm based upon the upcoming
Preprocess messages, or, if we immediately provided the SubstrateBlock
transaction, wait for the processor to inform us of the contained plans.
This creates an explicitly proper async flow not reliant on waiting for data
availability.
Alternatively, we could've replaced Preprocess with (Block, Vec<Preprocess>).
This would've been more efficient, yet also clunky due to the multiple usages
of the Preprocess message.
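Schematically, the round trip looks like the following (message shapes are illustrative, not the actual definitions):

```rust
// Sketch of the flow: coordinator -> processor -> coordinator. The real message
// enums carry more fields; these shapes are illustrative only.
enum CoordinatorMessage {
  // The coordinator informs the processor a Substrate block occurred.
  SubstrateBlock { block: u64 },
}

enum ProcessorMessage {
  // The processor acknowledges the block, listing every plan it'll sign for it,
  // before starting signing. The coordinator can then provide the
  // SubstrateBlock transaction and recognize all listed plan IDs as valid.
  SubstrateBlockAck { block: u64, plans: Vec<[u8; 32]> },
}
```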
[0; 32] is a magic value signifying that no block has been set yet, due to this
being the first key pair. If [0; 32] is the latest finalized block, the
processor determines an activation block based on timestamps.
This doesn't use an Option for ergonomic reasons.
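Handling of the sentinel might look roughly like this (a sketch; the timestamp-based derivation is elided):

```rust
// Sketch: [0; 32] marks "no block set yet" for the first key pair. An Option
// wasn't used for ergonomic reasons, so the magic value is checked explicitly.
fn activation_block(
  latest_finalized: [u8; 32],
  from_timestamps: impl FnOnce() -> u64,
  from_block: impl FnOnce([u8; 32]) -> u64,
) -> u64 {
  if latest_finalized == [0u8; 32] {
    // First key pair: derive the activation block from timestamps.
    from_timestamps()
  } else {
    from_block(latest_finalized)
  }
}
```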
SubstrateBlock's provision of the most recently acknowledged block has
equivalent information with the same latency. Accordingly, there's no need for
it.