* Upstream GBP, divisor, circuit abstraction, and EC gadgets from FCMP++
* Initial eVRF implementation
Not quite done yet. It still needs to communicate the resulting points and
proofs so they can be extracted from the Pedersen Commitments and returned,
and it still needs to be tested.
* Add the openings of the PCs to the eVRF as necessary
* Add implementation of secq256k1
* Make DKG Encryption a bit more flexible
No longer requires the use of an EncryptionKeyMessage, and allows pre-defined
keys for encryption.
* Make NUM_BITS an argument for the field macro
* Have the eVRF take a Zeroizing private key
* Initial eVRF-based DKG
* Add embedwards25519 curve
* Inline the eVRF into the DKG library
Due to how we're handling share encryption, we'd either need two circuits or to
dedicate this circuit to the DKG. The latter makes sense at this time.
* Add documentation to the eVRF-based DKG
* Add paragraph claiming robustness
* Update to the new eVRF proof
* Finish routing the eVRF functionality
Still needs errors and serialization, along with a few other TODOs.
* Add initial eVRF DKG test
* Improve eVRF DKG
Updates how we calculate verification shares, improves performance when
extracting multiple sets of keys, and adds more to the test for it.
* Start using a proper error for the eVRF DKG
* Resolve various TODOs
Supports recovering multiple key shares from the eVRF DKG.
Inlines two loops to save 2**16 iterations.
Adds support for creating a constant-time representation of scalars < NUM_BITS
(see the sketch below).
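A minimal sketch of that last point, assuming the scalar is given as little-endian bytes and using `subtle`'s `Choice`; `NUM_BITS` and the function name here are hypothetical, not the crate's actual API:
```rust
use subtle::Choice;

// Hypothetical per-curve constant; in the library it's an argument to the field macro.
const NUM_BITS: usize = 255;

// Produce one Choice per bit position, touching every position regardless of the
// scalar's value, giving a constant-time representation for scalars < 2^NUM_BITS.
fn scalar_bits(scalar_le_bytes: &[u8]) -> Vec<Choice> {
  let mut bits = Vec::with_capacity(NUM_BITS);
  for i in 0 .. NUM_BITS {
    let byte = scalar_le_bytes[i / 8];
    bits.push(Choice::from((byte >> (i % 8)) & 1));
  }
  bits
}
```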
* Ban zero ECDH keys, document non-zero requirements
* Implement eVRF traits, all the way up to the DKG, for secp256k1/ed25519
* Add Ristretto eVRF trait impls
* Support participating multiple times in the eVRF DKG
* Only participate once per key, not once per key share
* Rewrite processor key-gen around the eVRF DKG
Still a WIP.
* Finish routing the new key gen in the processor
Doesn't touch the tests, coordinator, nor Substrate yet.
`cargo +nightly fmt && cargo +nightly-2024-07-01 clippy --all-features -p serai-processor`
does pass.
* Deduplicate and better document in processor key_gen
* Update serai-processor tests to the new key gen
* Correct amount of yx coefficients, get processor key gen test to pass
* Add embedded elliptic curve keys to Substrate
* Update processor key gen tests to the eVRF DKG
* Have set_keys take signature_participants, not removed_participants
No one is removed from the DKG anymore; only `t` participants publish the key,
however. Uses a BitVec for an efficient encoding of the participants (a minimal
sketch of the encoding follows).
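The sketch below shows the idea using plain byte packing rather than the actual BitVec type (function names hypothetical): one bit per validator, set if that validator contributed a signature.
```rust
// Pack "did validator i sign?" flags into bytes, one bit per validator.
fn encode_signature_participants(signed: &[bool]) -> Vec<u8> {
  let mut bytes = vec![0u8; (signed.len() + 7) / 8];
  for (i, &did_sign) in signed.iter().enumerate() {
    if did_sign {
      bytes[i / 8] |= 1u8 << (i % 8);
    }
  }
  bytes
}

// Check whether validator i is marked as a participant.
fn is_participant(bytes: &[u8], i: usize) -> bool {
  (bytes[i / 8] >> (i % 8)) & 1 == 1
}
```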
* Update the coordinator binary for the new DKG
This does not yet update any tests.
* Add sensible Debug to key_gen::[Processor, Coordinator]Message
* Have the DKG explicitly declare how to interpolate its shares
Removes the hack for MuSig where we multiply keys by the inverse of their
Lagrange interpolation factor.
* Replace Interpolation::None with Interpolation::Constant
Allows the MuSig DKG to keep the secret share as the original private key,
enabling deriving FROST nonces consistently regardless of the MuSig context.
* Get coordinator tests to pass
* Update spec to the new DKG
* Get clippy to pass across the repo
* cargo machete
* Add an extra sleep to ensure expected ordering of `Participation`s
* Update orchestration
* Remove bad panic in coordinator
It expected ConfirmationShare to be n-of-n, not t-of-n.
* Improve documentation on functions
* Update TX size limit
We now no longer have to support the ridiculous case of having 49 DKG
participations within a 101-of-150 DKG. It does remain quite high due to
needing to _sign_ so many times. It may be optimal for parties with multiple
key shares to independently send their preprocesses/shares (despite the
overhead that would cause with signatures and the transaction structure).
* Correct error in the Processor spec document
* Update a few comments in the validator-sets pallet
* Send/Recv Participation one at a time
Sending all, then attempting to receive all in an expected order, wasn't working
even with notable delays between sending messages. This points to the mempool
not working as expected...
* Correct ThresholdKeys serialization in modular-frost test
* Updating existing TX size limit test for the new DKG parameters
* Increase time allowed for the DKG on the GH CI
* Correct construction of signature_participants in serai-client tests
Fault identified by akil.
* Further contextualize DkgConfirmer by ValidatorSet
Caught by a safety check that we wouldn't reuse preprocesses across messages.
That raises the question of whether we were previously reusing preprocesses
(reusing keys)? Except that would have caused a variety of signing failures
(suggesting we had some staggered timing avoiding it in practice, but yes, this
was possible in theory).
* Add necessary calls to set_embedded_elliptic_curve_key in coordinator set rotation tests
* Correct shimmed setting of a secq256k1 key
* cargo fmt
* Don't use `[0; 32]` for the embedded keys in the coordinator rotation test
The key_gen function expects the random values to already be decided.
* Big-endian secq256k1 scalars
Also restores the prior, safer, Encryption::register function.
* Clean up Ethereum
* Consistent contract address for deployed contracts
* Flesh out Router a bit
* Add a Deployer for DoS-less deployment
* Implement Router-finding
* Use CREATE2 helper present in ethers
* Move from CREATE2 to CREATE
Bit more streamlined for our use case.
* Document ethereum-serai
* Tidy tests a bit
* Test updateSeraiKey
* Use encodePacked for updateSeraiKey
* Take in the block hash to read state during
* Add a Sandbox contract to the Ethereum integration
* Add retrieval of transfers from Ethereum
* Add inInstruction function to the Router
* Augment our handling of InInstructions events with a check the transfer event also exists
* Have the Deployer error upon failed deployments
* Add --via-ir
* Make get_transaction test-only
We only used it to get transactions to confirm the resolution of Eventualities.
Eventualities need to be modularized. By introducing the dedicated
confirm_completion function, we remove the need for a non-test get_transaction
AND begin this modularization (by no longer explicitly grabbing a transaction
to check with).
* Modularize Eventuality
Almost fully deprecates the Transaction trait in favor of Completion. Replaces
the Transaction ID with a Claim (sketched below).
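A rough sketch of the shape this modularization takes; the trait and method names here are hypothetical, not the processor's actual definitions:
```rust
use std::io;

// An expected on-chain effect of a Plan. It's identified by a network-specific
// Claim (rather than a transaction ID) and resolves to a Completion (rather
// than a full Transaction).
trait Eventuality: Sized {
  // Minimal data needed to look up whether this eventuality resolved.
  type Claim: AsRef<[u8]>;
  // Proof of completion, checked without a general-purpose get_transaction.
  type Completion;

  fn claim(&self) -> Self::Claim;
  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self>;
  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()>;
}
```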
* Modularize the Scheduler behind a trait
* Add an extremely basic account Scheduler
* Add nonce uses, key rotation to the account scheduler
* Only report the account Scheduler empty after transferring keys
Also ban payments to the branch/change/forward addresses.
* Make fns reliant on state test-only
* Start of an Ethereum integration for the processor
* Add a session to the Router to prevent updateSeraiKey replaying
This would only happen if an old key was rotated to again, which would require
n-of-n collusion (already ridiculous and a valid fault attributable event). It
just clarifies the formal arguments.
* Add a RouterCommand + SignMachine for producing it to coins/ethereum
* Ethereum which compiles
* Have branch/change/forward return an option
Also defines a UtxoNetwork extension trait for MAX_INPUTS.
* Make external_address exclusively a test fn
* Move the "account" scheduler to "smart contract"
* Remove ABI artifact
* Move refund/forward Plan creation into the Processor
We create forward Plans in the scan path, and need to know their exact fees in
the scan path. This requires adding a somewhat wonky shim_forward_plan method
so we can obtain a Plan equivalent to the actual forward Plan for fee reasons,
yet don't expect it to be the actual forward Plan (which may be distinct if
the Plan pulls from the global state, such as with a nonce).
Also properly types a Scheduler addendum such that the SC scheduler isn't
cramming the nonce to use into the N::Output type.
* Flesh out the Ethereum integration more
* Two commits ago, into the **Scheduler, not Processor
* Remove misc TODOs in SC Scheduler
* Add constructor to RouterCommandMachine
* RouterCommand read, pairing with the prior added write
* Further add serialization methods
* Have the Router's key included with the InInstruction
This does not use the key at the time of the event. This uses the key at the
end of the block for the event. It's much simpler than getting the full event
streams for each, checking when they interlace.
This does not read the state. Every block, this makes a request for every
single key update and simply chooses the last one (see the sketch after this
entry). This allows pruning state, only keeping the event tree. Ideally, we'd
also introduce a cache to reduce the cost of the filter (small in events
yielded, long in blocks searched).
Since Serai doesn't have any forwarding TXs, nor Branches, nor change, all of
our Plans should solely have payments out, and there's no expectation of a Plan
made under one key being broken by it being received by another key.
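A simplified sketch of choosing the key in effect at the end of a block under that strategy (types and names hypothetical): collect every key-update event up to and including the block in question and take the last one.
```rust
// A key-update event, ordered by where it appeared on-chain.
struct KeyUpdate {
  block_number: u64,
  log_index: u64,
  key: [u8; 32],
}

// The key at the end of `block` is simply the most recent update at or before it.
fn key_at_end_of_block(updates: &[KeyUpdate], block: u64) -> Option<[u8; 32]> {
  updates
    .iter()
    .filter(|update| update.block_number <= block)
    .max_by_key(|update| (update.block_number, update.log_index))
    .map(|update| update.key)
}
```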
* Add read/write to InInstruction
* Abstract the ABI for Call/OutInstruction in ethereum-serai
* Fill out signable_transaction for Ethereum
* Move ethereum-serai to alloy
Resolves #331.
* Use the opaque sol macro instead of generated files
* Move the processor over to the now-alloy-based ethereum-serai
* Use the ecrecover provided by alloy
* Have the SC use nonce for rotation, not session (an independent nonce which wasn't synchronized)
* Always use the latest keys for SC scheduled plans
* get_eventuality_completions for Ethereum
* Finish fleshing out the processor Ethereum integration as needed for serai-processor tests
This doesn't support any actual deployments, not even the ones simulated by
serai-processor-docker-tests.
* Add alloy-simple-request-transport to the GH workflows
* cargo update
* Clarify a few comments and make one check more robust
* Use a string for 27.0 in .github
* Remove optional from no-longer-optional dependencies in processor
* Add alloy to git deny exception
* Fix no longer optional specification in processor's binaries feature
* Use a version of foundry from 2024
* Correct fetching Bitcoin TXs in the processor docker tests
* Update rustls to resolve RUSTSEC warnings
* Use the monthly nightly foundry, not the deleted daily nightly
* add input script check
* add test
* optimizations
* bug fix
* fix pr comments
* Test SegWit-encoded data using a single output (not two)
* Remove TODO used as a question, document origins when SegWit encoding
---------
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
Moves from concatted Dockerfiles to pseudo-templated Dockerfiles via a dedicated Rust program.
Removes the unmaintained kubernetes, not because we shouldn't have/use it, but because it's unmaintained and needs to be reworked before it's present again.
Replaces the compose with the work in the new orchestrator binary which spawns everything as expected. While this arguably re-invents the wheel, it correctly manages secrets and handles the variadic Dockerfiles.
Also adds an unrelated patch for zstd and simplifies running services a bit by making greater use of the existing infrastructure.
---
* Delete all Dockerfile fragments, add new orchestrator to generate Dockerfiles
Enables greater templating.
Also delete the unmaintained kubernetes folder *for now*. This should be
restored in the future.
* Use Dockerfiles from the orchestrator
* Ignore Dockerfiles in the git repo
* Remove CI job to check Dockerfiles are as expected now that they're no longer committed
* Remove old Dockerfiles from repo
* Use Debian for monero-wallet-rpc
* Remove replace_cmds for proper usage of entry-dev
Consolidates ports a bit.
Updates serai-docker-tests from "compose" to "build".
* Only write a new dockerfile if it's distinct
Preserves the updated time metadata.
* Update serai-docker-tests
* Correct the path Dockerfiles are built from
* Correct inclusion of orchestration folder in Docker builds
* Correct debug/release flagging in the cargo command
Apparently, --debug isn't an effective NOP but rather an error.
* Correct path used to run the Serai node within a Dockerfile
* Correct path in Monero Dockerfile
* Attempt storing monerod in /usr/bin
* Use sudo to move into /usr/bin in CI
* Correct 18.3.0 to 18.3.1
* Escape * with quotes
* Update deny.toml, ADD orchestration in runtime Dockerfile
* Add --detach to the Monero GH CI
* Diversify dockerfiles by network
* Fixes to network-diversified orchestration
* Bitcoin and Monero testnet scripts
* Permissions and tweaks
* Flatten scripts folders
* Add missing folder specification to Monero Dockerfile
* Have monero-wallet-rpc specify the monerod login
* Have the Docker CMD specify env variables inserted at time of Dockerfile generation
They're overridable with the global environment, as for tests. This enables
variable generation in the orchestrator and output to productionized Docker
files without creating a life-long file within the Docker container.
* Don't add Dockerfiles into Docker containers now that they have secrets
Solely add the source code for them as needed to satisfy the workspace bounds.
* Download arm64 Monero on arm64
* Ensure constant host architecture when reproducibly building the wasm
Host architecture, for some reason, can affect the generated code despite the
target architecture always being foreign to the host architecture.
* Randomly generate infrastructure keys
* Have orchestrator generate a key, be able to create/start containers
* Ensure bash is used over sh
* Clean dated docs
* Change how quoting occurs
* Standardize to sh
* Have Docker test build the dev Dockerfiles
* Only key_gen once
* cargo update
Adds a patch for zstd and reconciles the breaking nightly change which just
occurred.
* Use a dedicated network for Serai
Also fixes SERAI_HOSTNAME passed to coordinator.
* Support providing a key over the env for the Serai node
* Enable and document running daemons for tests via serai-orchestrator
Has running containers under the dev network port forward the RPC ports.
* Use volumes for bitcoin/monero
* Use bitcoin's run.sh in GH CI
* Only use the volume for testnet (not dev)
* Schedule re-attempts and add a (not filled out) match statement to actually execute them
A comment explains the methodology. To copy it here:
"""
This is because we *always* re-attempt any protocol which had participation. That doesn't
mean we *should* re-attempt this protocol.
The alternatives were:
1) Note on-chain we completed a protocol, halting re-attempts upon 34%.
2) Vote on-chain to re-attempt a protocol.
This schema doesn't have any additional messages upon the success case (whereas
alternative #1 does) and doesn't have overhead (as alternative #2 does, sending votes and
then preprocesses. This only sends preprocesses).
"""
Any signing protocol which reaches sufficient participation will be
re-attempted until it no longer does.
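A minimal sketch of that policy (the two-thirds threshold and the names here are assumptions, per the 2/3rds figure used later in this log): a re-attempt is scheduled whenever an attempt sees participation from at least two-thirds of the key shares, so a protocol keeps being re-attempted until it stops reaching that level of participation or completes.
```rust
// Schedule another attempt if this attempt had sufficient participation.
fn should_schedule_reattempt(participating_shares: u64, total_shares: u64) -> bool {
  // Participation from at least two-thirds of all key shares.
  (3 * participating_shares) >= (2 * total_shares)
}
```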
* Have the Substrate scanner track DKG removals/completions for the Tributary code
* Don't keep trying to publish a participant removal if we've already set keys
* Pad out the re-attempt match a bit more
* Have CosignEvaluator reload from the DB
* Correctly schedule cosign re-attempts
* Actually spawn new DKG removal attempts
* Use u32 for Batch ID in SubstrateSignableId, finish Batch re-attempt routing
The batch ID was an opaque [u8; 5] which also included the network, yet that's
redundant and unhelpful.
* Clarify a pair of TODOs in the coordinator
* Remove old TODO
* Final comment cleanup
* Correct usage of TARGET_BLOCK_TIME in reattempt scheduler
It's in ms and I assumed it was in s.
* Have coordinator tests drop BatchReattempts which aren't relevant yet may exist
* Bug fix and pointless oddity removal
We scheduled a re-attempt upon receiving 2/3rds of preprocesses and upon
receiving 2/3rds of shares, so any signing protocol could cause two re-attempts
(not one more).
The coordinator tests randomly generated the Batch ID since it was previously
an opaque byte array. While that didn't break the test, it was pointless and
did make the already-succeeded check before re-attempting impossible to hit.
* Add log statements, correct dead-lock in coordinator tests
* Increase pessimistic timeout on recv_message to compensate for tighter best-case timeouts
* Further bump timeout by a minute
AFAICT, GH failed by just a few seconds.
This also is worst-case in a single instance, making it fine to be decently long.
* Further further bump timeout due to lack of distinct error
* Remove NetworkId from processor-messages
Because intent binds to the sender/receiver, it's not needed for intent.
The processor knows what the network is.
The coordinator knows which to use because it's sending this message to the
processor for that network.
Also removes the unused zeroize.
* ProcessorMessage::Completed use Session instead of key
* Move SubstrateSignId to Session
* Finish replacing key with session
* Add SignalsConfig to chain_spec
* Correct multiexp feature flagging for rand_core std
* Remove bincode for borsh
Replaces a non-canonical encoding with a canonical encoding which additionally
should be faster.
Also fixes an issue where we used bincode in transcripts where it cannot be
trusted.
This ended up fixing a myriad of other bugs observed, unfortunately.
Accordingly, it either has to be merged or the bug fixes from it must be ported
to a new PR.
* Make serde optional, minimize usage
* Make borsh an optional dependency of substrate/ crates
* Remove unused dependencies
* Use [u8; 64] where possible in the processor messages
* Correct borsh feature flagging
* Add a function to deterministically decide which Serai blocks should be co-signed
Has a 5-minute latency between co-signs, also used as the maximal latency
before a co-sign is started (sketched below).
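A rough sketch of the kind of rule described; the constant and structure are assumptions, not the actual function: a block becomes a cosign candidate once five minutes have passed since the previously cosigned block.
```rust
// Five minutes, in milliseconds: both the spacing between cosigns and the
// maximum time a block waits before a cosign is started.
const COSIGN_LATENCY_MS: u64 = 5 * 60 * 1000;

// Deterministic check every node will agree on, using on-chain timestamps.
fn should_cosign(last_cosigned_block_time_ms: u64, block_time_ms: u64) -> bool {
  block_time_ms >= last_cosigned_block_time_ms + COSIGN_LATENCY_MS
}
```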
* Get all active tributaries we're in at a specific block
* Add and route CosignSubstrateBlock, a new provided TX
* Split queued cosigns per network
* Rename BatchSignId to SubstrateSignId
* Add SubstrateSignableId, a meta-type for either Batch or Block, and modularize around it
* Handle the CosignSubstrateBlock provided TX
* Revert substrate_signer.rs to develop (and patch to still work)
Due to SubstrateSigner moving when the prior multisig closes, yet cosigning
occurring with the most recent key, a single SubstrateSigner can be reused.
We could manage multiple SubstrateSigners, yet considering the much lower
specifications for cosigning, I'd rather treat it distinctly.
* Route cosigning through the processor
* Add note to rename SubstrateSigner post-PR
I don't want to do so now in order to preserve the diff's clarity.
* Implement cosign evaluation into the coordinator
* Get tests to compile
* Bug fixes, mark blocks without cosigners available as cosigned
* Correct the ID Batch preprocesses are saved under, add log statements
* Create a dedicated function to handle cosigns
* Correct the flow around Batch verification/queueing
Verifying `Batch`s could stall when a `Batch` was signed before its
predecessors/before the block it's contained in was cosigned (the latter being
inevitable as we can't sign a block containing a signed batch before signing
the batch).
Now, Batch verification happens on a distinct async task in order to not block
the handling of processor messages. This task is the sole caller of verify in
order to ensure last_verified_batch isn't unexpectedly mutated.
When the processor message handler needs to access it, or needs to queue a
Batch, it associates the DB TXN with a lock preventing the other task from
doing so.
This lock, as currently implemented, is a poor and inefficient design. It
should be modified to the pattern used for cosign management. Additionally, a
new primitive of a DB-backed channel may be immensely valuable.
Fixes a standing potential deadlock and a deadlock introduced with the
cosigning protocol.
* Working full-stack tests
After the last commit, this only required extending a timeout.
* Replace "co-sign" with "cosign" to make finding text easier
* Update the coordinator tests to support cosigning
* Inline prior_batch calculation to prevent panic on rotation
Noticed when doing a final review of the branch.
If a user transferred in without an InInstruction, and the amount exactly
matched a forwarded output, the user's output would fulfill the
forwarding. Then the forwarded output would come along, have no InInstruction,
and be refunded (to the prior multisig) when the user should've been refunded.
Adding this new address type resolves such concerns.
This code is still largely designed around the idea a payment for a network is
fungible with any other, which isn't true. This starts moving past that.
Asserts are added to ensure the integrity of coin to the scheduler (which is
now per key per coin, not per key alone) and in Bitcoin/Monero prepare_send.
The prior system spawned a new connection per request to enable parallelism,
yet kept hitting hyper::IncompleteMessage errors I couldn't track down. This
attempts to resolve those by using a long-lived socket.
Halves the number of requests per authenticated RPC call, and accordingly is
likely still better overall.
I don't believe this is resolved yet but this is still worth pushing.
Removes bitcoin-serai's usage of sha2 for bitcoin-hashes. While sha2 is still
in play due to modular-frost (more specifically, due to ciphersuite), this
offers a bit more performance (assuming equivalency between sha2 and
bitcoin-hashes' impl) due to removing a static for a const.
Makes secp256k1 a dev dependency for bitcoin-serai. While secp256k1 is still
pulled in via bitcoin, it's hopefully slightly better to compile now and makes
usage of secp256k1 an implementation detail of bitcoin (letting it change it
freely).
Also offers slightly more efficient signing as we don't decode to a signature
just to re-encode for the transaction.
Removes a 20s sleep for a check every second, up to 20 times, for reduced test
times in the processor.
* Move pallet-asset-conversion
* update licensing
* initial integration
* Integrate Currency & Assets types
* integrate liquidity tokens
* fmt
* integrate dex pallet tests
* fmt
* compilation error fixes
* integrate dex benchmarks
* fmt
* cargo clippy
* replace all occurrences of "asset" with "coin"
* add the actual add liq/swap logic to in-instructions
* add client side & tests
* fix deny
* Lint and changes
- Renames InInstruction::AddLiquidity to InInstruction::SwapAndAddLiquidity
- Makes create_pool an internal function
- Makes dex-pallet exclusively create pools against a native coin
- Removes various fees
- Adds new crates to GH workflow
* Fix rebase artifacts
* Correct other rebase artifact
* Correct CI specification for liquidity-tokens
* Correct primitives' test to the standardized pallet account scheme
---------
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
* Update the coordinator to give key shares based on weight, not based on existence
Participants are now identified by their starting index. While this compiles,
the following is unimplemented:
1) A conversion for DKG `i` values. It assumes the threshold `i` values used
will be identical for the MuSig signature used to confirm the DKG.
2) Expansion from compressed values to full values before forwarding to the
processor.
* Add a fn to the DkgConfirmer to convert `i` values as needed
Also removes TODOs regarding Serai ensuring validator key uniqueness +
validity. The current infra achieves both.
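A hedged sketch of such a conversion (not the coordinator's actual code): with participants identified by their starting index, a validator's MuSig position maps to the contiguous range of threshold `i` values covering their key shares.
```rust
use std::ops::Range;

// Given each validator's key-share weight, in order, return the DKG share indices
// belonging to the validator at MuSig position `musig_i` (0-based here for simplicity).
fn dkg_share_indices(weights: &[u16], musig_i: usize) -> Range<u16> {
  let start: u16 = weights[.. musig_i].iter().sum();
  start .. (start + weights[musig_i])
}
```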
* Have the Tributary DB track participation by shares, not by count
* Prevent a node from obtaining 34% of the maximum amount of key shares
This is actually mainly intended to set a bound on message sizes in the
coordinator. Message sizes are amplified by the amount of key shares held, so
setting an upper bound on said amount lets it determine constants. While that
upper bound could be 150, that'd be unreasonable and increase the potential for
DoS attacks.
* Correct the mechanism to detect if sufficient accumulation has occurred
It used to check if the latest accumulation hit the required threshold exactly.
Now, accumulations may jump past the required threshold, so the required
mechanism is to check that the threshold wasn't previously met and is now met
(see the sketch below).
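A minimal sketch of the corrected check (names hypothetical):
```rust
// An accumulation may add several key shares at once, jumping past the
// threshold, so check for the crossing rather than an exact hit.
fn newly_met_threshold(prior_shares: u64, added_shares: u64, threshold: u64) -> bool {
  (prior_shares < threshold) && ((prior_shares + added_shares) >= threshold)
}
```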
* Finish updating the coordinator to handle a multiple key share per validator environment
* Adjust strategy re: preventing nonce reuse in DKG Confirmer
* Add TODOs regarding dropped transactions, add possible TODO fix
* Update tests/coordinator
This doesn't add new multi-key-share tests, it solely updates the existing
single key-share tests to compile and run, with the necessary fixes to the
coordinator.
* Update processor key_gen to handle generating multiple key shares at once
* Update SubstrateSigner
* Update signer, clippy
* Update processor tests
* Update processor docker tests
Even though the intent was to test against 0.17.3.2, and a Monero 0.17.3.2 node
was running, the processor now uses docker which will always use 0.18.
Accordingly, while the intent was valid, it was pointless.
This is unfortunate, as testing against 0.17 helped protect against edge cases.
The infra to preserve their tests isn't worth the benefit we'd gain from said
tests however.
The lack of locking the connection when making an authenticated request, which
is actually two sequential requests, risked another caller making a request in
between, invalidating the state.
Now, only unauthenticated connections share a connection object.
Disables the unused zmq RPC.
Removes authentication which seems to be unstable as hell when under load
(see #351).
No longer use Network::Isolated as it's not needed here (the Monero nodes run
with `--offline`).
Implements most of #297 to the point I'm fine closing it. The solution
implemented is distinct from what was originally designed, yet much simpler.
Since we have a fully-linear view of created transactions, we don't have to
per-output track operating costs incurred by that output. We can track it
across the entire Serai system, without hooking into the Eventuality system.
Also updates documentation.
* Revert "Correct the prior documented TOCTOU"
This reverts commit d50fe87801.
* Correct the prior documented TOCTOU
d50fe87801 edited the challenge for the Batch to
fix it. This won't produce Batch n+1 until Batch n is successfully published
and verified. It's an alternative strategy able to be reviewed, with a much
smaller impact to scope.
Now, if a malicious validator set publishes a malicious `Batch` at the last
moment, it'll cause all future `Batch`s signed by the next validator set to
require a bool being set (yet they never will set it).
This will prevent the handover.
The only overhead is having two distinct `batch_message` calls on-chain.
Previously, we only supported a single Tributary per network, and spawned a
task to handle Processor messages per Tributary. We then moved to handling
Processor messages per network, yet still only supported a single Tributary in
that handling function.
Now, when we handle a message, we load the Tributary which is relevant. Once we
know it, we ensure we have it (preventing race conditions), and then proceed.
We do need work to check if we should have a Tributary, or if we're not
participating. We also need to check if a Tributary has been retired, meaning
we shouldn't handle any transactions related to it, and to clean up retired
Tributaries.
* Design and document a multisig rotation flow
* Make Scanner::eventualities a HashMap so it's per-key
* Don't drop eventualities, always follow through on them
Technical improvements made along the way.
* Start creating an isolate object to manage multisigs, which doesn't require being a signer
Removes key from SubstrateBlock.
* Move Scanner/Scheduler under multisigs
* Move Batch construction into MultisigManager
* Clarify "should" in Multisig Rotation docs
* Add block_number to MultisigManager, as it controls the scanner
* Move sign_plans into MultisigManager
Removes ThresholdKeys from prepare_send.
* Make SubstrateMutable an alias for MultisigManager
* Rewrite Multisig Rotation
The prior scheme had an exploit possible where funds were sent to the old
multisig, then burnt on Serai to send from the new multisig, locking liquidity
for 6 hours. While a fee could be applied to stragglers, to make this attack
unprofitable, the newly described scheme avoids all this.
* Add mini
mini is a miniature version of Serai, emphasizing Serai's nature as a
collection of independent clocks. The intended use is to identify race
conditions and prove protocols are comprehensive regarding when certain clocks
tick.
This uses loom, a prior candidate for evaluating the processor/coordinator as
free of race conditions (#361).
* Use mini to prove a race condition in the current multisig rotation docs, and prove safety of alternatives
Technically, the prior commit had mini prove the race condition.
The docs currently say the activation block of the new multisig is the block
after the next Batch's. If the two next Batches had already entered the
mempool, prior to set_keys being called, the second next Batch would be
expected to contain the new key's data yet fail to, as the key wasn't public
when the Batch was actually created.
The naive solution is to create a Batch, publish it, wait until it's included,
and only then scan the next block. This sets a bound of
`Batch publication time < block time`. Optimistically, we can publish a Batch
in 24s while our shortest block time is 2m. Accordingly, we should be fine with
the naive solution which doesn't take advantage of throughput. #333 may
significantly change latency however and require an algorithm whose throughput
exceeds the rate of blocks created.
In order to re-introduce parallelization, enabling throughput, we need to
define a safe range of blocks to scan without Serai ordering the first one.
mini demonstrates safety of scanning n blocks Serai hasn't acknowledged, so
long as the first is scanned before block n+1 is (shifting the n-block window).
The docs will be updated next, to reflect this.
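A small sketch of that bound (constant and names hypothetical): the scanner may run up to n blocks past the latest block Serai has acknowledged, and waits once it would exceed that window.
```rust
// How many blocks the scanner may be ahead of the last Serai-acknowledged block.
const WINDOW: u64 = 10;

// Whether it's safe to scan `block` before Serai acknowledges it.
fn may_scan(block: u64, latest_acknowledged_block: u64) -> bool {
  block <= latest_acknowledged_block + WINDOW
}
```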
* Fix Multisig Rotation
I believe this is finally good enough to be final.
1) Fixes the race condition present in the prior document, as demonstrated by
mini.
`Batch`s for block `n` and `n+1`, may have been in the mempool when a
multisig's activation block was set to `n`. This would cause a potentially
distinct `Batch` for `n+1`, despite `n+1` already having a signed `Batch`.
2) Tightens when UIs should use the new multisig to prevent eclipse attacks,
and protection against `Batch` publication delays.
3) Removes liquidity fragmentation by tightening flow/handling of latency.
4) Several clarifications and documentation of reasoning.
5) Correction of "prior multisig" to "all prior multisigs" regarding historical
verification, with explanation why.
* Clarify terminology in mini
Synchronizes it from my original thoughts on potential schema to the design
actually created.
* Remove most of processor's README for a reference to docs/processor
This does drop some misc commentary, though none too beneficial. The section on
scanning, deemed sufficiently beneficial, has been moved to a document and
expanded on.
* Update scanner TODOs in line with new docs
* Correct documentation on Bitcoin::Block::time, and Block::time
* Make the scanner in MultisigManager no longer public
* Always send ConfirmKeyPair, regardless of if in-set
* Cargo.lock changes from a prior commit
* Add a policy document on defining a Canonical Chain
I accidentally committed a version of this with a few headers earlier, and this
is a proper version.
* Competent MultisigManager::new
* Update processor's comments
* Add mini to copied files
* Re-organize Scanner per multisig rotation document
* Add RUST_LOG trace targets to e2e tests
* Have the scanner wait once it gets too far ahead
Also bug fixes.
* Add activation blocks to the scanner
* Split received outputs into existing/new in MultisigManager
* Select the proper scheduler
* Schedule multisig activation as detailed in documentation
* Have the Coordinator assert if multiple `Batch`s occur within a block
While the processor used to have ack_up_to_block, enabling skips in the block
acked, support for this was removed while reworking it for multiple multisigs.
It should happen extremely infrequently.
While it would still be beneficial to have if multiple `Batch`s could occur
within a block, multiple `Batch`s were blocked for DoS reasons (with the
complexity here not being worth adding that ban as a policy).
* Schedule payments to the proper multisig
* Correct >= to <
* Use the new multisig's key for change on schedule
* Don't report External TXs to prior multisig once deprecated
* Forward from the old multisig to the new one at all opportunities
* Move unfulfilled payments in queue from prior to new multisig
* Create MultisigsDb, splitting it out of MainDb
Drops the call to finish_signing from the Signer. While this will cause endless
re-attempts, the Signer will still consider them completed and drop them,
making this an O(n) cost at boot even if we did nothing from here.
The MultisigManager should call finish_signing once the Scanner completes the
Eventuality.
* Don't check Scanner-emitted completions, trust they are completions
Prevents needing to use async code to mark the completion and creates a
fault-free model. The current model, on fault, would cause a lack of marked
completion in the signer.
* Fix a possible panic in the processor
A shorter-chain reorg could cause this assert to trip. It's fixed by
de-duplicating the data, as the assertion checked consistency. Without the
potential for inconsistency, it's unnecessary.
* Document why an existing TODO isn't valid
* Change when we drop payments for being to the change address
The earlier timing prevents creating Plans solely to the branch address,
causing the payments to be dropped, and the TX to become an effective
aggregation TX.
* Extensively document solutions to Eventualities being potentially created after having already scanned their resolutions
* When closing, drop External/Branch outputs which don't cause progress
* Properly decide if Change outputs should be forwarded or not when closing
This completes all code needed to make the old multisig have a finite lifetime.
* Commentary on forwarding schemes
* Provide a 1 block window, with liquidity fragmentation risks, due to latency
On Bitcoin, this will be 10 minutes for the relevant Batch to be confirmed. On
Monero, 2 minutes. On Ethereum, ~6 minutes.
Also updates the Multisig Rotation document with the new forwarding plan.
* Implement transaction forwarding from old multisig to new multisig
Identifies a fault where Branch outputs which shouldn't be dropped may be, if
another output fulfills their next step. Locking Branch fulfillment down to
only Branch outputs is not done in this commit, but will be in the next.
* Only let Branch outputs fulfill branches
* Update TODOs
* Move the location of handling signer events to avoid a race condition
* Avoid a deadlock by using a RwLock on a single txn instead of two txns
* Move Batch ID out of the Scanner
* Increase from one block of latency on new keys activation to two
For Monero, this offered just two minutes when our latency to publish a Batch
is around a minute already. This does increase the time our liquidity can be
fragmented by up to 20 minutes (Bitcoin), yet it's a stupid attack only
possible once a week (when we rotate). Prioritizing normal users' transactions
not being subject to forwarding is more important here.
Ideally, we'd not do +2 blocks but rather +`time`, such as +10 minutes, making
this agnostic of the underlying network's block scheduling. That complexity
isn't worth it.
* Split MultisigManager::substrate_block into multiple functions
* Further tweaks to substrate_block
* Acquire a lock on all Scanner operations after calling ack_block
Gives time to call register_eventuality and initiate signing.
* Merge sign_plans into substrate_block
Also ensure the Scanner's lock isn't prematurely released.
* Use a HashMap to pass to-be-forwarded instructions, not the DB
* Successfully determine in ClosingExisting
* Move from 2 blocks of latency when rotating to 10 minutes
Superior as noted in 6d07af92ce10cfd74c17eb3400368b0150eb36d7, now trivial to
implement thanks to prior commit.
* Add note justifying measuring time in blocks when rotating
* Implement delaying of outputs received early to the new multisig per specification
* Documentation on why Branch outputs don't have the race condition concerns Change do
Also ensures 6 hours is at least N::CONFIRMATIONS, for sanity purposes.
* Remove TODO re: sanity checking Eventualities
We sanity check the Plan the Eventuality is derived from, and the Eventuality
is handled moments later (in the same file, with a clear call path). There's no
reason to add such APIs to Eventualities for a sanity check given that.
* Add TODO(now) for TODOs which must be done in this branch
Also deprecates a pair of TODOs to TODO2, and accepts the flow of the Signer
having the Eventuality.
* Correct errors in potential/future flow descriptions
* Accept having a single Plan Vec
Per the following code consuming it, there's no benefit to bifurcating it by
key.
* Only issue sign_transaction on boot for the proper signer
* Only set keys when participating in their construction
* Misc progress
Only send SubstrateBlockAck when we have a signer, as it's only used to tell
the Tributary of what Plans are being signed in response to this block.
Only immediately sets substrate_signer if session is 0.
On boot, doesn't panic if we don't have an active key (as we wouldn't if only
joining the next multisig). Continues.
* Correctly detect and set retirement block
Modifies the retirement block from the first block meeting requirements to the
block CONFIRMATIONS after.
Adds an ack flow to the Scanner's Confirmed event and Block event to accomplish
this, which may deadlock at this time (will be fixed shortly).
Removes an invalid await (after a point declared unsafe to use await) from
MultisigsManager::next_event.
* Remove deadlock in multisig_completed and document alternative
The alternative is simpler, albeit less efficient. There's no reason to adopt
it now, yet perhaps if it benefits modeling?
* Handle the final step of retirement, dropping the old key and setting new to existing
* Remove TODO about emitting a Block on every step
If we emit on NewAsChange, we lose the purpose of the NewAsChange period.
The only concern is if we reach ClosingExisting, and nothing has happened, then
all coins will still be in the old multisig until something finally does. This
isn't a problem worth solving, as it's latency under exceptional dead time.
* Add TODO about potentially not emitting a Block event for the retirement block
* Restore accidentally deleted CI file
* Pair of slight tweaks
* Add missing if statement
* Disable an assertion when testing
One of the test flows currently abuses the Scanner in a way triggering it.