Commit graph

42 commits

Author SHA1 Message Date
Luke Parker
1e8a9ec5bd Smash out the signer
Abstract, to be done for the transactions, the batches, the cosigns, the slash
reports, everything. It has a minimal API itself, intending to be as clear as
possible.
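
As a rough illustration of what such a minimal API could look like (the trait and type names below are hypothetical, not the ones this commit introduces), the surface reduces to feeding coordinator messages in and getting a completed signature out:

```rust
// A minimal sketch of an abstract signer interface; names are hypothetical.
// The idea is that transactions, batches, cosigns, and slash reports would all
// implement the same small surface, keeping each concrete signer clear.

/// A message received from the coordinator for an in-progress signing protocol.
pub enum SignMessage {
  Preprocesses(Vec<Vec<u8>>),
  Shares(Vec<Vec<u8>>),
}

/// Something which can be signed by the abstract signer.
pub trait Signable {
  /// An identifier binding this signing protocol to its intent.
  fn id(&self) -> [u8; 32];
}

/// The minimal signer API: feed it messages, it eventually yields a signature.
pub trait Signer<S: Signable> {
  /// Begin (or re-attempt) signing the given item.
  fn sign(&mut self, item: S);
  /// Handle a coordinator message, returning a completed signature if any.
  fn handle(&mut self, msg: SignMessage) -> Option<Vec<u8>>;
}
```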
2024-09-19 23:36:32 -07:00
Luke Parker
e4e4245ee3
One Round DKG (#589)
* Upstream GBP, divisor, circuit abstraction, and EC gadgets from FCMP++

* Initial eVRF implementation

Not quite done yet. It needs to communicate the resulting points and proofs to
extract them from the Pedersen Commitments in order to return those, and then
be tested.

* Add the openings of the PCs to the eVRF as necessary

* Add implementation of secq256k1

* Make DKG Encryption a bit more flexible

No longer requires the use of an EncryptionKeyMessage, and allows pre-defined
keys for encryption.

* Make NUM_BITS an argument for the field macro

* Have the eVRF take a Zeroizing private key

* Initial eVRF-based DKG

* Add embedwards25519 curve

* Inline the eVRF into the DKG library

Due to how we're handling share encryption, we'd either need two circuits or to
dedicate this circuit to the DKG. The latter makes sense at this time.

* Add documentation to the eVRF-based DKG

* Add paragraph claiming robustness

* Update to the new eVRF proof

* Finish routing the eVRF functionality

Still needs errors and serialization, along with a few other TODOs.

* Add initial eVRF DKG test

* Improve eVRF DKG

Updates how we calculate verification shares, improves performance when
extracting multiple sets of keys, and adds more to the test for it.

* Start using a proper error for the eVRF DKG

* Resolve various TODOs

Supports recovering multiple key shares from the eVRF DKG.

Inlines two loops to save 2**16 iterations.

Adds support for creating a constant time representation of scalars < NUM_BITS.

* Ban zero ECDH keys, document non-zero requirements

* Implement eVRF traits, all the way up to the DKG, for secp256k1/ed25519

* Add Ristretto eVRF trait impls

* Support participating multiple times in the eVRF DKG

* Only participate once per key, not once per key share

* Rewrite processor key-gen around the eVRF DKG

Still a WIP.

* Finish routing the new key gen in the processor

Doesn't touch the tests, coordinator, nor Substrate yet.
`cargo +nightly fmt && cargo +nightly-2024-07-01 clippy --all-features -p serai-processor`
does pass.

* Deduplicate and better document in processor key_gen

* Update serai-processor tests to the new key gen

* Correct amount of yx coefficients, get processor key gen test to pass

* Add embedded elliptic curve keys to Substrate

* Update processor key gen tests to the eVRF DKG

* Have set_keys take signature_participants, not removed_participants

Now no one is removed from the DKG. Only `t` people publish the key however.

Uses a BitVec for an efficient encoding of the participants.
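
A hand-rolled sketch of why a bitfield is the efficient encoding here (illustrative only; the actual code uses a BitVec type rather than this helper):

```rust
// Illustrative only: encode which of the n validators signed as a bitfield,
// one bit per participant, rather than as a list of participant indices.
fn encode_participants(signed: &[bool]) -> Vec<u8> {
  let mut bytes = vec![0u8; (signed.len() + 7) / 8];
  for (i, &did_sign) in signed.iter().enumerate() {
    if did_sign {
      bytes[i / 8] |= 1 << (i % 8);
    }
  }
  bytes
}

fn main() {
  // 5 validators, of which a threshold of 3 signed.
  let signed = [true, false, true, true, false];
  // 5 participants fit in a single byte (0b0000_1101 here).
  assert_eq!(encode_participants(&signed), vec![0b0000_1101]);
}
```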

* Update the coordinator binary for the new DKG

This does not yet update any tests.

* Add sensible Debug to key_gen::[Processor, Coordinator]Message

* Have the DKG explicitly declare how to interpolate its shares

Removes the hack for MuSig where we multiply keys by the inverse of their
lagrange interpolation factor.

* Replace Interpolation::None with Interpolation::Constant

Allows the MuSig DKG to keep the secret share as the original private key,
enabling deriving FROST nonces consistently regardless of the MuSig context.
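
A hypothetical sketch of the distinction, with coefficients simplified to integers (the real `Interpolation` type lives in the DKG library and operates over scalars):

```rust
// Hypothetical sketch: how a DKG could declare share interpolation.
pub enum Interpolation {
  /// Standard Shamir secret sharing: weight each share by its Lagrange
  /// coefficient when reconstructing/signing.
  Lagrange,
  /// MuSig-style usage: every participant's share is their original private
  /// key, combined with a constant coefficient. This keeps FROST nonce
  /// derivation consistent regardless of the MuSig context.
  Constant,
}

// Coefficients are simplified to integers purely for illustration.
fn coefficient(interpolation: &Interpolation, lagrange_coefficient: u64) -> u64 {
  match interpolation {
    Interpolation::Lagrange => lagrange_coefficient,
    // No re-weighting of the share.
    Interpolation::Constant => 1,
  }
}

fn main() {
  assert_eq!(coefficient(&Interpolation::Constant, 42), 1);
  assert_eq!(coefficient(&Interpolation::Lagrange, 42), 42);
}
```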

* Get coordinator tests to pass

* Update spec to the new DKG

* Get clippy to pass across the repo

* cargo machete

* Add an extra sleep to ensure expected ordering of `Participation`s

* Update orchestration

* Remove bad panic in coordinator

It expected ConfirmationShare to be n-of-n, not t-of-n.

* Improve documentation on functions

* Update TX size limit

We now no longer have to support the ridiculous case of having 49 DKG
participations within a 101-of-150 DKG. It does remain quite high due to
needing to _sign_ so many times. It may be optimal for parties with multiple
key shares to independently send their preprocesses/shares (despite the
overhead that'll cause with signatures and the transaction structure).

* Correct error in the Processor spec document

* Update a few comments in the validator-sets pallet

* Send/Recv Participation one at a time

Sending all, then attempting to receive all in an expected order, wasn't working
even with notable delays between sending messages. This points to the mempool
not working as expected...

* Correct ThresholdKeys serialization in modular-frost test

* Updating existing TX size limit test for the new DKG parameters

* Increase time allowed for the DKG on the GH CI

* Correct construction of signature_participants in serai-client tests

Fault identified by akil.

* Further contextualize DkgConfirmer by ValidatorSet

Caught by a safety check that we wouldn't reuse preprocesses across messages.
That raises the question of whether we were previously reusing preprocesses
(reusing keys). Except that'd have caused a variety of signing failures
(suggesting we had some staggered timing avoiding it in practice, but yes, this
was possible in theory).

* Add necessary calls to set_embedded_elliptic_curve_key in coordinator set rotation tests

* Correct shimmed setting of a secq256k1 key

* cargo fmt

* Don't use `[0; 32]` for the embedded keys in the coordinator rotation test

The key_gen function expects the random values to already be decided.

* Big-endian secq256k1 scalars

Also restores the prior, safer, Encryption::register function.
2024-09-19 21:43:26 -04:00
Luke Parker
4913873b10
Slash reports (#523)
* report_slashes plumbing in Substrate

Notably delays the SetRetired event until it provides a slash report or the set
after it becomes the set to report its slashes.

* Add dedicated AcceptedHandover event

* Add SlashReport TX to Tributary

* Create SlashReport TXs

* Handle SlashReport TXs

* Add logic to generate a SlashReport to the coordinator

* Route SlashReportSigner into the processor

* Finish routing the SlashReport signing/TX publication

* Add serai feature to processor's serai-client
2024-01-29 03:48:53 -05:00
Luke Parker
c2fffb9887
Correct a couple years of accumulated typos 2023-12-17 02:06:51 -05:00
Luke Parker
065d314e2a
Further expand clippy workspace lints
Achieves a notable amount of reduced async and clones.
2023-12-17 00:04:49 -05:00
Luke Parker
6a172825aa
Reattempts (#483)
* Schedule re-attempts and add a (not filled out) match statement to actually execute them

A comment explains the methodology. To copy it here:

"""
This is because we *always* re-attempt any protocol which had participation. That doesn't
mean we *should* re-attempt this protocol.

The alternatives were:
1) Note on-chain we completed a protocol, halting re-attempts upon 34%.
2) Vote on-chain to re-attempt a protocol.

This schema doesn't have any additional messages upon the success case (whereas
alternative #1 does) and doesn't have overhead (as alternative #2 does, sending votes and
then preprocesses. This only sends preprocesses).
"""

Any signing protocol which reaches sufficient participation will be
re-attempted until it no longer does.
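
A minimal sketch of that rule (the two-thirds participation threshold and function shape here are assumptions for illustration):

```rust
// Illustrative: a protocol is (re-)scheduled for another attempt whenever it
// saw sufficient participation, regardless of whether it actually completed.
fn should_reattempt(participating_shares: u64, total_shares: u64) -> bool {
  // "Sufficient participation" is modeled here as >= 2/3 of all key shares.
  participating_shares * 3 >= total_shares * 2
}

fn main() {
  // 100 of 150 shares participated: schedule a re-attempt.
  assert!(should_reattempt(100, 150));
  // Only 49 of 150 participated: don't bother re-attempting.
  assert!(!should_reattempt(49, 150));
}
```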

* Have the Substrate scanner track DKG removals/completions for the Tributary code

* Don't keep trying to publish a participant removal if we've already set keys

* Pad out the re-attempt match a bit more

* Have CosignEvaluator reload from the DB

* Correctly schedule cosign re-attempts

* Actually spawn new DKG removal attempts

* Use u32 for Batch ID in SubstrateSignableId, finish Batch re-attempt routing

The batch ID was an opaque [u8; 5] which also included the network, yet that's
redundant and unhelpful.
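
The resulting shape is presumably along these lines (a sketch, not the exact definition; variant names are guesses):

```rust
// Sketch: the ID of something signable on/for Substrate. The batch ID is now a
// plain u32; the network it belongs to is known from context, so it's no
// longer packed into an opaque [u8; 5].
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum SubstrateSignableId {
  /// Cosigning a Serai block, identified here by its hash.
  Block([u8; 32]),
  /// Signing a Batch, identified by its (network-local) sequential ID.
  Batch(u32),
}

fn main() {
  let id = SubstrateSignableId::Batch(5);
  assert_eq!(id, SubstrateSignableId::Batch(5));
}
```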

* Clarify a pair of TODOs in the coordinator

* Remove old TODO

* Final comment cleanup

* Correct usage of TARGET_BLOCK_TIME in reattempt scheduler

It's in ms and I assumed it was in s.

* Have coordinator tests drop BatchReattempts which aren't relevant yet may exist

* Bug fix and pointless oddity removal

We scheduled a re-attempt upon receiving 2/3rds of preprocesses and upon
receiving 2/3rds of shares, so any signing protocol could cause two re-attempts
(not one more).

The coordinator tests randomly generated the Batch ID since it was prior an
opaque byte array. While that didn't break the test, it was pointless and did
make the already-succeeded check before re-attempting impossible to hit.

* Add log statements, correct dead-lock in coordinator tests

* Increase pessimistic timeout on recv_message to compensate for tighter best-case timeouts

* Further bump timeout by a minute

AFAICT, GH failed by just a few seconds.

This also is worst-case in a single instance, making it fine to be decently long.

* Further further bump timeout due to lack of distinct error
2023-12-12 12:28:53 -05:00
Luke Parker
571195bfda
Resolve #360 (#456)
* Remove NetworkId from processor-messages

Because intent binds to the sender/receiver, it's not needed for intent.

The processor knows what the network is.

The coordinator knows which to use because it's sending this message to the
processor for that network.

Also removes the unused zeroize.

* ProcessorMessage::Completed use Session instead of key

* Move SubstrateSignId to Session

* Finish replacing key with session
2023-11-26 12:14:23 -05:00
Luke Parker
d60e007126
Add a binaries feature to the processor to reduce dependencies when used as a lib
processor isn't intended to be used as a library, yet serai-processor-tests
does pull it in as a lib. This caused serai-processor-tests to need to compile
rocksdb, which added multiple minutes to the compilation time.
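
The general pattern, sketched in Rust (the actual wiring is split between the processor's Cargo.toml and its source; this standalone file just illustrates cfg-gating on a `binaries` feature):

```rust
// Sketch of the cfg-gating pattern: code (and, in Cargo.toml, the heavy
// dependencies such as rocksdb) only exists when the `binaries` feature is on.
// Library consumers of the crate never compile it.

#[cfg(feature = "binaries")]
mod binary_only {
  // Binary-only glue would live here (DB setup, logging, the run loop, ...).
  pub fn run() {
    println!("running the processor binary");
  }
}

#[cfg(feature = "binaries")]
fn main() {
  binary_only::run();
}

// When built without the `binaries` feature, provide a stub main so this
// illustrative file still compiles standalone.
#[cfg(not(feature = "binaries"))]
fn main() {}
```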
2023-11-25 04:04:52 -05:00
Luke Parker
b296be8515
Replace bincode with borsh (#452)
* Add SignalsConfig to chain_spec

* Correct multiexp feature flagging for rand_core std

* Remove bincode for borsh

Replaces a non-canonical encoding with a canonical encoding which additionally
should be faster.

Also fixes an issue where we used bincode in transcripts where it cannot be
trusted.

This ended up fixing a myriad of other bugs observed, unfortunately.
Accordingly, it either has to be merged or the bug fixes from it must be ported
to a new PR.
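
For reference, the borsh pattern looks roughly like this (assuming borsh 1.x with its derive feature; the struct is illustrative, not one of the actual message types):

```rust
use borsh::{BorshDeserialize, BorshSerialize};

// Illustrative message: borsh's spec defines exactly one encoding for this,
// so the bytes are safe to feed into a transcript or compare for equality.
#[derive(Debug, PartialEq, BorshSerialize, BorshDeserialize)]
struct ExampleMessage {
  session: u32,
  attempt: u32,
  data: Vec<u8>,
}

fn main() {
  let msg = ExampleMessage { session: 1, attempt: 0, data: vec![1, 2, 3] };
  // Canonical encoding: equal values always serialize to equal bytes.
  let bytes = borsh::to_vec(&msg).unwrap();
  let decoded: ExampleMessage = borsh::from_slice(&bytes).unwrap();
  assert_eq!(msg, decoded);
}
```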

* Make serde optional, minimize usage

* Make borsh an optional dependency of substrate/ crates

* Remove unused dependencies

* Use [u8; 64] where possible in the processor messages

* Correct borsh feature flagging
2023-11-25 04:01:11 -05:00
Luke Parker
369af0fab5
#339 addendum 2023-11-15 20:23:19 -05:00
Luke Parker
96f1d26f7a
Add a cosigning protocol to ensure finalizations are unique (#433)
* Add a function to deterministically decide which Serai blocks should be co-signed

Has a 5 minute latency between co-signs, also used as the maximal latency
before a co-sign is started.

* Get all active tributaries we're in at a specific block

* Add and route CosignSubstrateBlock, a new provided TX

* Split queued cosigns per network

* Rename BatchSignId to SubstrateSignId

* Add SubstrateSignableId, a meta-type for either Batch or Block, and modularize around it

* Handle the CosignSubstrateBlock provided TX

* Revert substrate_signer.rs to develop (and patch to still work)

Due to SubstrateSigner moving when the prior multisig closes, yet cosigning
occurring with the most recent key, a single SubstrateSigner can be reused.
We could manage multiple SubstrateSigners, yet considering the much lower
specifications for cosigning, I'd rather treat it distinctly.

* Route cosigning through the processor

* Add note to rename SubstrateSigner post-PR

I don't want to do so now in order to preserve the diff's clarity.

* Implement cosign evaluation into the coordinator

* Get tests to compile

* Bug fixes, mark blocks without cosigners available as cosigned

* Correct the ID Batch preprocesses are saved under, add log statements

* Create a dedicated function to handle cosigns

* Correct the flow around Batch verification/queueing

Verifying `Batch`s could stall when a `Batch` was signed before its
predecessors/before the block it's contained in was cosigned (the latter being
inevitable as we can't sign a block containing a signed batch before signing
the batch).

Now, Batch verification happens on a distinct async task in order to not block
the handling of processor messages. This task is the sole caller of verify in
order to ensure last_verified_batch isn't unexpectedly mutated.

When the processor message handler needs to access it, or needs to queue a
Batch, it associates the DB TXN with a lock preventing the other task from
doing so.

This lock, as currently implemented, is a poor and inefficient design. It
should be modified to the pattern used for cosign management. Additionally, a
new primitive of a DB-backed channel may be immensely valuable.

Fixes a standing potential deadlock and a deadlock introduced with the
cosigning protocol.

* Working full-stack tests

After the last commit, this only required extending a timeout.

* Replace "co-sign" with "cosign" to make finding text easier

* Update the coordinator tests to support cosigning

* Inline prior_batch calculation to prevent panic on rotation

Noticed when doing a final review of the branch.
2023-11-15 16:57:21 -05:00
Luke Parker
54f1929078
Route blame between Processor and Coordinator (#427)
* Have processor report errors during the DKG to the coordinator

* Add RemoveParticipant, InvalidDkgShare to coordinator

* Route DKG blame around coordinator

* Allow public construction of AdditionalBlameMachine

Necessary for upcoming work on handling DKG blame in the processor and
coordinator.

Additionally fixes a publicly reachable panic when commitments parsed with one
ThresholdParams are used in a machine using another set of ThresholdParams.

Renames InvalidProofOfKnowledge to InvalidCommitments.

* Remove unused error from dleq

* Implement support for VerifyBlame in the processor

* Have coordinator send the processor share message relevant to Blame

* Remove desync between processors reporting InvalidShare and ones reporting GeneratedKeyPair

* Route blame on sign between processor and coordinator

Doesn't yet act on it in coordinator.

* Move txn usage as needed for stable Rust to build

* Correct InvalidDkgShare serialization
2023-11-12 07:24:41 -05:00
Luke Parker
c03fb6c71b
Add dedicated BatchSignId 2023-11-06 20:06:36 -05:00
Luke Parker
e05b77d830
Support multiple key shares per validator (#416)
* Update the coordinator to give key shares based on weight, not based on existence

Participants are now identified by their starting index. While this compiles,
the following is unimplemented:

1) A conversion for DKG `i` values. It assumes the threshold `i` values used
will be identical for the MuSig signature used to confirm the DKG.
2) Expansion from compressed values to full values before forwarding to the
processor.
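
A sketch of the indexing scheme described above (illustrative arithmetic, not the coordinator's actual code):

```rust
// Illustrative: with key shares allocated by weight, a participant is
// identified by the 1-indexed starting index of their contiguous run of
// shares, i.e. 1 + the sum of the weights of all participants before them.
fn starting_indices(weights: &[u16]) -> Vec<u16> {
  let mut next = 1;
  let mut starts = Vec::with_capacity(weights.len());
  for weight in weights {
    starts.push(next);
    next += *weight;
  }
  starts
}

fn main() {
  // Three validators with 2, 1, and 3 key shares respectively hold the
  // `i` values {1, 2}, {3}, and {4, 5, 6}.
  assert_eq!(starting_indices(&[2, 1, 3]), vec![1, 3, 4]);
}
```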

* Add a fn to the DkgConfirmer to convert `i` values as needed

Also removes TODOs regarding Serai ensuring validator key uniqueness +
validity. The current infra achieves both.

* Have the Tributary DB track participation by shares, not by count

* Prevent a node from obtaining 34% of the maximum amount of key shares

This is actually mainly intended to set a bound on message sizes in the
coordinator. Message sizes are amplified by the amount of key shares held, so
setting an upper bound on said amount lets it determine constants. While that
upper bound could be 150, that'd be unreasonable and increase the potential for
DoS attacks.

* Correct the mechanism to detect if sufficient accumulation has occurred

It used to check if the latest accumulation hit the required threshold. Now,
accumulations may jump past the required threshold. The required mechanism is
to check the threshold wasn't prior met and is now met.
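
In code, the corrected check is essentially (sketch):

```rust
// Illustrative: accumulation may jump from below the threshold to well past
// it, so check for the crossing rather than for exact equality with it.
fn newly_sufficient(prior_weight: u64, new_weight: u64, threshold: u64) -> bool {
  (prior_weight < threshold) && (new_weight >= threshold)
}

fn main() {
  // Jumping from 60 to 110 shares crosses a threshold of 100 exactly once.
  assert!(newly_sufficient(60, 110, 100));
  // Further accumulation past the threshold doesn't re-trigger.
  assert!(!newly_sufficient(110, 120, 100));
}
```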

* Finish updating the coordinator to handle a multiple key share per validator environment

* Adjust strategy re: preventing nonce reuse in DKG Confirmer

* Add TODOs regarding dropped transactions, add possible TODO fix

* Update tests/coordinator

This doesn't add new multi-key-share tests, it solely updates the existing
single key-share tests to compile and run, with the necessary fixes to the
coordinator.

* Update processor key_gen to handle generating multiple key shares at once

* Update SubstrateSigner

* Update signer, clippy

* Update processor tests

* Update processor docker tests
2023-11-04 19:26:13 -04:00
akildemir
fdfce9e207
Coins pallet (#399)
* initial implementation

* add function to get a balance of an account

* add support for multiple coins

* rename pallet to "coins-pallet"

* replace balances, assets and tokens pallet with coins pallet in runtime

* add total supply info

* update client side for new Coins pallet

* handle fees

* bug fixes

* Update FeeAccount test

* Fmt

* fix pr comments

* remove extraneous Imbalance type

* Minor tweaks

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-10-19 06:22:21 -04:00
Luke Parker
584943d1e9
Modify SubstrateBlockAck as needed
Replaces plan IDs with key + ID, letting the coordinator determine the sessions
for the plans.

Properly scopes which plan IDs are set on which tributaries, and ensures we
have the necessary tributaries at time of handling.
2023-10-14 20:37:54 -04:00
Luke Parker
0eff3d9453
Add Batch messages from processor, verify Batches published on-chain
Renames Update to SignedBatch.

Checks Batch equality via a hash of the InInstructions. That prevents needing
to keep the Batch in node state or to introspect the TX.
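
The idea, sketched with SHA-256 purely for illustration (the actual hash function and types differ):

```rust
use sha2::{Digest, Sha256};

// Illustrative: instead of storing every Batch (or introspecting the TX which
// published it), store a short hash of its InInstructions and compare that.
fn in_instructions_hash(in_instructions: &[Vec<u8>]) -> [u8; 32] {
  let mut hasher = Sha256::new();
  for instruction in in_instructions {
    // Length-prefix each instruction so the encoding is unambiguous.
    hasher.update((instruction.len() as u32).to_le_bytes());
    hasher.update(instruction);
  }
  hasher.finalize().into()
}

fn main() {
  let ours = vec![vec![1, 2, 3], vec![4, 5]];
  let on_chain = vec![vec![1, 2, 3], vec![4, 5]];
  // Equal InInstructions means the on-chain Batch matches the one we built.
  assert_eq!(in_instructions_hash(&ours), in_instructions_hash(&on_chain));
}
```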
2023-09-29 03:51:01 -04:00
Luke Parker
ca69f97fef
Add support for multiple multisigs to the processor (#377)
* Design and document a multisig rotation flow

* Make Scanner::eventualities a HashMap so it's per-key

* Don't drop eventualities, always follow through on them

Technical improvements made along the way.

* Start creating an isolate object to manage multisigs, which doesn't require being a signer

Removes key from SubstrateBlock.

* Move Scanner/Scheduler under multisigs

* Move Batch construction into MultisigManager

* Clarify "should" in Multisig Rotation docs

* Add block_number to MultisigManager, as it controls the scanner

* Move sign_plans into MultisigManager

Removes ThresholdKeys from prepare_send.

* Make SubstrateMutable an alias for MultisigManager

* Rewrite Multisig Rotation

The prior scheme had an exploit possible where funds were sent to the old
multisig, then burnt on Serai to send from the new multisig, locking liquidity
for 6 hours. While a fee could be applied to stragglers, to make this attack
unprofitable, the newly described scheme avoids all this.

* Add mini

mini is a miniature version of Serai, emphasizing Serai's nature as a
collection of independent clocks. The intended use is to identify race
conditions and prove protocols are comprehensive regarding when certain clocks
tick.

This uses loom, a prior candidate for evaluating the processor/coordinator as
free of race conditions (#361).

* Use mini to prove a race condition in the current multisig rotation docs, and prove safety of alternatives

Technically, the prior commit had mini prove the race condition.

The docs currently say the activation block of the new multisig is the block
after the next Batch's. If the two next Batches had already entered the
mempool, prior to set_keys being called, the second next Batch would be
expected to contain the new key's data yet fail to as the key wasn't public
when the Batch was actually created.

The naive solution is to create a Batch, publish it, wait until it's included,
and only then scan the next block. This sets a bound of
`Batch publication time < block time`. Optimistically, we can publish a Batch
in 24s while our shortest block time is 2m. Accordingly, we should be fine with
the naive solution which doesn't take advantage of throughput. #333 may
significantly change latency however and require an algorithm whose throughput
exceeds the rate of blocks created.

In order to re-introduce parallelization, enabling throughput, we need to
define a safe range of blocks to scan without Serai ordering the first one.
mini demonstrates safety of scanning n blocks Serai hasn't acknowledged, so
long as the first is scanned before block n+1 is (shifting the n-block window).

The docs will be updated next, to reflect this.

* Fix Multisig Rotation

I believe this is finally good enough to be final.

1) Fixes the race condition present in the prior document, as demonstrated by
mini.

`Batch`s for block `n` and `n+1`, may have been in the mempool when a
multisig's activation block was set to `n`. This would cause a potentially
distinct `Batch` for `n+1`, despite `n+1` already having a signed `Batch`.

2) Tightens when UIs should use the new multisig to prevent eclipse attacks,
and protection against `Batch` publication delays.

3) Removes liquidity fragmentation by tightening flow/handling of latency.

4) Several clarifications and documentation of reasoning.

5) Correction of "prior multisig" to "all prior multisigs" regarding historical
verification, with explanation why.

* Clarify terminology in mini

Synchronizes it from my original thoughts on potential schema to the design
actually created.

* Remove most of processor's README for a reference to docs/processor

This does drop some misc commentary, though none too beneficial. The section on
scanning, deemed sufficiently beneficial, has been moved to a document and
expanded on.

* Update scanner TODOs in line with new docs

* Correct documentation on Bitcoin::Block::time, and Block::time

* Make the scanner in MultisigManager no longer public

* Always send ConfirmKeyPair, regardless of if in-set

* Cargo.lock changes from a prior commit

* Add a policy document on defining a Canonical Chain

I accidentally committed a version of this with a few headers earlier, and this
is a proper version.

* Competent MultisigManager::new

* Update processor's comments

* Add mini to copied files

* Re-organize Scanner per multisig rotation document

* Add RUST_LOG trace targets to e2e tests

* Have the scanner wait once it gets too far ahead

Also bug fixes.

* Add activation blocks to the scanner

* Split received outputs into existing/new in MultisigManager

* Select the proper scheduler

* Schedule multisig activation as detailed in documentation

* Have the Coordinator assert if multiple `Batch`s occur within a block

While the processor used to have ack_up_to_block, enabling skips in the block
acked, support for this was removed while reworking it for multiple multisigs.
It should happen extremely infrequently.

While it would still be beneficial to have, if multiple `Batch`s could occur
within a block (with the complexity here not being worth adding that ban as a
policy), multiple `Batch`s were blocked for DoS reasons.

* Schedule payments to the proper multisig

* Correct >= to <

* Use the new multisig's key for change on schedule

* Don't report External TXs to prior multisig once deprecated

* Forward from the old multisig to the new one at all opportunities

* Move unfulfilled payments in queue from prior to new multisig

* Create MultisigsDb, splitting it out of MainDb

Drops the call to finish_signing from the Signer. While this will cause endless
re-attempts, the Signer will still consider them completed and drop them,
making this an O(n) cost at boot even if we did nothing from here.

The MultisigManager should call finish_signing once the Scanner completes the
Eventuality.

* Don't check Scanner-emitted completions, trust they are completions

Prevents needing to use async code to mark the completion and creates a
fault-free model. The current model, on fault, would cause a lack of marked
completion in the signer.

* Fix a possible panic in the processor

A shorter-chain reorg could cause this assert to trip. It's fixed by
de-duplicating the data, as the assertion checked consistency. Without the
potential for inconsistency, it's unnecessary.

* Document why an existing TODO isn't valid

* Change when we drop payments for being to the change address

The earlier timing prevents creating Plans solely to the branch address,
causing the payments to be dropped, and the TX to become an effective
aggregation TX.

* Extensively document solutions to Eventualities being potentially created after having already scanned their resolutions

* When closing, drop External/Branch outputs which don't cause progress

* Properly decide if Change outputs should be forward or not when closing

This completes all code needed to make the old multisig have a finite lifetime.

* Commentary on forwarding schemes

* Provide a 1 block window, with liquidity fragmentation risks, due to latency

On Bitcoin, this will be 10 minutes for the relevant Batch to be confirmed. On
Monero, 2 minutes. On Ethereum, ~6 minutes.

Also updates the Multisig Rotation document with the new forwarding plan.

* Implement transaction forwarding from old multisig to new multisig

Identifies a fault where Branch outputs which shouldn't be dropped may be, if
another output fulfills their next step. Locking Branch fulfillment down to
only Branch outputs is not done in this commit, but will be in the next.

* Only let Branch outputs fulfill branches

* Update TODOs

* Move the location of handling signer events to avoid a race condition

* Avoid a deadlock by using a RwLock on a single txn instead of two txns

* Move Batch ID out of the Scanner

* Increase from one block of latency on new keys activation to two

For Monero, this offered just two minutes when our latency to publish a Batch
is around a minute already. This does increase the time our liquidity can be
fragmented by up to 20 minutes (Bitcoin), yet it's a stupid attack only
possible once a week (when we rotate). Prioritizing normal users' transactions
not being subject to forwarding is more important here.

Ideally, we'd not do +2 blocks, yet rather +`time` (such as +10 minutes), making
this agnostic of the underlying network's block scheduling. This is a
complexity not worth it.

* Split MultisigManager::substrate_block into multiple functions

* Further tweaks to substrate_block

* Acquire a lock on all Scanner operations after calling ack_block

Gives time to call register_eventuality and initiate signing.

* Merge sign_plans into substrate_block

Also ensure the Scanner's lock isn't prematurely released.

* Use a HashMap to pass to-be-forwarded instructions, not the DB

* Successfully determine in ClosingExisting

* Move from 2 blocks of latency when rotating to 10 minutes

Superior as noted in 6d07af92ce10cfd74c17eb3400368b0150eb36d7, now trivial to
implement thanks to the prior commit.

* Add note justifying measuring time in blocks when rotating

* Implement delaying of outputs received early to the new multisig per specification

* Documentation on why Branch outputs don't have the race condition concerns Change do

Also ensures 6 hours is at least N::CONFIRMATIONS, for sanity purposes.

* Remove TODO re: sanity checking Eventualities

We sanity check the Plan the Eventuality is derived from, and the Eventuality
is handled moments later (in the same file, with a clear call path). There's no
reason to add such APIs to Eventualities for a sanity check given that.

* Add TODO(now) for TODOs which must be done in this branch

Also deprecates a pair of TODOs to TODO2, and accepts the flow of the Signer
having the Eventuality.

* Correct errors in potential/future flow descriptions

* Accept having a single Plan Vec

Per the following code consuming it, there's no benefit to bifurcating it by
key.

* Only issue sign_transaction on boot for the proper signer

* Only set keys when participating in their construction

* Misc progress

Only send SubstrateBlockAck when we have a signer, as it's only used to tell
the Tributary of what Plans are being signed in response to this block.

Only immediately sets substrate_signer if session is 0.

On boot, doesn't panic if we don't have an active key (as we wouldn't if only
joining the next multisig). Continues.

* Correctly detect and set retirement block

Modifies the retirement block from first block meeting requirements to block
CONFIRMATIONS after.

Adds an ack flow to the Scanner's Confirmed event and Block event to accomplish
this, which may deadlock at this time (will be fixed shortly).

Removes an invalid await (after a point declared unsafe to use await) from
MultisigsManager::next_event.

* Remove deadlock in multisig_completed and document alternative

The alternative is simpler, albeit less efficient. There's no reason to adopt
it now, yet perhaps if it benefits modeling?

* Handle the final step of retirement, dropping the old key and setting new to existing

* Remove TODO about emitting a Block on every step

If we emit on NewAsChange, we lose the purpose of the NewAsChange period.

The only concern is if we reach ClosingExisting, and nothing has happened, then
all coins will still be in the old multisig until something finally does. This
isn't a problem worth solving, as it's latency under exceptional dead time.

* Add TODO about potentially not emitting a Block event for the retirement block

* Restore accidentally deleted CI file

* Pair of slight tweaks

* Add missing if statement

* Disable an assertion when testing

One of the test flows currently abuses the Scanner in a way triggering it.
2023-09-25 09:48:15 -04:00
Luke Parker
69c3fad7ce
cargo fmt 2023-09-02 16:32:42 -04:00
Luke Parker
7d8e08d5b4
Use scale instead of bincode throughout processor-messages/processor DB
scale is canonical, bincode is not.
2023-09-02 07:54:09 -04:00
Luke Parker
bccdabb53d
Use a single Substrate signer, per intentions in #227
Removes key from Update as well, since it's no longer variable.
2023-08-24 20:30:50 -04:00
Luke Parker
61418b4e9f
Update Update and substrate_signers to [u8; 32] from Vec<u8>
A commit made while testing moved them from network-key-indexed to
Substrate-key-indexed. Since Substrate keys have a fixed length, fitting within
the Copy boundary, there's no reason for it to not use an array.
2023-08-24 13:24:56 -04:00
akildemir
e680eabb62
Improve batch handling (#316)
* restrict batch size to ~25kb

* add batch size check to node

* rate limit batches to 1 per serai block

* add support for multiple batches per block

* fix review comments

* Misc fixes

Doesn't yet update tests/processor until data flow is inspected.

* Move the block from SignId to ProcessorMessage::BatchPreprocesses

* Misc clean up

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-08-14 11:57:38 -04:00
Luke Parker
9f143a9742
Replace "coin" with "network"
The Processor's coins folder referred to the networks it could process, as did
its Coin trait. This, and other similar cases throughout the codebase, have now
been corrected.

Also corrects dated documentation for how a key pair is confirmed under the
validator-sets pallet.
2023-07-30 16:11:30 -04:00
Luke Parker
a2493cfafc
Sub-CoordinatorMessage -> CoordinatorMessage via From/Into 2023-07-25 17:33:05 -04:00
Luke Parker
807ec30762
Update the flow for completed signing processes
Now, an on-chain transaction exists. This resolves some ambiguities and
provides greater coordination.
2023-07-14 14:05:12 -04:00
Luke Parker
219adc7657
Rename uid to intent 2023-05-08 22:21:41 -04:00
Luke Parker
cc531d630e
Add a UID function to messages
When we receive messages, we're provided with a message ID we can use to
prevent handling an item multiple times. That doesn't prevent us from *sending*
an item multiple times though. Thanks to the UID system, we can now not send if
already present.

Alternatively, we can remove the ordered message ID for just the UID, allowing
duplicates to be sent without issue, and handled on the receiving end.
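
A sketch of the sending-side dedup this enables (the in-memory set stands in for what would really be DB-backed state; shapes are illustrative):

```rust
use std::collections::HashSet;

// Illustrative: before sending, check a persistent set of already-sent UIDs.
// The real code would back this set with the database, not memory.
struct Sender {
  already_sent: HashSet<[u8; 32]>,
}

impl Sender {
  fn send_if_new(&mut self, uid: [u8; 32], _message: &[u8]) -> bool {
    // `insert` returns false if the UID was already present, i.e. already sent.
    if !self.already_sent.insert(uid) {
      return false;
    }
    // ... actually queue/send the message here ...
    true
  }
}

fn main() {
  let mut sender = Sender { already_sent: HashSet::new() };
  assert!(sender.send_if_new([0; 32], b"hello"));
  // The same intent isn't sent twice.
  assert!(!sender.send_if_new([0; 32], b"hello"));
}
```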
2023-04-25 02:46:18 -04:00
Luke Parker
a404944b90
Add a SubstrateBlockAck message to the processor
When a Substrate block occurs, the coordinator is expected to emit
SubstrateBlock. This causes the processor to begin a variety of plans. The
processor now emits SubstrateBlockAck, explicitly listing all plan IDs, before
starting signing.

This lets the coordinator provide a SubstrateBlock transaction, and with it,
recognize all plan IDs as valid.

Prior, we would've had to have a spotty algorithm based upon the upcoming
Preprocess messages, or if we immediately provided the SubstrateBlock
transaction, then wait for the processor to inform us of the contained plans.

This creates an explicitly proper async flow not reliant on waiting for data
availability.

Alternatively, we could've replaced Preprocess with (Block, Vec<Preprocess>).
This would've been more efficient, yet also clunky due to the multiple usages
of the Preprocess message.
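
A sketch of the message's shape and the resulting flow (field names are assumptions):

```rust
// Illustrative shapes only; the real definitions live in processor-messages.
struct SubstrateBlockAck {
  /// The Serai block this acknowledges.
  block: u64,
  /// Every Plan the processor will sign in response to this block.
  plan_ids: Vec<[u8; 32]>,
}

fn handle_ack(ack: SubstrateBlockAck) {
  // The coordinator can now provide its SubstrateBlock transaction and treat
  // exactly these plan IDs as valid, before any Preprocess arrives.
  println!("block {} begins signing {} plans", ack.block, ack.plan_ids.len());
}

fn main() {
  handle_ack(SubstrateBlockAck { block: 100, plan_ids: vec![[0; 32]] });
}
```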
2023-04-20 15:26:22 -04:00
Luke Parker
396e5322b4
Code a method to determine the activation block before any block has consensus
[0; 32] is a magic value meaning no block has been set yet, due to this being the first
key pair. If [0; 32] is the latest finalized block, the processor determines
an activation block based on timestamps.

This doesn't use an Option for ergonomic reasons.
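
Sketched (illustrative, not the processor's literal code):

```rust
// Illustrative: [0; 32] stands in for "no prior key pair / no block set yet".
fn activation_block(
  latest_finalized: [u8; 32],
  block_number_by_time: impl Fn() -> u64,
  block_number_of: impl Fn([u8; 32]) -> u64,
) -> u64 {
  if latest_finalized == [0; 32] {
    // First key pair ever: fall back to picking a block by timestamp.
    block_number_by_time()
  } else {
    block_number_of(latest_finalized)
  }
}

fn main() {
  let by_time = || 1_000;
  let by_hash = |_hash| 2_000;
  assert_eq!(activation_block([0; 32], by_time, by_hash), 1_000);
  assert_eq!(activation_block([1; 32], by_time, by_hash), 2_000);
}
```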
2023-04-18 03:04:52 -04:00
Luke Parker
6f3b5f4535
Tweak ConfirmKeyPair to alleviate database requirements of coordinator 2023-04-18 01:09:22 -04:00
Luke Parker
7579c71765
Add note to processor_messages 2023-04-17 23:11:44 -04:00
Luke Parker
5a499de4ca
Remove BatchSigned
SubstrateBlock's provision of the most recently acknowledged block has
equivalent information with the same latency. Accordingly, there's no need for
it.
2023-04-17 20:19:15 -04:00
Luke Parker
e26b861d25
Move ConfirmKeyPair from key_gen to substrate
Clarifies the emitter and accordingly why its mutations are justified.
2023-04-17 19:40:17 -04:00
Luke Parker
b2169a7316
cargo +nightly fmt 2023-04-15 23:09:39 -04:00
Luke Parker
e2571a43aa
Correct processor flow to have the coordinator decide signing set/re-attempts
The signing set should be the first group to submit preprocesses to Tributary.
Re-attempts shouldn't be once every 30s, yet n blocks since the last relevant
message.

Removes the use of an async task/channel in the signer (and Substrate signer).
Also removes the need to be able to get the time from a coin's block, which was
a fragile system marked with a TODO already.
2023-04-15 23:01:07 -04:00
Luke Parker
e21fc5ff3c
Merge AckBlock with Burns
Offers greater efficiency while reducing concerns re: atomicity.
2023-04-15 18:38:40 -04:00
Luke Parker
d323fc8b7b
Handle signing batches in the processor
Duplicates the existing signer for one tailored to batch signing.
2023-04-10 11:11:46 -04:00
Luke Parker
b9f38fb354
Update processor message flow around the new SignedBatch flow 2023-04-10 02:51:36 -04:00
Luke Parker
426346dd5a
Have the processor DKG output a Ristretto key
This will be used to sign InInstructions.
2023-03-31 10:15:07 -04:00
Luke Parker
9157f8d0a0
Update processor/correct prior commit 2023-03-25 04:06:25 -04:00
Luke Parker
ba82dac18c
Processor (#259)
* Initial work on a message box

* Finish message-box (untested)

* Expand documentation

* Embed the recipient in the signature challenge

Prevents a message from A -> B from being read as from A -> C.

* Update documentation by bifurcating sender/receiver

* Panic on receiving an invalid signature

If we've received an invalid signature in an authenticated system, a 
service is malicious, critically faulty (equivalent to malicious), or 
the message layer has been compromised (or is otherwise critically 
faulty).

Please note a receiver who handles a message they shouldn't will trigger 
this. That falls under being critically faulty.

* Documentation and helper methods

SecureMessage::new and SecureMessage::serialize.

Secure Debug for MessageBox.

* Have SecureMessage not be serialized by default

Allows passing around in-memory, if desired, and moves the error from 
decrypt to new (which performs deserialization).

Decrypt no longer has an error since it panics if given an invalid 
signature, due to this being intranet code.

* Explain and improve nonce handling

Includes a missing zeroize call.

* Rebase to latest develop

Updates to transcript 0.2.0.

* Add a test for the MessageBox

* Export PrivateKey and PublicKey

* Also test serialization

* Add a key_gen binary to message_box

* Have SecureMessage support Serde

* Add encrypt_to_bytes and decrypt_from_bytes

* Support String ser via base64

* Rename encrypt/decrypt to encrypt_bytes/decrypt_to_bytes

* Directly operate with values supporting Borsh

* Use bincode instead of Borsh

By staying inside of serde, we'll support many more structs. While 
bincode isn't canonical, we don't need canonicity on an authenticated, 
internal system.

* Turn PrivateKey, PublicKey into structs

Uses Zeroizing for the PrivateKey per #150.

* from_string functions intended for loading from an env

* Use &str for PublicKey from_string (now from_str)

The PrivateKey takes the String to take ownership of its memory and 
zeroize it. That isn't needed with PublicKeys.

* Finish updating from develop

* Resolve warning

* Use ZeroizingAlloc on the key_gen binary

* Move message-box from crypto/ to common/

* Move key serialization functions to ser

* add/remove functions in MessageBox

* Implement Hash on dalek_ff_group Points

* Make MessageBox generic to its key

Exposes a &'static str variant for internal use and a RistrettoPoint 
variant for external use.

* Add Private to_string as deprecated

Stub before more competent tooling is deployed.

* Private to_public

* Test both Internal and External MessageBox, only use PublicKey in the pub API

* Remove panics on invalid signatures

Leftover from when this was solely internal which is now unsafe.

* Chicken scratch a Scanner task

* Add a write function to the DKG library

Enables writing directly to a file.

Also modifies serialize to return Zeroizing<Vec<u8>> instead of just Vec<u8>.

* Make dkg::encryption pub

* Remove encryption from MessageBox

* Use a 64-bit block number in Substrate

We use a 64-bit block number in general since u32 only works for 120 years
(with a 1 second block time). As some chains even push the 1 second threshold,
especially ones based on DAG consensus, this becomes potentially as low as 60
years.

While that should still be plenty, it's not worth wondering/debating. Since
Serai uses 64-bit block numbers elsewhere, this ensures consistency.

* Misc crypto lints

* Get the scanner scratch to compile

* Initial scanner test

* First few lines of scheduler

* Further work on scheduler, solidify API

* Define Scheduler TX format

* Branch creation algorithm

* Document when the branch algorithm isn't perfect

* Only scan confirmed blocks

* Document Coin

* Remove Canonical/ChainNumber from processor

The processor should be abstracted from canonical numbers thanks to the
coordinator, making this unnecessary.

* Add README documenting processor flow

* Use Zeroize on substrate primitives

* Define messages from/to the processor

* Correct over-specified versioning

* Correct build re: in_instructions::primitives

* Debug/some serde in crypto/

* Use a struct for ValidatorSetInstance

* Add a processor key_gen task

Redos DB handling code.

* Replace trait + impl with wrapper struct

* Add a key confirmation flow to the key gen task

* Document concerns on key_gen

* Start on a signer task

* Add Send to FROST traits

* Move processor lib.rs to main.rs

Adds a dummy main to reduce clippy dead_code warnings.

* Further flesh out main.rs

* Move the DB trait to AsRef<[u8]>

* Signer task

* Remove a panic in bitcoin when there's insufficient funds

Unchecked underflow.

* Have Monero's mine_block mine one block, not 10

It was initially a nicety to deal with the 10 block lock. C::CONFIRMATIONS
should be used for that instead.

* Test signer

* Replace channel expects with log statements

The expects weren't problematic and had nicer code. They just clutter test
output.

* Remove the old wallet file

It predates the coordinator design and shouldn't be used.

* Rename tests/scan.rs to tests/scanner.rs

* Add a wallet test

Complements the recently removed wallet file by adding a test for the scanner,
scheduler, and signer together.

* Work on a run function

Triggers a clippy ICE.

* Resolve clippy ICE

The issue was the non-fully specified lambda in signer.

* Add KeyGenEvent and KeyGenOrder

Needed so we get KeyConfirmed messages from the key gen task.

While we could've read the CoordinatorMessage to see that, routing through the
key gen tasks ensures we only handle it once it's been successfully saved to
disk.

* Expand scanner test

* Clarify processor documentation

* Have the Scanner load keys on boot/save outputs to disk

* Use Vec<u8> for Block ID

Much more flexible.

* Panic if we see the same output multiple times

* Have the Scanner DB mark itself as corrupt when doing a multi-put

This REALLY should be a TX. Since we don't have a TX API right now, this at
least offers detection.

* Have DST'd DB keys accept AsRef<[u8]>

* Restore polling all signers

Writes a custom future to do so.

Also loads signers on boot using what the scanner claims are active keys.

* Schedule OutInstructions

Adds a data field to Payment.

Also cleans some dead code.

* Panic if we create an invalid transaction

Saves the TX once it's successfully signed so if we do panic, we have a copy.

* Route coordinator messages to their respective signer

Requires adding key to the SignId.

* Send SignTransaction orders for all plans

* Add a timer to retry sign_plans when prepare_send fails

* Minor fmt'ing

* Basic Fee API

* Move the change key into Plan

* Properly route activation_number

* Remove ScannerEvent::Block

It's not used under current designs

* Nicen logs

* Add utilities to get a block's number

* Have main issue AckBlock

Also has a few misc lints.

* Parse instructions out of outputs

* Tweak TODOs and remove an unwrap

* Update Bitcoin max input/output quantity

* Only read one piece of data from Monero

Due to output randomization, it's infeasible.

* Embed plan IDs into the TXs they create

We need to stop attempting signing if we've already signed a protocol. Ideally,
any one of the participating signers should be able to provide a proof the TX
was successfully signed. We can't just run a second signing protocol though as
a single malicious signer could complete the TX signature, and publish it,
yet not complete the secondary signature.

The TX itself has to be sufficient to show that the TX matches the plan. This
is done by embedding the ID, so plans with matching addresses/amounts are
distinguished, and by allowing verification that a TX actually matches a set of
addresses/amounts.

For Monero, this will need augmenting with the ephemeral keys (or usage of a
static seed for them).

* Don't use OP_RETURN to encode the plan ID on Bitcoin

We can use the inputs to distinguish identical-output plans without issue.

* Update OP_RETURN data access

It's not required to be the last output.

* Add Eventualities to Monero

An Eventuality is an effective equivalent to a SignableTransaction. That is
declared not by the inputs it spends, yet the outputs it creates.
Eventualities are also bound to a 32-byte RNG seed, enabling usage of a
hash-based identifier in a SignableTransaction, allowing multiple
SignableTransactions with the same output set to have different Eventualities.

In order to prevent triggering the burning bug, the RNG seed is hashed with
the planned-to-be-used inputs' output keys. While this does bind to them, it's
only loosely bound. The TX actually created may use different inputs entirely
if a forgery is crafted (which requires no brute forcing).

Binding to the key images would provide a strong binding, yet would require
knowing the key images, which requires active communication with the spend
key.

The purpose of this is so a multisig can identify if a Transaction the entire
group planned has been executed by a subset of the group or not. Once a plan
is created, it can have an Eventuality made. The Eventuality's extra is able
to be inserted into a HashMap, so all new on-chain transactions can be
trivially checked as potential candidates. Once a potential candidate is found,
a check involving ECC ops can be performed.

While this is arguably a DoS vector, the underlying Monero blockchain would
need to be spammed with transactions to trigger it. Accordingly, it becomes
a Monero blockchain DoS vector, when this code is written on the premise
of the Monero blockchain functioning. Accordingly, it is considered handled.

If a forgery does match, it must have created the exact same outputs the
multisig would've. Accordingly, it's argued the multisig shouldn't mind.

This entire suite of code is only necessary due to the lack of outgoing
view keys, yet it's able to avoid an interactive protocol to communicate
key images on every single received output.

While this could be locked to the multisig feature, there's no practical
benefit to doing so.
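
A sketch of the candidate-filtering structure described above (the types are simplified stand-ins for the actual Monero ones):

```rust
use std::collections::HashMap;

// Simplified stand-ins for the real types.
struct Eventuality {
  expected_extra: Vec<u8>,
}

struct Transaction {
  extra: Vec<u8>,
}

fn matches_with_ecc_ops(eventuality: &Eventuality, tx: &Transaction) -> bool {
  // Stand-in for the real check, which involves elliptic curve operations.
  eventuality.expected_extra == tx.extra
}

fn scan_block(eventualities: &HashMap<Vec<u8>, Eventuality>, block: &[Transaction]) {
  for tx in block {
    // Cheap filter: only transactions whose extra matches some Eventuality's
    // expected extra are candidates, so most transactions cost a hash lookup.
    if let Some(eventuality) = eventualities.get(&tx.extra) {
      // Expensive confirmation only for candidates.
      if matches_with_ecc_ops(eventuality, tx) {
        println!("a planned transaction was completed by someone in the group");
      }
    }
  }
}

fn main() {
  let mut eventualities = HashMap::new();
  eventualities.insert(vec![1, 2, 3], Eventuality { expected_extra: vec![1, 2, 3] });
  scan_block(&eventualities, &[Transaction { extra: vec![1, 2, 3] }]);
}
```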

* Add support for encoding Monero address to instructions

* Move Serai's Monero address encoding into serai-client

serai-client is meant to be a single library enabling using Serai. While it was
originally written as an RPC client for Serai, apps actually using Serai will
primarily be sending transactions on connected networks. Sending those
transactions require proper {In, Out}Instructions, including proper address
encoding.

Not only has address encoding been moved, yet the subxt client is now behind
a feature. coin integrations have their own features, which are on by default.
primitives are always exposed.

* Reorganize file layout a bit, add feature flags to processor

* Tidy up ETH Dockerfile

* Add Bitcoin address encoding

* Move Bitcoin::Address to serai-client's

* Comment where tweaking needs to happen

* Add an API to check if a plan was completed in a specific TX

This allows any participating signer to submit the TX ID to prevent further
signing attempts.

Also performs some API cleanup.

* Minimize FROST dependencies

* Use a seeded RNG for key gen

* Tweak keys from Key gen

* Test proper usage of Branch/Change addresses

Adds a more descriptive error to an error case in decoys, and pads Monero
payments as needed.

* Also test spending the change output

* Add queued_plans to the Scheduler

queued_plans is for payments to be issued when an amount appears, yet the
amount is currently pre-fee. Once the output is actually created, the
Scheduler should be notified of the amount it was created with, moving from
queued_plans to plans under the actual amount.

Also tightens debug_asserts to asserts for invariants which are at risk of
being exclusive to prod.
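
A sketch of that movement (simplified types; the real Scheduler tracks considerably more):

```rust
use std::collections::HashMap;

// Simplified stand-in for a payment to eventually issue.
struct Payment {
  amount: u64,
}

// Illustrative: payments queued under the pre-fee amount they're waiting on.
// Once the branch output is actually created (post-fee), the queued plans move
// to `plans`, keyed under the amount the output really has.
#[derive(Default)]
struct Scheduler {
  queued_plans: HashMap<u64, Vec<Vec<Payment>>>,
  plans: HashMap<u64, Vec<Vec<Payment>>>,
}

impl Scheduler {
  fn created_output(&mut self, expected: u64, actual: u64) {
    if let Some(queued) = self.queued_plans.remove(&expected) {
      // The Scheduler is notified of the actual (post-fee) amount and files
      // the plans under it.
      self.plans.entry(actual).or_default().extend(queued);
    }
  }
}

fn main() {
  let mut scheduler = Scheduler::default();
  scheduler
    .queued_plans
    .entry(1_000)
    .or_default()
    .push(vec![Payment { amount: 600 }, Payment { amount: 400 }]);
  // The output was created with 990 after the fee was deducted.
  scheduler.created_output(1_000, 990);
  assert!(scheduler.queued_plans.is_empty());
  assert_eq!(scheduler.plans[&990].len(), 1);
  // The payments themselves still sum to the pre-fee amount at this point.
  assert_eq!(scheduler.plans[&990][0].iter().map(|p| p.amount).sum::<u64>(), 1_000);
}
```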

* Add missing tweak_keys call

* Correct decoy selection height handling

* Add a few log statements to the scheduler

* Simplify test's get_block_number

* Simplify, while making more robust, branch address handling in Scheduler

* Have fees deducted from payments

Corrects Monero's handling of fees when there's no change address.

Adds a DUST variable, as needed due to 1_00_000_000 not being enough to pay
its fee on Monero.

* Add comment to Monero

* Consolidate BTC/XMR prepare_send code

These aren't fully consolidated. We'd need a SignableTransaction trait for
that. This is a lot cleaner though.

* Ban integrated addresses

The reasoning why is accordingly documented.

* Tidy TODOs/dust handling

* Update README TODO

* Use a deterministic protocol version in Monero

* Test rebuilt KeyGen machines function as expected

* Use a more robust KeyGen entropy system

* Add DB TXNs

Also load entropy from env

* Add a loop for processing messages from substrate

Allows detecting if we're behind, and if so, waiting to handle the message

* Set Monero MAX_INPUTS properly

The previous number was based on an old hard fork. With the ring size having
increased, transactions have since got larger.

* Distinguish TODOs into TODO and TODO2s

TODO2s are for after protonet

* Zeroize secret share repr in ThresholdCore write

* Work on Eventualities

Adds serialization and stops signing when an eventuality is proven.

* Use a more robust DB key schema

* Update to {k, p}256 0.12

* cargo +nightly clippy

* cargo update

* Slight message-box tweaks

* Update to recent Monero merge

* Add a Coordinator trait for communication with coordinator

* Remove KeyGenHandle for just KeyGen

While KeyGen previously accepted instructions over a channel, this breaks the
ack flow needed for coordinator communication. Now, KeyGen is the direct object
with a handle() function for messages.

Thankfully, this ended up being rather trivial for KeyGen as it has no
background tasks.

* Add a handle function to Signer

Enables determining when it's finished handling a CoordinatorMessage and
therefore creating an acknowledgement.

* Save transactions used to complete eventualities

* Use a more intelligent sleep in the signer

* Emit SignedTransaction with the first ID *we can still get from our node*

* Move Substrate message handling into the new coordinator recv loop

* Add handle function to Scanner

* Remove the plans timer

Enables ensuring the ordering of the handling of plans.

* Remove the outputs function which panicked if a precondition wasn't met

The new API only returns outputs upon satisfaction of the precondition.

* Convert SignerOrder::SignTransaction to a function

* Remove the key_gen object from sign_plans

* Refactor out get_fee/prepare_send into dedicated functions

* Save plans being signed to the DB

* Reload transactions being signed on boot

* Stop reloading TXs being signed (and report it to peers)

* Remove message-box from the processor branch

We don't use it here yet.

* cargo +nightly fmt

* Move back common/zalloc

* Update subxt to 0.27

* Zeroize ^1.5, not 1

* Update GitHub workflow

* Remove usage of SignId in completed
2023-03-16 22:59:40 -04:00