* Rename the coins folder to networks
Ethereum isn't a coin. It's a network.
Resolves #357.
* More renames of coins -> networks in orchestration
* Correct paths in tests/
* cargo fmt
* Remove unsafe creation of dalek_ff_group::EdwardsPoint in BP+
* Rename Bulletproofs to Bulletproof, since they are a single Bulletproof
Also bifurcates prove with prove_plus, and adds a few documentation items.
* Make CLSAG signing private
Also adds a bit more documentation and does a bit more tidying.
* Remove the distribution cache
It's a notable bandwidth/performance improvement, yet it's not ready. We need a
dedicated Distribution struct which is managed by the wallet and passed in.
While we can do that now, it's not currently worth the effort.
* Tidy Borromean/MLSAG a tad
* Remove experimental feature from monero-serai
* Move amount_decryption into EncryptedAmount::decrypt
* Various RingCT doc comments
* Begin crate smashing
* Further documentation, start shoring up API boundaries of existing crates
* Document and clean clsag
* Add a dedicated send/recv CLSAG mask struct
Abstracts the types used internally.
Also moves the tests from monero-serai to monero-clsag.
* Smash out monero-bulletproofs
Removes usage of dalek-ff-group/multiexp for curve25519-dalek.
Makes compiling in the generators an optional feature.
Adds a structured batch verifier which should be notably more performant.
Documentation and clean up still necessary.
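As a rough illustration of the structured-batch-verifier idea (a sketch, not the actual monero-bulletproofs API; the names and types here are assumptions), each queued statement is weighted by a random scalar and everything is checked with a single multiscalar multiplication:
```rust
use rand_core::{RngCore, CryptoRng};
use curve25519_dalek::{
  traits::{Identity, VartimeMultiscalarMul},
  scalar::Scalar,
  edwards::EdwardsPoint,
};

// Hypothetical sketch of a structured batch verifier: statements accumulate
// (scalar, point) pairs, each statement weighted by a random scalar, and a
// single multiscalar multiplication verifies them all at once.
#[derive(Default)]
struct BatchVerifier {
  pairs: Vec<(Scalar, EdwardsPoint)>,
}

impl BatchVerifier {
  // Queue a statement which is valid iff the sum of scalar * point is the identity.
  fn queue<R: RngCore + CryptoRng>(
    &mut self,
    rng: &mut R,
    statement: Vec<(Scalar, EdwardsPoint)>,
  ) {
    // The random weight prevents invalid statements from cancelling each other out.
    let weight = Scalar::random(rng);
    self.pairs.extend(statement.into_iter().map(|(scalar, point)| (weight * scalar, point)));
  }

  // Verify every queued statement with one variable-time multiexp.
  fn verify(self) -> bool {
    let (scalars, points): (Vec<_>, Vec<_>) = self.pairs.into_iter().unzip();
    EdwardsPoint::vartime_multiscalar_mul(scalars, points) == EdwardsPoint::identity()
  }
}
```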
* Correct no-std builds for monero-clsag and monero-bulletproofs
* Tidy and document monero-bulletproofs
I still don't like the impl of the original Bulletproofs...
* Error if missing documentation
* Smash out MLSAG
* Smash out Borromean
* Tidy up monero-serai as a meta crate
* Smash out RPC, wallet
* Document the RPC
* Improve docs a bit
* Move Protocol to monero-wallet
* Incomplete work on using Option to remove panic cases
* Finish documenting monero-serai
* Remove TODO on reading pseudo_outs for AggregateMlsagBorromean
* Only read transactions with one Input::Gen or all Input::ToKey
Also adds a helper to fetch a transaction's prefix.
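A minimal sketch of the accepted input shapes (the real monero-serai `Input` type carries more fields; this is purely illustrative):
```rust
// Illustrative stand-in for monero-serai's Input; the real type has more data.
enum Input {
  Gen(u64),
  ToKey { key_image: [u8; 32] },
}

// A transaction is read only if it's a miner TX with exactly one Input::Gen,
// or if every input is an Input::ToKey.
fn inputs_accepted(inputs: &[Input]) -> bool {
  match inputs {
    [Input::Gen(_)] => true,
    inputs => !inputs.is_empty() && inputs.iter().all(|input| matches!(input, Input::ToKey { .. })),
  }
}
```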
* Smash out polyseed
* Smash out seed
* Get the repo to compile again
* Smash out Monero addresses
* Document cargo features
Credit to @hinto-janai for adding such sections to their work on documenting
monero-serai in #568.
* Fix deserializing v2 miner transactions
* Rewrite monero-wallet's send code
I have yet to redo the multisig code and the builder. This should be much
cleaner, albeit slower due to redoing work.
This compiles with clippy --all-features. I have to finish the multisig/builder
for --all-targets to work (and start updating the rest of Serai).
* Add SignableTransaction Read/Write
* Restore Monero multisig TX code
* Correct invalid RPC type def in monero-rpc
* Update monero-wallet tests to compile
Some are _consistently_ failing due to the inputs we attempt to spend being too
young. I'm unsure what's up with that. Most seem to pass _consistently_,
implying it's not a random issue but rather some configuration/env aspect.
* Clean and document monero-address
* Sync rest of repo with monero-serai changes
* Represent height/block number as a u32
* Diversify ViewPair/Scanner into ViewPair/GuaranteedViewPair and Scanner/GuaranteedScanner
Also cleans the Scanner impl.
* Remove non-small-order view key bound
Guaranteed addresses remain guaranteed even with this, as prefixing the key
image means zeroing the ECDH doesn't zero the shared key.
* Finish documenting monero-serai
* Correct imports for no-std
* Remove possible panic in monero-serai on systems < 32 bits
This was done by requiring that the system's usize can represent a certain number.
* Restore the reserialize chain binary
* fmt, machete, GH CI
* Correct misc TODOs in monero-serai
* Have Monero test runner evaluate an Eventuality for all signed TXs
* Fix a pair of bugs in the decoy tests
Unfortunately, this test is still failing.
* Fix remaining bugs in monero-wallet tests
* Reject torsioned spend keys to ensure we can spend the outputs we scan
* Tidy inlined epee code in the RPC
* Correct the accidental swap of stagenet/testnet address bytes
* Remove unused dep from processor
* Handle Monero fee logic properly in the processor
* Document v2 TX/RCT output relation assumed when scanning
* Adjust how we mine the initial blocks due to some CI test failures
* Fix weight estimation for RctType::ClsagBulletproof TXs
* Again increase the amount of blocks we mine prior to running tests
* Correct the if check about when to mine blocks on start
Finally fixes the lack of decoy candidates failures in CI.
* Run Monero on Debian, even for internal testnets
Change made due to a segfault incurred when locally testing.
https://github.com/monero-project/monero/issues/9141 for the upstream.
* Don't attempt running tests on the verify-chain binary
Adds a minimum XMR fee to the processor and runs fmt.
* Increase minimum Monero fee in processor
I'm truly unsure why this is required right now.
* Distinguish fee from necessary_fee in monero-wallet
If there's no change, the fee is the difference of the inputs to the outputs. The
prior code wouldn't check that amount is greater than or equal to the necessary
fee, and returning the would-be change amount as the fee isn't necessarily
helpful.
Now the fee is validated in such cases and the necessary fee is returned,
enabling operating off of that.
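A minimal sketch of the distinction, with hypothetical names:
```rust
// With no change output, the implicit fee is inputs - outputs. It must still be
// at least the necessary fee, and the necessary fee is what gets returned so
// callers can operate off the actual requirement.
fn fee_without_change(inputs: u64, outputs: u64, necessary_fee: u64) -> Result<u64, &'static str> {
  let implicit_fee = inputs.checked_sub(outputs).ok_or("outputs exceed inputs")?;
  if implicit_fee < necessary_fee {
    return Err("not enough left over to cover the necessary fee");
  }
  Ok(necessary_fee)
}
```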
* Restore minimum Monero fee from develop
* Clean up Ethereum
* Consistent contract address for deployed contracts
* Flesh out Router a bit
* Add a Deployer for DoS-less deployment
* Implement Router-finding
* Use CREATE2 helper present in ethers
* Move from CREATE2 to CREATE
Bit more streamlined for our use case.
* Document ethereum-serai
* Tidy tests a bit
* Test updateSeraiKey
* Use encodePacked for updateSeraiKey
* Take in the block hash to read state during
* Add a Sandbox contract to the Ethereum integration
* Add retrieval of transfers from Ethereum
* Add inInstruction function to the Router
* Augment our handling of InInstructions events with a check the transfer event also exists
* Have the Deployer error upon failed deployments
* Add --via-ir
* Make get_transaction test-only
We only used it to get transactions to confirm the resolution of Eventualities.
Eventualities need to be modularized. By introducing the dedicated
confirm_completion function, we remove the need for a non-test get_transaction
AND begin this modularization (by no longer explicitly grabbing a transaction
to check with).
* Modularize Eventuality
Almost fully-deprecates the Transaction trait for Completion. Replaces
Transaction ID with Claim.
* Modularize the Scheduler behind a trait
* Add an extremely basic account Scheduler
* Add nonce uses, key rotation to the account scheduler
* Only report the account Scheduler empty after transferring keys
Also ban payments to the branch/change/forward addresses.
* Make fns reliant on state test-only
* Start of an Ethereum integration for the processor
* Add a session to the Router to prevent updateSeraiKey replaying
This would only happen if an old key was rotated to again, which would require
n-of-n collusion (already ridiculous and a valid fault attributable event). It
just clarifies the formal arguments.
* Add a RouterCommand + SignMachine for producing it to coins/ethereum
* Ethereum which compiles
* Have branch/change/forward return an option
Also defines a UtxoNetwork extension trait for MAX_INPUTS.
* Make external_address exclusively a test fn
* Move the "account" scheduler to "smart contract"
* Remove ABI artifact
* Move refund/forward Plan creation into the Processor
We create forward Plans in the scan path, and need to know their exact fees in
the scan path. This requires adding a somewhat wonky shim_forward_plan method
so we can obtain a Plan equivalent to the actual forward Plan for fee reasons,
yet don't expect it to be the actual forward Plan (which may be distinct if
the Plan pulls from the global state, such as with a nonce).
Also properly types a Scheduler addendum such that the SC scheduler isn't
cramming the nonce to use into the N::Output type.
* Flesh out the Ethereum integration more
* Two commits ago, into the **Scheduler, not Processor
* Remove misc TODOs in SC Scheduler
* Add constructor to RouterCommandMachine
* RouterCommand read, pairing with the prior added write
* Further add serialization methods
* Have the Router's key included with the InInstruction
This does not use the key at the time of the event. This uses the key at the
end of the block for the event. It's much simpler than getting the full event
streams for each, checking when they interlace.
This does not read the state. Every block, this makes a request for every
single key update and simply chooses the last one. This allows pruning state,
only keeping the event tree. Ideally, we'd also introduce a cache to reduce the
cost of the filter (small in events yielded, long in blocks searched).
Since Serai doesn't have any forwarding TXs, nor Branches, nor change, all of
our Plans should solely have payments out, and there's no expectation of a Plan
being made under one key broken by it being received by another key.
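A sketch of the "choose the last one" approach under these assumptions (the real code filters key-update events via the RPC; the types here are illustrative):
```rust
// Given every key update up to and including the block, the key paired with the
// InInstruction is simply the most recent update (the key at the end of the block).
fn key_at_end_of_block(mut key_updates: Vec<(u64, [u8; 32])>) -> Option<[u8; 32]> {
  // (block number, key) pairs, possibly unordered as returned by the filter.
  key_updates.sort_by_key(|(block, _)| *block);
  key_updates.last().map(|(_, key)| *key)
}
```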
* Add read/write to InInstruction
* Abstract the ABI for Call/OutInstruction in ethereum-serai
* Fill out signable_transaction for Ethereum
* Move ethereum-serai to alloy
Resolves #331.
* Use the opaque sol macro instead of generated files
* Move the processor over to the now-alloy-based ethereum-serai
* Use the ecrecover provided by alloy
* Have the SC use nonce for rotation, not session (an independent nonce which wasn't synchronized)
* Always use the latest keys for SC scheduled plans
* get_eventuality_completions for Ethereum
* Finish fleshing out the processor Ethereum integration as needed for serai-processor tests
This doesn't support any actual deployments, not even the ones simulated by
serai-processor-docker-tests.
* Add alloy-simple-request-transport to the GH workflows
* cargo update
* Clarify a few comments and make one check more robust
* Use a string for 27.0 in .github
* Remove optional from no-longer-optional dependencies in processor
* Add alloy to git deny exception
* Fix no longer optional specification in processor's binaries feature
* Use a version of foundry from 2024
* Correct fetching Bitcoin TXs in the processor docker tests
* Update rustls to resolve RUSTSEC warnings
* Use the monthly nightly foundry, not the deleted daily nightly
* Monero: fix decoy selection algo and add test for latest spendable
- DSA only selected coinbase outputs and didn't match the wallet2
implementation
- Added test to make sure DSA will select a decoy output from the
most recent unlocked block
- Made usage of "height" in DSA consistent with other usage of
"height" in Monero code (height == num blocks in chain)
- Rely on monerod RPC response for output's unlocked status
* xmr runner tests mine until outputs are unlocked
* fingerprintable canonical select decoys
* Separate fingerprintable canonical function
Makes it simpler for callers who are unconcerned with consistent
canonical output selection across multiple clients to rely on
the simpler Decoy::select and not worry about fingerprintable
canonical selection.
* fix merge conflicts
* Put back TODO for issue #104
* Fix incorrect check on distribution len
The RingCT distribution on mainnet doesn't start until well after
genesis, so the distribution length is expected to be < height.
To be clear, this was my mistake from this series of changes
to the DSA. I noticed this mistake because the DSA would error
when running on mainnet.
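A sketch of the corrected sanity check, assuming the prior code required the distribution length to equal the height:
```rust
// The RingCT output distribution only starts at the first RingCT-enabled block,
// so on mainnet its length is less than the chain height rather than equal to it.
fn distribution_len_sane(distribution_len: usize, height: usize) -> bool {
  distribution_len <= height
}
```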
* monero: Use fee priority enums from monero repo CLI/RPC wallets
* Update processor for fee priority change
* Remove FeePriority::Default
Done in consultation with @j-berman.
The RPC/CLI/GUI almost always adjust up except barring very explicit commands,
hence why FeePriority 0 is now only exposed via the explicit command of
FeePriority::Custom { priority: 0 }.
Also helps with terminology.
---------
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
* add input script check
* add test
* optimizations
* bug fix
* fix pr comments
* Test SegWit-encoded data using a single output (not two)
* Remove TODO used as a question, document origins when SegWit encoding
---------
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
Moves from concatted Dockerfiles to pseudo-templated Dockerfiles via a dedicated Rust program.
Removes the unmaintained kubernetes, not because we shouldn't have/use it, but because it's unmaintained and needs to be reworked before it's present again.
Replaces the compose with the work in the new orchestrator binary which spawns everything as expected. While this arguably re-invents the wheel, it correctly manages secrets and handles the variadic Dockerfiles.
Also adds an unrelated patch for zstd and simplifies running services a bit by greater utilizing the existing infrastructure.
---
* Delete all Dockerfile fragments, add new orchestator to generate Dockerfiles
Enables greater templating.
Also delete the unmaintained kubernetes folder *for now*. This should be
restored in the future.
* Use Dockerfiles from the orchestator
* Ignore Dockerfiles in the git repo
* Remove CI job to check Dockerfiles are as expected now that they're no longer committed
* Remove old Dockerfiles from repo
* Use Debian for monero-wallet-rpc
* Remove replace_cmds for proper usage of entry-dev
Consolidates ports a bit.
Updates serai-docker-tests from "compose" to "build".
* Only write a new dockerfile if it's distinct
Preserves the updated time metadata.
* Update serai-docker-tests
* Correct the path Dockerfiles are built from
* Correct inclusion of orchestration folder in Docker builds
* Correct debug/release flagging in the cargo command
Apparently, --debug isn't an effective NOP but rather an error.
* Correct path used to run the Serai node within a Dockerfile
* Correct path in Monero Dockerfile
* Attempt storing monerod in /usr/bin
* Use sudo to move into /usr/bin in CI
* Correct 18.3.0 to 18.3.1
* Escape * with quotes
* Update deny.toml, ADD orchestration in runtime Dockerfile
* Add --detach to the Monero GH CI
* Diversify dockerfiles by network
* Fixes to network-diversified orchestration
* Bitcoin and Monero testnet scripts
* Permissions and tweaks
* Flatten scripts folders
* Add missing folder specification to Monero Dockerfile
* Have monero-wallet-rpc specify the monerod login
* Have the Docker CMD specify env variables inserted at time of Dockerfile generation
They're overridable with the global environment as for tests. This enables
variable generation in orchestrator and output to productionized Docker files
without creating a life-long file within the Docker container.
* Don't add Dockerfiles into Docker containers now that they have secrets
Solely add the source code for them as needed to satisfy the workspace bounds.
* Download arm64 Monero on arm64
* Ensure constant host architecture when reproducibly building the wasm
Host architecture, for some reason, can affect the generated code despite the
target architecture always being foreign to the host architecture.
* Randomly generate infrastructure keys
* Have orchestrator generate a key, be able to create/start containers
* Ensure bash is used over sh
* Clean dated docs
* Change how quoting occurs
* Standardize to sh
* Have Docker test build the dev Dockerfiles
* Only key_gen once
* cargo update
Adds a patch for zstd and reconciles the breaking nightly change which just
occurred.
* Use a dedicated network for Serai
Also fixes SERAI_HOSTNAME passed to coordinator.
* Support providing a key over the env for the Serai node
* Enable and document running daemons for tests via serai-orchestrator
Has running containers under the dev network port forward the RPC ports.
* Use volumes for bitcoin/monero
* Use bitcoin's run.sh in GH CI
* Only use the volume for testnet (not dev)
* report_slashes plumbing in Substrate
Notably delays the SetRetired event until it provides a slash report or the set
after it becomes the set to report its slashes.
* Add dedicated AcceptedHandover event
* Add SlashReport TX to Tributary
* Create SlashReport TXs
* Handle SlashReport TXs
* Add logic to generate a SlashReport to the coordinator
* Route SlashReportSigner into the processor
* Finish routing the SlashReport signing/TX publication
* Add serai feature to processor's serai-client
* Use an extended timeout for DKGs specifically
* Add a log statement when message-queue connection fails
* Add a 60 second keep-alive to connections
* Use zalloc for processor/message-queue/coordinator
An additional layer which protects us against edge cases with Zeroizing
(objects which don't support it or don't miss it).
* Add further logs to message-queue
* Further increase re-attempt timeouts in CI
* Remove misplaced continue in message-queue client
Fixes observed CI failures.
* Revert "Further increase re-attempt timeouts in CI"
This reverts commit 3723530cf6.
With a DKG removal comes a reduction in the number of participants, which was
ignored by re-attempts.
Now, we determine n/i based on the parties removed, and deterministically
obtain the context of who was removed.
* Schedule re-attempts and add a (not filled out) match statement to actually execute them
A comment explains the methodology. To copy it here:
"""
This is because we *always* re-attempt any protocol which had participation. That doesn't
mean we *should* re-attempt this protocol.
The alternatives were:
1) Note on-chain we completed a protocol, halting re-attempts upon 34%.
2) Vote on-chain to re-attempt a protocol.
This schema doesn't have any additional messages upon the success case (whereas
alternative #1 does) and doesn't have overhead (as alternative #2 does, sending votes and
then preprocesses. This only sends preprocesses).
"""
Any signing protocol which reaches sufficient participation will be
re-attempted until it no longer does.
* Have the Substrate scanner track DKG removals/completions for the Tributary code
* Don't keep trying to publish a participant removal if we've already set keys
* Pad out the re-attempt match a bit more
* Have CosignEvaluator reload from the DB
* Correctly schedule cosign re-attempts
* Actually spawn new DKG removal attempts
* Use u32 for Batch ID in SubstrateSignableId, finish Batch re-attempt routing
The batch ID was an opaque [u8; 5] which also included the network, yet that's
redundant and unhelpful.
* Clarify a pair of TODOs in the coordinator
* Remove old TODO
* Final comment cleanup
* Correct usage of TARGET_BLOCK_TIME in reattempt scheduler
It's in ms and I assumed it was in s.
* Have coordinator tests drop BatchReattempts which aren't relevant yet may exist
* Bug fix and pointless oddity removal
We scheduled a re-attempt upon receiving 2/3rds of preprocesses and upon
receiving 2/3rds of shares, so any signing protocol could cause two re-attempts
(not one more).
The coordinator tests randomly generated the Batch ID since it was prior an
opaque byte array. While that didn't break the test, it was pointless and did
make the already-succeeded check before re-attempting impossible to hit.
* Add log statements, correct dead-lock in coordinator tests
* Increase pessimistic timeout on recv_message to compensate for tighter best-case timeouts
* Further bump timeout by a minute
AFAICT, GH failed by just a few seconds.
This also is worst-case in a single instance, making it fine to be decently long.
* Further further bump timeout due to lack of distinct error
* Move logic for evaluating if a cosign should occur to its own file
Cleans it up and makes it more robust.
* Have expected_next_batch return an error instead of retrying
While convenient to offer an error-free implementation, it potentially caused
very long lived lock acquisitions in handle_processor_message.
* Unify and clean DkgConfirmer and DkgRemoval
Does so via adding a new file for the common code, SigningProtocol.
Modifies from_cache to return the preprocess with the machine, as there's no
reason not to. Also removes an unused Result around the type.
Clarifies the security around deterministic nonces, removing them for
saved-to-disk cached preprocesses. The cached preprocesses are encrypted as the
DB is not a proper secret store.
Moves arguments always present in the protocol from function arguments into the
struct itself.
Removes the horribly ugly code in DkgRemoval, fixing multiple issues present
with it which would cause it to fail on use.
* Set SeraiBlockNumber in cosign.rs as it's used by the cosigning protocol
* Remove unnecessary Clone from lambdas in coordinator
* Remove the EventDb from Tributary scanner
We used per-Transaction DB TXNs so on error, we don't have to rescan the entire
block yet only the rest of it. We prevented scanning multiple transactions by
tracking which we already had.
This is over-engineered and not worth it.
* Implement borsh for HasEvents, removing the manual encoding
* Merge DkgConfirmer and DkgRemoval into signing_protocol.rs
Fixes a bug in DkgConfirmer which would cause it to improperly handle indexes
if any validator had multiple key shares.
* Strictly type DataSpecification's Label
* Correct threshold_i_map_to_keys_and_musig_i_map
It didn't include the participant's own index and accordingly was offset.
* Create TributaryBlockHandler
This struct contains all variables prior passed to handle_block and stops them
from being passed around again and again.
This also ensures fatal_slash is only called while handling a block, as needed
as it expects to operate under perfect consensus.
* Inline accumulate, store confirmation nonces with shares
Inlining accumulate makes sense due to the amount of data accumulate needed to
be passed.
Storing confirmation nonces with shares ensures that both are available or
neither. Prior, one could be yet the other may not have been (requiring an
assert in runtime to ensure we didn't bungle it somehow).
* Create helper functions for handling DkgRemoval/SubstrateSign/Sign Tributary TXs
* Move Label into SignData
All of our transactions which use SignData end up with the same common usage
pattern for Label, justifying this.
Removes 3 transactions, explicitly de-duplicating their handlers.
* Remove CurrentlyCompletingKeyPair for the non-contextual DkgKeyPair
* Remove the manual read/write for TributarySpec for borsh
This struct doesn't have any optimizations booned by the manual impl. Using
borsh reduces our scope.
* Use temporary variables to further minimize LoC in tributary handler
* Remove usage of tuples for non-trivial Tributary transactions
* Remove serde from dkg
serde could be used to deserialize internally inconsistent objects which could
lead to panics or faults.
The BorshDeserialize derives have been replaced with a manual implementation
which won't produce inconsistent objects.
* Abstract Future generics using new trait definitions in coordinator
* Move published_signed_transaction to tributary/mod.rs to reduce the size of main.rs
* Split coordinator/src/tributary/mod.rs into spec.rs and transaction.rs
* Make it clear not providing a change address is fingerprintable
When no change address is provided, all change is shunted to the
fee. This PR makes it clear to the caller that it is fingerprintable
when the caller does this.
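One possible API shape which makes this explicit (a sketch, not necessarily monero-wallet's exact definition):
```rust
// `Addr` stands in for the real address type.
enum Change<Addr> {
  /// Change is sent to this address, as usual.
  To(Addr),
  /// No change output is created and the difference is shunted to the fee,
  /// which is fingerprintable on-chain; callers must opt into this explicitly.
  Fingerprintable(Option<Addr>),
}
```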
* Review comments
They're a bit more binding, smaller, provided by the Rust bitcoin library,
sane, and we don't have to worry about malleability since all of our inputs are
SegWit.
* Use redb and in Dockerfiles
The motivation for redb was to remove the multiple rocksdb compile times from
CI.
* Correct feature flagging of coordinator and message-queue in Dockerfiles
* Correct message-queue DB type alias
* Use consistent table typing in redb
* Correct rebase artifacts
* Correct removal of binaries feature from message-queue
* Correct processor feature flagging
* Replace redb with parity-db
It still has much better compile times yet doesn't block when creating multiple
transactions. It also is actively maintained and doesn't grow our tree. The MPT
aspects are irrelevant.
* Correct stray Redb
* clippy warning
* Correct txn get
* Use debug builds in our Dockerfiles to reduce CI times
Also enables only spawning the mdns service when debug in the coordinator.
* Correct underflow in processor
Prior undetected due to release builds not having bounds checks enabled.
* Restore Serai release due to CI/RPC failures caused by compiling it in debug mode
This is *probably* worth an issue filed upstream, if it can be tracked down.
* Correct failing debug asserts in Monero
These debug asserts assumed there was a change address to take the remainder.
If there's no change address, the remainder is shunted to the fee, causing the
fee to be distinct from the estimate.
We presumably need to modify monero-serai such that change: None isn't valid,
and users must use Change::Fingerprintable(None).
* Remove NetworkId from processor-messages
Because intent binds to the sender/receiver, it's not needed for intent.
The processor knows what the network is.
The coordinator knows which to use because it's sending this message to the
processor for that network.
Also removes the unused zeroize.
* ProcessorMessage::Completed use Session instead of key
* Move SubstrateSignId to Session
* Finish replacing key with session
Monero doesn't assert the time increases with each block, solely that it
doesn't decrease. Now, the block number is added to the time to ensure it
increases.
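A sketch of the workaround, assuming block number and time both fit in a u64:
```rust
// Monero only guarantees block time doesn't decrease; adding the strictly
// increasing block number yields a strictly increasing value.
fn monotonic_time(block_number: u64, block_time: u64) -> u64 {
  block_time + block_number
}
```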
processor isn't intended to be used as a library, yet serai-processor-tests
does pull it in as a lib. This caused serai-processor-tests to need to compile
rocksdb, which added multiple minutes to the compilation time.
* Add SignalsConfig to chain_spec
* Correct multiexp feature flagging for rand_core std
* Remove bincode for borsh
Replaces a non-canonical encoding with a canonical encoding which additionally
should be faster.
Also fixes an issue where we used bincode in transcripts where it cannot be
trusted.
This ended up fixing a myriad of other bugs observed, unfortunately.
Accordingly, it either has to be merged or the bug fixes from it must be ported
to a new PR.
* Make serde optional, minimize usage
* Make borsh an optional dependency of substrate/ crates
* Remove unused dependencies
* Use [u8; 64] where possible in the processor messages
* Correct borsh feature flagging
* implement db macro for processor/substrate_signer
* Use ()
* Correct AttemptDb usage of ()
* () -> &()
---------
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
* Add a function to deterministically decide which Serai blocks should be co-signed
Has a 5 minute latency between co-signs, also used as the maximal latency
before a co-sign is started.
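A minimal sketch of the deterministic decision under the stated 5 minute window (names are assumptions):
```rust
const COSIGN_WINDOW_SECS: u64 = 5 * 60;

// A block should be cosigned once the maximal latency since the last cosigned
// block's time has elapsed.
fn should_cosign(block_time: u64, last_cosigned_block_time: u64) -> bool {
  block_time.saturating_sub(last_cosigned_block_time) >= COSIGN_WINDOW_SECS
}
```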
* Get all active tributaries we're in at a specific block
* Add and route CosignSubstrateBlock, a new provided TX
* Split queued cosigns per network
* Rename BatchSignId to SubstrateSignId
* Add SubstrateSignableId, a meta-type for either Batch or Block, and modularize around it
* Handle the CosignSubstrateBlock provided TX
* Revert substrate_signer.rs to develop (and patch to still work)
Due to SubstrateSigner moving when the prior multisig closes, yet cosigning
occurring with the most recent key, a single SubstrateSigner can be reused.
We could manage multiple SubstrateSigners, yet considering the much lower
specifications for cosigning, I'd rather treat it distinctly.
* Route cosigning through the processor
* Add note to rename SubstrateSigner post-PR
I don't want to do so now in order to preserve the diff's clarity.
* Implement cosign evaluation into the coordinator
* Get tests to compile
* Bug fixes, mark blocks without cosigners available as cosigned
* Correct the ID Batch preprocesses are saved under, add log statements
* Create a dedicated function to handle cosigns
* Correct the flow around Batch verification/queueing
Verifying `Batch`s could stall when a `Batch` was signed before its
predecessors/before the block it's contained in was cosigned (the latter being
inevitable as we can't sign a block containing a signed batch before signing
the batch).
Now, Batch verification happens on a distinct async task in order to not block
the handling of processor messages. This task is the sole caller of verify in
order to ensure last_verified_batch isn't unexpectedly mutated.
When the processor message handler needs to access it, or needs to queue a
Batch, it associates the DB TXN with a lock preventing the other task from
doing so.
This lock, as currently implemented, is a poor and inefficient design. It
should be modified to the pattern used for cosign management. Additionally, a
new primitive of a DB-backed channel may be immensely valuable.
Fixes a standing potential deadlock and a deadlock introduced with the
cosigning protocol.
* Working full-stack tests
After the last commit, this only required extending a timeout.
* Replace "co-sign" with "cosign" to make finding text easier
* Update the coordinator tests to support cosigning
* Inline prior_batch calculation to prevent panic on rotation
Noticed when doing a final review of the branch.
* Have processor report errors during the DKG to the coordinator
* Add RemoveParticipant, InvalidDkgShare to coordinator
* Route DKG blame around coordinator
* Allow public construction of AdditionalBlameMachine
Necessary for upcoming work on handling DKG blame in the processor and
coordinator.
Additionally fixes a publicly reachable panic when commitments parsed with one
ThresholdParams are used in a machine using another set of ThresholdParams.
Renames InvalidProofOfKnowledge to InvalidCommitments.
* Remove unused error from dleq
* Implement support for VerifyBlame in the processor
* Have coordinator send the processor share message relevant to Blame
* Remove desync between processors reporting InvalidShare and ones reporting GeneratedKeyPair
* Route blame on sign between processor and coordinator
Doesn't yet act on it in coordinator.
* Move txn usage as needed for stable Rust to build
* Correct InvalidDkgShare serialization
If a user transferred in without an InInstruction, and the amount exactly
matched a forwarded output, the user's output would fulfill the
forwarding. Then the forwarded output would come along, have no InInstruction,
and be refunded (to the prior multisig) when the user should've been refunded.
Adding this new address type resolves such concerns.
The higher-level scanner code in multisigs/mod.rs now creates a series of plans
with limited context. These include forwarding and refunding plans, moving all
handling of forwarding flags on the scanner's clock and therefore safe.
Also simplifies the refunding a decent bit.
This code is still largely designed around the idea a payment for a network is
fungible with any other, which isn't true. This starts moving past that.
Asserts are added to ensure the integrity of coin to the scheduler (which is
now per key per coin, not per key alone) and in Bitcoin/Monero prepare_send.
The prior system spawned a new connection per request to enable parallelism,
yet kept hitting hyper::IncompleteMessages I couldn't track down. This
attempts to resolve those by a long-lived socket.
Halves the amount of requests per-authenticated RPC call, and accordingly is
likely still better overall.
I don't believe this is resolved yet but this is still worth pushing.
Removes bitcoin-serai's usage of sha2 for bitcoin-hashes. While sha2 is still
in play due to modular-frost (more specifically, due to ciphersuite), this
offers a bit more performance (assuming equivalency between sha2 and
bitcoin-hashes' impl) due to removing a static for a const.
Makes secp256k1 a dev dependency for bitcoin-serai. While secp256k1 is still
pulled in via bitcoin, it's hopefully slightly better to compile now and makes
usage of secp256k1 an implementation detail of bitcoin (letting it change it
freely).
Also offers slightly more efficient signing as we don't decode to a signature
just to re-encode for the transaction.
Removes a 20s sleep for a check every second, up to 20 times, for reduced test
times in the processor.
* Move pallet-asset-conversion
* update licensing
* initial integration
* Integrate Currency & Assets types
* integrate liquidity tokens
* fmt
* integrate dex pallet tests
* fmt
* compilation error fixes
* integrate dex benchmarks
* fmt
* cargo clippy
* replace all occurrences of "asset" with "coin"
* add the actual add liq/swap logic to in-instructions
* add client side & tests
* fix deny
* Lint and changes
- Renames InInstruction::AddLiquidity to InInstruction::SwapAndAddLiquidity
- Makes create_pool an internal function
- Makes dex-pallet exclusively create pools against a native coin
- Removes various fees
- Adds new crates to GH workflow
* Fix rebase artifacts
* Correct other rebase artifact
* Correct CI specification for liquidity-tokens
* Correct primitives' test to the standardized pallet account scheme
---------
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
* db_macro
* wip: converted processor/key_gen to use create_db macro
* wip: converted processor/key_gen to use create_db macro
* wip: formatting
* fix: added no_run to doc
* fix: documentation example had extra parentheses
* fix: ignore doc test entirely
* Corrections from rebasing
* Misc lint
---------
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
* Update the coordinator to give key shares based on weight, not based on existence
Participants are now identified by their starting index. While this compiles,
the following is unimplemented:
1) A conversion for DKG `i` values. It assumes the threshold `i` values used
will be identical for the MuSig signature used to confirm the DKG.
2) Expansion from compressed values to full values before forwarding to the
processor.
* Add a fn to the DkgConfirmer to convert `i` values as needed
Also removes TODOs regarding Serai ensuring validator key uniqueness +
validity. The current infra achieves both.
* Have the Tributary DB track participation by shares, not by count
* Prevent a node from obtaining 34% of the maximum amount of key shares
This is actually mainly intended to set a bound on message sizes in the
coordinator. Message sizes are amplified by the amount of key shares held, so
setting an upper bound on said amount lets it determine constants. While that
upper bound could be 150, that'd be unreasonable and increase the potential for
DoS attacks.
* Correct the mechanism to detect if sufficient accumulation has occurred
It used to check if the latest accumulation hit the required threshold. Now,
accumulations may jump past the required threshold. The required mechanism is
to check the threshold wasn't prior met and is now met.
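A sketch of the corrected check, with hypothetical weights:
```rust
// Accumulations are weighted by key shares, so a single accumulation may jump
// past the threshold. The correct check is that the threshold wasn't previously
// met yet is met now, not that the latest accumulation lands exactly on it.
fn just_crossed_threshold(prior_weight: u64, added_weight: u64, threshold: u64) -> bool {
  (prior_weight < threshold) && ((prior_weight + added_weight) >= threshold)
}
```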
* Finish updating the coordinator to handle a multiple key share per validator environment
* Adjust strategy re: preventing nonce reuse in DKG Confirmer
* Add TODOs regarding dropped transactions, add possible TODO fix
* Update tests/coordinator
This doesn't add new multi-key-share tests, it solely updates the existing
single key-share tests to compile and run, with the necessary fixes to the
coordinator.
* Update processor key_gen to handle generating multiple key shares at once
* Update SubstrateSigner
* Update signer, clippy
* Update processor tests
* Update processor docker tests
If a crate has std set, it should enable std for all dependencies in order to
let them properly select which algorithms to use. Some crates fallback to
slower/worse algorithms on no-std.
Also more aggressively sets default-features = false leading to a *10%*
reduction in the amount of crates coordinator builds.
Even though the intent was to test against 0.17.3.2, and a Monero 0.17.3.2 node
was running, the processor now uses docker which will always use 0.18.
Accordingly, while the intent was valid, it was pointless.
This is unfortunate, as testing against 0.17 helped protect against edge cases.
The infra to preserve their tests isn't worth the benefit we'd gain from said
tests however.
The lack of locking the connection when making an authenticated request, which
is actually two sequential requests, risked another caller making a request in
between, invalidating the state.
Now, only unauthenticated connections share a connection object.
Disables the unused zmq RPC.
Removes authentication which seems to be unstable as hell when under load
(see #351).
No longer use Network::Isolated as it's not needed here (the Monero nodes run
with `--offline`).
Also halves the minimum fee policy, which still may be 2x-4x higher than
necessary due to API limitations within bitcoin-serai (which we can fix as it's
within our scope).
Monero would select decoys with a new RNG seed, which may have used more bytes,
increasing the fee.
There's a few comments here.
1) Non-determinism wasn't removed via distinguishing the edits. It was done by
removing part of the transcript. A TODO exists to improve this.
2) Distinct TX fees is a test failure, not an issue in prod *unless* the distinct
fee is greater. So long as the distinct fee is lesser, it's fine.
3) Removing outputs is expected to only decrease fees.
The existing code should've mostly handled this fine. Only a single edge case
(TX fee reduction on no-change Plans) would cause an improper increase in
operating costs.
* initial implementation
* add function to get a balance of an account
* add support for multiple coins
* rename pallet to "coins-pallet"
* replace balances, assets and tokens pallet with coins pallet in runtime
* add total supply info
* update client side for new Coins pallet
* handle fees
* bug fixes
* Update FeeAccount test
* Fmt
* fix pr comments
* remove extraneous Imbalance type
* Minor tweaks
---------
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
Implements most of #297 to the point I'm fine closing it. The solution
implemented is distinct than originally designed, yet much simpler.
Since we have a fully-linear view of created transactions, we don't have to
per-output track operating costs incurred by that output. We can track it
across the entire Serai system, without hooking into the Eventuality system.
Also updates documentation.
Replaces plan IDs with key + ID, letting the coordinator determine the sessions
for the plans.
Properly scopes which plan IDs are set on which tributaries, and ensures we
have the necessary tributaries at time of handling.
* Revert "Correct the prior documented TOCTOU"
This reverts commit d50fe87801.
* Correct the prior documented TOCTOU
d50fe87801 edited the challenge for the Batch to
fix it. This won't produce Batch n+1 until Batch n is successfully published
and verified. It's an alternative strategy able to be reviewed, with a much
smaller impact to scope.
Now, if a malicious validator set publishes a malicious `Batch` at the last
moment, it'll cause all future `Batch`s signed by the next validator set to
require a bool being set (yet they never will set it).
This will prevent the handover.
The only overhead is having two distinct `batch_message` calls on-chain.
Renames Update to SignedBatch.
Checks Batch equality via a hash of the InInstructions. That prevents needing
to keep the Batch in node state or TX introspect.
Prior, we only supported a single Tributary per network, and spawned a task to
handled Processor messages per Tributary. Now, we handle Processor messages per
network, yet we still only supported a single Tributary in that handling
function.
Now, when we handle a message, we load the Tributary which is relevant. Once we
know it, we ensure we have it (preventing race conditions), and then proceed.
We do need work to check if we should have a Tributary, or if we're not
participating. We also need to check if a Tributary has been retired, meaning
we shouldn't handle any transactions related to them, and to clean up retired
Tributaries.
* Design and document a multisig rotation flow
* Make Scanner::eventualities a HashMap so it's per-key
* Don't drop eventualities, always follow through on them
Technical improvements made along the way.
* Start creating an isolate object to manage multisigs, which doesn't require being a signer
Removes key from SubstrateBlock.
* Move Scanner/Scheduler under multisigs
* Move Batch construction into MultisigManager
* Clarify "should" in Multisig Rotation docs
* Add block_number to MultisigManager, as it controls the scanner
* Move sign_plans into MultisigManager
Removes ThresholdKeys from prepare_send.
* Make SubstrateMutable an alias for MultisigManager
* Rewrite Multisig Rotation
The prior scheme had an exploit possible where funds were sent to the old
multisig, then burnt on Serai to send from the new multisig, locking liquidity
for 6 hours. While a fee could be applied to stragglers, to make this attack
unprofitable, the newly described scheme avoids all this.
* Add mini
mini is a miniature version of Serai, emphasizing Serai's nature as a
collection of independent clocks. The intended use is to identify race
conditions and prove protocols are comprehensive regarding when certain clocks
tick.
This uses loom, a prior candidate for evaluating the processor/coordinator as
free of race conditions (#361).
* Use mini to prove a race condition in the current multisig rotation docs, and prove safety of alternatives
Technically, the prior commit had mini prove the race condition.
The docs currently say the activation block of the new multisig is the block
after the next Batch's. If the two next Batches had already entered the
mempool, prior to set_keys being called, the second next Batch would be
expected to contain the new key's data yet fail to as the key wasn't public
when the Batch was actually created.
The naive solution is to create a Batch, publish it, wait until it's included,
and only then scan the next block. This sets a bound of
`Batch publication time < block time`. Optimistically, we can publish a Batch
in 24s while our shortest block time is 2m. Accordingly, we should be fine with
the naive solution which doesn't take advantage of throughput. #333 may
significantly change latency however and require an algorithm whose throughput
exceeds the rate of blocks created.
In order to re-introduce parallelization, enabling throughput, we need to
define a safe range of blocks to scan without Serai ordering the first one.
mini demonstrates safety of scanning n blocks Serai hasn't acknowledged, so
long as the first is scanned before block n+1 is (shifting the n-block window).
The docs will be updated next, to reflect this.
* Fix Multisig Rotation
I believe this is finally good enough to be final.
1) Fixes the race condition present in the prior document, as demonstrated by
mini.
`Batch`s for block `n` and `n+1`, may have been in the mempool when a
multisig's activation block was set to `n`. This would cause a potentially
distinct `Batch` for `n+1`, despite `n+1` already having a signed `Batch`.
2) Tightens when UIs should use the new multisig to prevent eclipse attacks,
and protection against `Batch` publication delays.
3) Removes liquidity fragmentation by tightening flow/handling of latency.
4) Several clarifications and documentation of reasoning.
5) Correction of "prior multisig" to "all prior multisigs" regarding historical
verification, with explanation why.
* Clarify terminology in mini
Synchronizes it from my original thoughts on potential schema to the design
actually created.
* Remove most of processor's README for a reference to docs/processor
This does drop some misc commentary, though none too beneficial. The section on
scanning, deemed sufficiently beneficial, has been moved to a document and
expanded on.
* Update scanner TODOs in line with new docs
* Correct documentation on Bitcoin::Block::time, and Block::time
* Make the scanner in MultisigManager no longer public
* Always send ConfirmKeyPair, regardless of if in-set
* Cargo.lock changes from a prior commit
* Add a policy document on defining a Canonical Chain
I accidentally committed a version of this with a few headers earlier, and this
is a proper version.
* Competent MultisigManager::new
* Update processor's comments
* Add mini to copied files
* Re-organize Scanner per multisig rotation document
* Add RUST_LOG trace targets to e2e tests
* Have the scanner wait once it gets too far ahead
Also bug fixes.
* Add activation blocks to the scanner
* Split received outputs into existing/new in MultisigManager
* Select the proper scheduler
* Schedule multisig activation as detailed in documentation
* Have the Coordinator assert if multiple `Batch`s occur within a block
While the processor used to have ack_up_to_block, enabling skips in the block
acked, support for this was removed while reworking it for multiple multisigs.
It should happen extremely infrequently.
While it would still be beneficial to have, if multiple `Batch`s could occur
within a block (with the complexity here not being worth adding that ban as a
policy), multiple `Batch`s were blocked for DoS reasons.
* Schedule payments to the proper multisig
* Correct >= to <
* Use the new multisig's key for change on schedule
* Don't report External TXs to prior multisig once deprecated
* Forward from the old multisig to the new one at all opportunities
* Move unfulfilled payments in queue from prior to new multisig
* Create MultisigsDb, splitting it out of MainDb
Drops the call to finish_signing from the Signer. While this will cause endless
re-attempts, the Signer will still consider them completed and drop them,
making this an O(n) cost at boot even if we did nothing from here.
The MultisigManager should call finish_signing once the Scanner completes the
Eventuality.
* Don't check Scanner-emitted completions, trust they are completions
Prevents needing to use async code to mark the completion and creates a
fault-free model. The current model, on fault, would cause a lack of marked
completion in the signer.
* Fix a possible panic in the processor
A shorter-chain reorg could cause this assert to trip. It's fixed by
de-duplicating the data, as the assertion checked consistency. Without the
potential for inconsistency, it's unnecessary.
* Document why an existing TODO isn't valid
* Change when we drop payments for being to the change address
The earlier timing prevents creating Plans solely to the branch address,
causing the payments to be dropped, and the TX to become an effective
aggregation TX.
* Extensively document solutions to Eventualities being potentially created after having already scanned their resolutions
* When closing, drop External/Branch outputs which don't cause progress
* Properly decide if Change outputs should be forward or not when closing
This completes all code needed to make the old multisig have a finite lifetime.
* Commentary on forwarding schemes
* Provide a 1 block window, with liquidity fragmentation risks, due to latency
On Bitcoin, this will be 10 minutes for the relevant Batch to be confirmed. On
Monero, 2 minutes. On Ethereum, ~6 minutes.
Also updates the Multisig Rotation document with the new forwarding plan.
* Implement transaction forwarding from old multisig to new multisig
Identifies a fault where Branch outputs which shouldn't be dropped may be, if
another output fulfills their next step. Locking Branch fulfillment down to
only Branch outputs is not done in this commit, but will be in the next.
* Only let Branch outputs fulfill branches
* Update TODOs
* Move the location of handling signer events to avoid a race condition
* Avoid a deadlock by using a RwLock on a single txn instead of two txns
* Move Batch ID out of the Scanner
* Increase from one block of latency on new keys activation to two
For Monero, this offered just two minutes when our latency to publish a Batch
is around a minute already. This does increase the time our liquidity can be
fragmented by up to 20 minutes (Bitcoin), yet it's a stupid attack only
possible once a week (when we rotate). Prioritizing normal users' transactions
not being subject to forwarding is more important here.
Ideally, we'd not do +2 blocks yet plus `time`, such as +10 minutes, making
this agnostic of the underlying network's block scheduling. This is a
complexity not worth it.
* Split MultisigManager::substrate_block into multiple functions
* Further tweaks to substrate_block
* Acquire a lock on all Scanner operations after calling ack_block
Gives time to call register_eventuality and initiate signing.
* Merge sign_plans into substrate_block
Also ensure the Scanner's lock isn't prematurely released.
* Use a HashMap to pass to-be-forwarded instructions, not the DB
* Successfully determine in ClosingExisting
* Move from 2 blocks of latency when rotating to 10 minutes
Superior as noted in 6d07af92ce10cfd74c17eb3400368b0150eb36d7, now trivial to
implement thanks to prior commit.
* Add note justifying measuring time in blocks when rotating
* Implement delaying of outputs received early to the new multisig per specification
* Documentation on why Branch outputs don't have the race condition concerns Change do
Also ensures 6 hours is at least N::CONFIRMATIONS, for sanity purposes.
* Remove TODO re: sanity checking Eventualities
We sanity check the Plan the Eventuality is derived from, and the Eventuality
is handled moments later (in the same file, with a clear call path). There's no
reason to add such APIs to Eventualities for a sanity check given that.
* Add TODO(now) for TODOs which must be done in this branch
Also deprecates a pair of TODOs to TODO2, and accepts the flow of the Signer
having the Eventuality.
* Correct errors in potential/future flow descriptions
* Accept having a single Plan Vec
Per the following code consuming it, there's no benefit to bifurcating it by
key.
* Only issue sign_transaction on boot for the proper signer
* Only set keys when participating in their construction
* Misc progress
Only send SubstrateBlockAck when we have a signer, as it's only used to tell
the Tributary of what Plans are being signed in response to this block.
Only immediately sets substrate_signer if session is 0.
On boot, doesn't panic if we don't have an active key (as we wouldn't if only
joining the next multisig). Continues.
* Correctly detect and set retirement block
Modifies the retirement block from first block meeting requirements to block
CONFIRMATIONS after.
Adds an ack flow to the Scanner's Confirmed event and Block event to accomplish
this, which may deadlock at this time (will be fixed shortly).
Removes an invalid await (after a point declared unsafe to use await) from
MultisigsManager::next_event.
* Remove deadlock in multisig_completed and document alternative
The alternative is simpler, albeit less efficient. There's no reason to adopt
it now, yet perhaps if it benefits modeling?
* Handle the final step of retirement, dropping the old key and setting new to existing
* Remove TODO about emitting a Block on every step
If we emit on NewAsChange, we lose the purpose of the NewAsChange period.
The only concern is if we reach ClosingExisting, and nothing has happened, then
all coins will still be in the old multisig until something finally does. This
isn't a problem worth solving, as it's latency under exceptional dead time.
* Add TODO about potentially not emitting a Block event for the retirement block
* Restore accidentally deleted CI file
* Pair of slight tweaks
* Add missing if statement
* Disable an assertion when testing
One of the test flows currently abuses the Scanner in a way triggering it.
Eventualities need to be binding not just to a plan, yet to the execution of
the plan (the outputs). Bitcoin's Eventuality definition short-cutted this
under an honest multisig assumption, causing the following issue:
If multisig n+1 is verifying multisig n's actions, as detailed in
multi-multisig's document on multisig rotation, it'll check no outstanding
eventualities exist. If we solely bind to the plan, a malicious multisig n
could steal outbound payments yet cause the plan to be marked as successfully
completed.
By modifying the eventuality to also include the expected outputs, this is no
longer possible. Binding to the expected input is preserved in order to remain
binding to the plan (allowing two plans with the same output-set to co-exist).
Fixes where ram_scanned is updated in processor. The prior version, while safe,
would redo massive amounts of work during periods of inactivity. It also hit an
undocumented invariant where get_eventuality_completions assumes new blocks,
yet redone work wouldn't have new blocks.
Modifies Monero's generate_blocks to return the hashes of the generated blocks.
A commit made while testing moved them from network-key-indexed to
Substrate-key-indexed. Since Substrate keys have a fixed-length, fitting within
the Copy boundary, there's no reason for it to not use an array.
* restrict batch size to ~25kb
* add batch size check to node
* rate limit batches to 1 per serai block
* add support for multiple batches for block
* fix review comments
* Misc fixes
Doesn't yet update tests/processor until data flow is inspected.
* Move the block from SignId to ProcessorMessage::BatchPreprocesses
* Misc clean up
---------
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
By default, tokio-spawned worker panics will only kill the task, not the
program. Due to our extensive use of panicking on invariants, we should ensure
the program exits.
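A minimal sketch of one way to ensure this, assuming a custom panic hook is acceptable (not necessarily the exact mechanism used):
```rust
fn main() {
  // tokio only kills the panicking task by default; escalate any panic,
  // including ones inside spawned workers, into a process exit.
  let default_hook = std::panic::take_hook();
  std::panic::set_hook(Box::new(move |panic_info| {
    default_hook(panic_info);
    std::process::exit(1);
  }));

  // ... build the tokio runtime and spawn tasks as usual ...
}
```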
The Processor's coins folder referred to the networks it could process, as did
its Coin trait. This, and other similar cases throughout the codebase, have now
been corrected.
Also corrects dated documentation for a key pair is confirmed under the
validator-sets pallet.
This is a horrible impl which does a full ser of everything on every change.
It's just the minimal changes to resolve this TODO and enable testnet deployment.
Due to the ordered message-queue, there's no benefit to multiple emissions as
there's no risk a completion will be missed. If it has yet to be read, sending
another which only be read after isn't helpful.
Simplifies code a decent bit.
This is technically over-aggressive, as a dropped output will reduce the fee,
yet this edge case is so minor that the flow needed to avoid being
over-aggressive (over a few fractions of a cent) is by no means worth it.
Fixes the crash causable by the WIP send_test.
It waited for CONFIRMATIONS + 1 confirmations, instead of CONFIRMATIONS
confirmations.
Also adds a lib interface to access the coin traits and its constants.
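A sketch of the off-by-one, with illustrative names:
```rust
// A transaction included in the latest block already has one confirmation, so
// waiting until its depth reaches CONFIRMATIONS - 1 (not CONFIRMATIONS) gives
// exactly CONFIRMATIONS confirmations.
fn confirmations(tx_block: u64, latest_block: u64) -> u64 {
  latest_block.saturating_sub(tx_block) + 1
}
```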
All uses were safe due to addresses being converted to script_pubkeys which
don't embed their network. The only risk of there being an issue is if a
future address spec did embed the net ID into the script_pubkey and that was
moved to.
This resolves the audit note and does offer that tightening.
* add mlsag
* fix last commit
* fix miner v1 txs
* fix non-miner v1 txs
* add borromean + fix mlsag
* add block hash calculations
* fix for the jokester that added unreduced scalars
to the borromean signature of
2368d846e671bf79a1f84c6d3af9f0bfe296f043f50cf17ae5e485384a53707b
* Add Borromean range proof verifying functionality
* Add MLSAG verifying functionality
* fmt & clippy :)
* update MLSAG, ss2_elements will always be 2
* Add MgSig proving
* Tidy block.rs
* Tidy Borromean, fix bugs in last commit, replace todo! with unreachable!
* Mark legacy EcdhInfo amount decryption as experimental
* Correct comments
* Write a new impl of the merkle algorithm
This one tries to be understandable.
* Only pull in things only needed for experimental when experimental
* Stop caching the Monero block hash in the processor now that we have Block::hash
* Corrections for recent processor commit
* Use a clearer algorithm for the merkle
Should also be more efficient due to not shifting as often.
* Tidy Mlsag
* Remove verify_rct_* from Mlsag
Both methods were ports from Monero, overtly specific without clear
documentation. They need to be added back in, with documentation, or included
in a node which provides the necessary further context for them to be naturally
understandable.
* Move mlsag/mod.rs to mlsag.rs
This should only be a folder if it has multiple files.
* Replace EcdhInfo terminology
The ECDH encrypted the amount, yet this struct contained the encrypted amount,
not some ECDH.
Also corrects the types on the original EcdhInfo struct.
* Correct handling of commitment masks when scanning
* Route read_array through read_raw_vec
* Misc lint
* Make a proper RctType enum
No longer caches RctType in the RctSignatures as well.
* Replace Vec<Bulletproofs> with Bulletproofs
Monero uses aggregated range proofs, so there's only ever one Bulletproof. This
is enforced with a consensus rule as well, making this safe.
As for why Monero uses a vec, it's probably due to the lack of variadic typing
used. It's effectively an Option for them, yet we don't need an Option since we
do have variadic typing (enums).
* Add necessary checks to Eventuality re: supported protocols
* Fix for block 202612 and fix merkle root calculations
* MLSAG (de)serialisation fix
ss_2_elements will not always be 2 as rct type 1 transactions are not enforced to have one input
* Revert "MLSAG (de)serialisation fix"
This reverts commit 5e710e0c96.
here it checks number of MGs == number of inputs:
0a1eaf26f9/src/cryptonote_core/tx_verification_utils.cpp (L60-59)
and here it checks for RctTypeFull number of MGs == 1:
0a1eaf26f9/src/ringct/rctSigs.cpp (L1325)
so number of inputs == 1
so ss_2_elements == 2
* update `MlsagAggregate` comment
* cargo update
Resolves a yanked crate
* Move location of serai-client in Cargo.toml
---------
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
Provides a DST, and associated metadata as beneficial.
Also utilizes MuSig's context to session-bind. Since set_keys_messages also
binds to set, this is semi-redundant, yet that's appreciated.
When we receive messages, we're provided with a message ID we can use to
prevent handling an item multiple times. That doesn't prevent us from *sending*
an item multiple times though. Thanks to the UID system, we can now not send if
already present.
Alternatively, we can remove the ordered message ID for just the UID, allowing
duplicates to be sent without issue, and handled on the receiving end.
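A sketch of not sending when already present, keyed by UID (the real implementation tracks this in the DB; this in-memory version is illustrative):
```rust
use std::collections::HashSet;

struct MessageQueue {
  sent_uids: HashSet<[u8; 32]>,
}

impl MessageQueue {
  // Only invoke the send if this UID hasn't already been queued.
  fn send_if_new(&mut self, uid: [u8; 32], send: impl FnOnce()) {
    if self.sent_uids.insert(uid) {
      send();
    }
  }
}
```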
When a Substrate block occurs, the coordinator is expected to emit
SubstrateBlock. This causes the processor to begin a variety of plans. The
processor now emits SubstrateBlockAck, explicitly listing all plan IDs, before
starting signing.
This lets the coordinator provide a SubstrateBlock transaction, and with it,
recognize all plan IDs as valid.
Prior, we would've had to have a spotty algorithm based upon the upcoming
Preprocess messages, or if we immediately provided the SubstrateBlock
transaction, then wait for the processor to inform us of the contained plans.
This creates an explicitly proper async flow not reliant on waiting for data
availability.
Alternatively, we could've replaced Preprocess with (Block, Vec<Preprocess>).
This would've been more efficient, yet also clunky due to the multiple usages
of the Preprocess message.
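An illustrative sketch of the message flow (these aren't the exact processor-messages definitions):
```rust
// Coordinator -> Processor: a Substrate block occurred.
enum CoordinatorMessage {
  SubstrateBlock { block: u64 },
}

// Processor -> Coordinator: acknowledged, with every plan ID this block caused,
// sent before signing begins so the coordinator can recognize the plans as valid.
enum ProcessorMessage {
  SubstrateBlockAck { block: u64, plans: Vec<[u8; 32]> },
}
```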
There is the ability to cause state bloat by flooding Tributary.
KeyGen/Sign specifically shouldn't allow bloat since we check the
commitments/preprocesses/shares for validity. Accordingly, any invalid data
(such as bloat) should be detected.
It was possible to place bloat after the valid data. Doing so would be
considered a valid KeyGen/Sign message, yet could add up to 50k kB per sign.
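A sketch of the kind of check which closes this, assuming a reader which consumes from a byte slice:
```rust
// Decode a message and reject it if any bytes remain: trailing data (bloat
// appended after an otherwise-valid KeyGen/Sign message) renders it invalid.
fn decode_exact<T>(
  bytes: &[u8],
  read: impl FnOnce(&mut &[u8]) -> std::io::Result<T>,
) -> Option<T> {
  let mut slice = bytes;
  let message = read(&mut slice).ok()?;
  if !slice.is_empty() {
    return None;
  }
  Some(message)
}
```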
[0; 32] is a magic for no block has been set yet due to this being the first
key pair. If [0; 32] is the latest finalized block, the processor determines
an activation block based on timestamps.
This doesn't use an Option for ergonomic reasons.
It originally wasn't an enum so software which had yet to update before an
integration wouldn't error (as now enums are strictly typed). The strict typing
is preferable though.
SubstrateBlock's provision of the most recently acknowledged block has
equivalent information with the same latency. Accordingly, there's no need for
it.
Clearly establishes why consistency is guaranteed from a Rust borrow-checker
mindset. While there are plenty of... 'violations', they're clearly explained.
Hopefully, this method of thinking helps promote/ensure consistency in the
future.
The signing set should be the first group to submit preprocesses to Tributary.
Re-attempts shouldn't be once every 30s, yet n blocks since the last relevant
message.
Removes the use of an async task/channel in the signer (and Substrate signer).
Also removes the need to be able to get the time from a coin's block, which was
a fragile system marked with a TODO already.