I didn't remove async-recursion when I updated the repo to 1.77, as I forgot we
used it in the tests. I still had to add some Box::pins, which may have been a
valid option on the prior Rust version as well, yet at least resolves everything now.
Also updates everything which doesn't introduce further dependencies.
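For reference, a minimal sketch of the pattern the added Box::pins enable, recursing within an async context without the async-recursion macro (the function here is purely illustrative):

```rust
use core::{future::Future, pin::Pin};

// Recursing within an async fn requires indirection, as the future would
// otherwise have an infinite size. Boxing the recursive call via Box::pin
// replaces what the async-recursion macro did for us.
fn countdown(n: u8) -> Pin<Box<dyn Future<Output = ()> + Send>> {
  Box::pin(async move {
    if n != 0 {
      countdown(n - 1).await;
    }
  })
}
```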
* Monero: fix decoy selection algo and add test for latest spendable
- DSA only selected coinbase outputs and didn't match the wallet2
implementation
- Added test to make sure DSA will select a decoy output from the
most recent unlocked block
- Made usage of "height" in DSA consistent with other usage of
"height" in Monero code (height == num blocks in chain)
- Rely on monerod RPC response for output's unlocked status
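A rough sketch of the last point, relying on the daemon for unlocked status (the struct and field names are hypothetical, not monero-serai's actual types):

```rust
// Hypothetical shape of a per-output entry from a monerod RPC response.
struct OutputInfo {
  index: u64,
  unlocked: bool,
}

// Only outputs the daemon reports as unlocked are decoy candidates; lock
// status isn't re-derived locally.
fn candidate_decoys(outputs: &[OutputInfo]) -> Vec<u64> {
  outputs.iter().filter(|output| output.unlocked).map(|output| output.index).collect()
}
```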
* xmr runner tests mine until outputs are unlocked
* fingerprintable canonical select decoys
* Separate fingerprintable canonical function
Makes it simpler for callers who are unconcerned with consistent
canonical output selection across multiple clients to rely on
the simpler Decoy::select and not worry about fingerprintable
canonical
* fix merge conflicts
* Put back TODO for issue #104
* Fix incorrect check on distribution len
The RingCT distribution on mainnet doesn't start until well after
genesis, so the distribution length is expected to be < height.
To be clear, this was my mistake from this series of changes
to the DSA. I noticed this mistake because the DSA would error
when running on mainnet.
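A minimal sketch of the corrected bound (names and exact error handling are illustrative):

```rust
// The RingCT distribution starts well after genesis on mainnet, so its length
// being less than the height is expected; only exceeding the height is an error.
fn check_distribution_len(distribution_len: usize, height: usize) -> Result<(), &'static str> {
  if distribution_len > height {
    return Err("distribution is longer than the chain height");
  }
  Ok(())
}
```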
* monero: Use fee priority enums from monero repo CLI/RPC wallets
* Update processor for fee priority change
* Remove FeePriority::Default
Done in consultation with @j-berman.
The RPC/CLI/GUI almost always adjust up, barring very explicit commands,
hence why FeePriority 0 is now only exposed via the explicit command of
FeePriority::Custom { priority: 0 }.
Also helps with terminology.
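A sketch of the resulting type; the named variants besides Custom are assumptions based on the CLI/RPC wallet priorities:

```rust
// Fee priorities mirroring the Monero CLI/RPC wallets. The old default of 0 is
// now only reachable via the explicit Custom variant.
pub enum FeePriority {
  Unimportant,
  Normal,
  Elevated,
  Priority,
  Custom { priority: u32 },
}

fn lowest_priority() -> FeePriority {
  // Explicitly requesting priority 0, the only way to get the former default.
  FeePriority::Custom { priority: 0 }
}
```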
---------
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
Moves from concatenated Dockerfiles to pseudo-templated Dockerfiles via a dedicated Rust program.
Removes the unmaintained kubernetes, not because we shouldn't have/use it, but because it's unmaintained and needs to be reworked before it's present again.
Replaces the compose with the work in the new orchestrator binary which spawns everything as expected. While this arguably re-invents the wheel, it correctly manages secrets and handles the variadic Dockerfiles.
Also adds an unrelated patch for zstd and simplifies running services a bit by greater utilizing the existing infrastructure.
---
* Delete all Dockerfile fragments, add new orchestrator to generate Dockerfiles
Enables greater templating.
Also delete the unmaintained kubernetes folder *for now*. This should be
restored in the future.
* Use Dockerfiles from the orchestrator
* Ignore Dockerfiles in the git repo
* Remove CI job to check Dockerfiles are as expected now that they're no longer committed
* Remove old Dockerfiles from repo
* Use Debian for monero-wallet-rpc
* Remove replace_cmds for proper usage of entry-dev
Consolidates ports a bit.
Updates serai-docker-tests from "compose" to "build".
* Only write a new dockerfile if it's distinct
Preserves the updated time metadata.
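A minimal sketch of the write-if-distinct behavior (the helper name is illustrative):

```rust
use std::{fs, io, path::Path};

// Skip the write when the existing Dockerfile already matches, preserving its
// modification-time metadata.
fn write_if_distinct(path: &Path, contents: &str) -> io::Result<()> {
  if fs::read_to_string(path).map(|existing| existing == contents).unwrap_or(false) {
    return Ok(());
  }
  fs::write(path, contents)
}
```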
* Update serai-docker-tests
* Correct the path Dockerfiles are built from
* Correct inclusion of orchestration folder in Docker builds
* Correct debug/release flagging in the cargo command
Apparently, --debug isn't an effective no-op but rather an error.
* Correct path used to run the Serai node within a Dockerfile
* Correct path in Monero Dockerfile
* Attempt storing monerod in /usr/bin
* Use sudo to move into /usr/bin in CI
* Correct 18.3.0 to 18.3.1
* Escape * with quotes
* Update deny.toml, ADD orchestration in runtime Dockerfile
* Add --detach to the Monero GH CI
* Diversify dockerfiles by network
* Fixes to network-diversified orchestration
* Bitcoin and Monero testnet scripts
* Permissions and tweaks
* Flatten scripts folders
* Add missing folder specification to Monero Dockerfile
* Have monero-wallet-rpc specify the monerod login
* Have the Docker CMD specify env variables inserted at time of Dockerfile generation
They're overridable with the global environment, as for tests. This enables
variable generation in orchestrator and output to productionized Docker files
without creating a life-long file within the Docker container.
* Don't add Dockerfiles into Docker containers now that they have secrets
Solely add the source code for them as needed to satisfy the workspace bounds.
* Download arm64 Monero on arm64
* Ensure constant host architecture when reproducibly building the wasm
Host architecture, for some reason, can affect the generated code despite the
target architecture always being foreign to the host architecture.
* Randomly generate infrastructure keys
* Have orchestrator generate a key, be able to create/start containers
* Ensure bash is used over sh
* Clean dated docs
* Change how quoting occurs
* Standardize to sh
* Have Docker test build the dev Dockerfiles
* Only key_gen once
* cargo update
Adds a patch for zstd and reconciles the breaking nightly change which just
occurred.
* Use a dedicated network for Serai
Also fixes SERAI_HOSTNAME passed to coordinator.
* Support providing a key over the env for the Serai node
* Enable and document running daemons for tests via serai-orchestrator
Has running containers under the dev network port forward the RPC ports.
* Use volumes for bitcoin/monero
* Use bitcoin's run.sh in GH CI
* Only use the volume for testnet (not dev)
* Route validators for any active set through sc-authority-discovery
Additionally adds an RPC route to retrieve their P2P addresses.
* Have the coordinator get peers from substrate
* Have the RPC return one address, not up to 3
Prevents the coordinator from believing it has 3 peers when it has one.
* Add missing feature to serai-client
* Correct network argument in serai-client for p2p_validators call
* Add a test in serai-client to check DHT population with a much quicker failure than the coordinator tests
* Update to latest Substrate
Removes distinguishing BABE/AuthorityDiscovery keys which causes
sc_authority_discovery to populate as desired.
* Update to a properly tagged substrate commit
* Add all dialed to peers to GossipSub
* cargo fmt
* Reduce common code in serai-coordinator-tests with a more involved new_test
* Use a recursive async function to spawn `n` DockerTests with the necessary networking configuration
* Merge UNIQUE_ID and ONE_AT_A_TIME
* Tidy up the new recursive code in tests/coordinator
* Use a Mutex in CONTEXT to let it be set multiple times
* Make complementary edits to full-stack tests
* Augment coordinator P2p connection logs
* Drop lock acquisitions before recursing
* Better scope lock acquisitions in full-stack, preventing a deadlock
* Ensure OUTER_OPS is reset across the test boundary
* Add cargo deny allowance for dockertest fork
* Schedule re-attempts and add a (not filled out) match statement to actually execute them
A comment explains the methodology. To copy it here:
"""
This is because we *always* re-attempt any protocol which had participation. That doesn't
mean we *should* re-attempt this protocol.
The alternatives were:
1) Note on-chain we completed a protocol, halting re-attempts upon 34%.
2) Vote on-chain to re-attempt a protocol.
This schema doesn't have any additional messages upon the success case (whereas
alternative #1 does) and doesn't have overhead (as alternative #2 does, sending votes and
then preprocesses. This only sends preprocesses).
"""
Any signing protocol which reaches sufficient participation will be
re-attempted until it no longer does.
* Have the Substrate scanner track DKG removals/completions for the Tributary code
* Don't keep trying to publish a participant removal if we've already set keys
* Pad out the re-attempt match a bit more
* Have CosignEvaluator reload from the DB
* Correctly schedule cosign re-attempts
* Actually spawn new DKG removal attempts
* Use u32 for Batch ID in SubstrateSignableId, finish Batch re-attempt routing
The batch ID was an opaque [u8; 5] which also included the network, yet that's
redundant and unhelpful.
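A hypothetical sketch of the resulting ID shape (the non-Batch variant here is an assumption):

```rust
// The Batch ID is now a plain u32, with the network no longer baked in since
// the surrounding context already identifies it.
pub enum SubstrateSignableId {
  CosigningSubstrateBlock([u8; 32]),
  Batch(u32),
}
```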
* Clarify a pair of TODOs in the coordinator
* Remove old TODO
* Final comment cleanup
* Correct usage of TARGET_BLOCK_TIME in reattempt scheduler
It's in ms and I assumed it was in s.
* Have coordinator tests drop BatchReattempts which aren't relevant yet may exist
* Bug fix and pointless oddity removal
We scheduled a re-attempt upon receiving 2/3rds of preprocesses and upon
receiving 2/3rds of shares, so any signing protocol could cause two re-attempts
(not one more).
The coordinator tests randomly generated the Batch ID since it was previously an
opaque byte array. While that didn't break the test, it was pointless and did
make the already-succeeded check before re-attempting impossible to hit.
* Add log statements, correct dead-lock in coordinator tests
* Increase pessimistic timeout on recv_message to compensate for tighter best-case timeouts
* Further bump timeout by a minute
AFAICT, GH failed by just a few seconds.
This is also the worst case in a single instance, making it fine to be decently long.
* Further further bump timeout due to lack of distinct error
Event retrieval was previously:
- Retrieve all events in the block, which may be hundreds of KB
- Filter to just a few
Since it's frequent to want multiple sets of events, each filtered in its own
way, this caused the retrieval to happen multiple times. Now, it will only
happen once.
Also has the scoped clients take a reference, not an owned TemporalSerai.
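A rough sketch of the new shape (the event type and method names are illustrative, not the actual serai-client API): events are fetched once per block and each scoped view filters the cached list.

```rust
// Hypothetical event type and per-block cache.
#[derive(Clone)]
enum Event {
  Coins(Vec<u8>),
  ValidatorSets(Vec<u8>),
}

struct BlockEvents {
  // Fetched from the node once per block, potentially hundreds of KB.
  events: Vec<Event>,
}

impl BlockEvents {
  // Each scoped view filters the cached list instead of re-fetching it.
  fn coins_events(&self) -> impl Iterator<Item = &Event> {
    self.events.iter().filter(|event| matches!(event, Event::Coins(_)))
  }
}
```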
* Make it clear not providing a change address is fingerprintable
When no change address is provided, all change is shunted to the
fee. This PR makes it clear to the caller that it is fingerprintable
when the caller does this.
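A hypothetical sketch of how the API could surface this (the actual monero-serai signature may differ): omitting a change address is only possible through a constructor whose name flags the consequence.

```rust
// Hypothetical address type standing in for a Monero address.
struct Address;

// Omitting a change address requires the `fingerprintable` constructor: all
// leftover value becomes fee, which is fingerprintable on-chain.
enum Change {
  Address(Address),
  Fingerprintable(Option<Address>),
}

impl Change {
  fn fingerprintable(address: Option<Address>) -> Change {
    Change::Fingerprintable(address)
  }
}
```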
* Review comments
Uses a full-fledged serai-abi to do so.
Removes use of UncheckedExtrinsic as a pointlessly (for us) length-prefixed
block with a more complicated signing algorithm than advantageous.
In the future, we should consider consolidating the various primitives
crates. I'm not convinced we benefit from one primitives crate per pallet.
They're a bit more binding, smaller, provided by the Rust bitcoin library,
sane, and we don't have to worry about malleability since all of our inputs are
SegWit.
* Remove subxt
Removes ~20 crates from our Cargo.lock.
Removes downloading the metadata and enables removing the getMetadata RPC route
(relevant to #379).
Moves forward #337.
Done now due to distinctions in the subxt 0.32 API surface which make it
justifiable to not update.
* fmt, update due to deny triggering on a yanked crate
* Correct the handling of substrate_block_notifier now that it's ephemeral, not long-lived
* Correct URL in tests/coordinator from ws to http
* Remove NetworkId from processor-messages
Because intent binds to the sender/receiver, it's not needed for intent.
The processor knows what the network is.
The coordinator knows which to use because it's sending this message to the
processor for that network.
Also removes the unused zeroize.
* ProcessorMessage::Completed use Session instead of key
* Move SubstrateSignId to Session
* Finish replacing key with session
* Move message-queue to a fully binary representation
Additionally adds a timeout to the message queue test.
* coordinator clippy
* Remove contention for the message-queue socket by using per-request sockets
* clippy
For some reason, these constantly failed for me while waiting for the key pair
to confirm. This adds a sleep during the mining process, to ensure blocks
actually have time between them, and mines several more blocks to handle the
median code recently added.
* Add SignalsConfig to chain_spec
* Correct multiexp feature flagging for rand_core std
* Remove bincode for borsh
Replaces a non-canonical encoding with a canonical encoding which additionally
should be faster.
Also fixes an issue where we used bincode in transcripts where it cannot be
trusted.
This ended up fixing a myriad of other bugs observed, unfortunately.
Accordingly, it either has to be merged or the bug fixes from it must be ported
to a new PR.
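A minimal sketch of the borsh usage (the struct here is illustrative, not an actual Serai message):

```rust
use borsh::{BorshDeserialize, BorshSerialize};

// borsh specifies a single, canonical byte representation for a given value,
// which is why it's safe to feed into transcripts, unlike bincode.
#[derive(BorshSerialize, BorshDeserialize, PartialEq, Debug)]
struct ExampleMessage {
  session: u32,
  data: Vec<u8>,
}

fn roundtrip(msg: &ExampleMessage) -> ExampleMessage {
  let mut bytes = vec![];
  msg.serialize(&mut bytes).unwrap();
  ExampleMessage::try_from_slice(&bytes).unwrap()
}
```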
* Make serde optional, minimize usage
* Make borsh an optional dependency of substrate/ crates
* Remove unused dependencies
* Use [u8; 64] where possible in the processor messages
* Correct borsh feature flagging
* Add a function to deterministically decide which Serai blocks should be co-signed
Has a 5 minute latency between co-signs, also used as the maximal latency
before a co-sign is started.
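A minimal sketch of such a deterministic rule; the constants and function are illustrative, not the actual coordinator code:

```rust
// Illustrative constants: a 6-second target block time and a 5-minute window.
const TARGET_BLOCK_TIME: u64 = 6;
const COSIGN_DISTANCE: u64 = (5 * 60) / TARGET_BLOCK_TIME;

// Every node derives the same answer from the block number alone, spacing
// cosigned blocks roughly five minutes apart.
fn should_cosign(block_number: u64) -> bool {
  block_number % COSIGN_DISTANCE == 0
}
```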
* Get all active tributaries we're in at a specific block
* Add and route CosignSubstrateBlock, a new provided TX
* Split queued cosigns per network
* Rename BatchSignId to SubstrateSignId
* Add SubstrateSignableId, a meta-type for either Batch or Block, and modularize around it
* Handle the CosignSubstrateBlock provided TX
* Revert substrate_signer.rs to develop (and patch to still work)
Due to SubstrateSigner moving when the prior multisig closes, yet cosigning
occurring with the most recent key, a single SubstrateSigner can be reused.
We could manage multiple SubstrateSigners, yet considering the much lower
specifications for cosigning, I'd rather treat it distinctly.
* Route cosigning through the processor
* Add note to rename SubstrateSigner post-PR
I don't want to do so now in order to preserve the diff's clarity.
* Implement cosign evaluation into the coordinator
* Get tests to compile
* Bug fixes, mark blocks without cosigners available as cosigned
* Correct the ID Batch preprocesses are saved under, add log statements
* Create a dedicated function to handle cosigns
* Correct the flow around Batch verification/queueing
Verifying `Batch`s could stall when a `Batch` was signed before its
predecessors/before the block it's contained in was cosigned (the latter being
inevitable as we can't sign a block containing a signed batch before signing
the batch).
Now, Batch verification happens on a distinct async task in order to not block
the handling of processor messages. This task is the sole caller of verify in
order to ensure last_verified_batch isn't unexpectedly mutated.
When the processor message handler needs to access it, or needs to queue a
Batch, it associates the DB TXN with a lock preventing the other task from
doing so.
This lock, as currently implemented, is a poor and inefficient design. It
should be modified to the pattern used for cosign management. Additionally, a
new primitive of a DB-backed channel may be immensely valuable.
Fixes a standing potential deadlock and a deadlock introduced with the
cosigning protocol.
* Working full-stack tests
After the last commit, this only required extending a timeout.
* Replace "co-sign" with "cosign" to make finding text easier
* Update the coordinator tests to support cosigning
* Inline prior_batch calculation to prevent panic on rotation
Noticed when doing a final review of the branch.
* De-duplicate Dockerfiles by using a bash file to concatenate common parts
Resolves #375.
Dockerfiles are still committed to the repo to avoid a dependency on bash.
* Add a CI job to confirm the committed dockerfiles are the currently generated ones
* Create dedicated Dockerfiles per processor network
Ensures the compromising of network-specific dependencies doesn't lead to a
compromise of the build process for all processors.
* Dockerfile corrections
* Correct call to build processor Docker image in tests/processor
The prior system spawned a new connection per request to enable parallelism,
yet kept hitting hyper::IncompleteMessage errors I couldn't track down. This
attempts to resolve those with a long-lived socket.
Halves the number of requests per authenticated RPC call, and accordingly is
likely still better overall.
I don't believe this is resolved yet, but this is still worth pushing.
* Update the coordinator to give key shares based on weight, not based on existence
Participants are now identified by their starting index. While this compiles,
the following is unimplemented:
1) A conversion for DKG `i` values. It assumes the threshold `i` values used
will be identical for the MuSig signature used to confirm the DKG.
2) Expansion from compressed values to full values before forwarding to the
processor.
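A rough sketch of the weight-to-starting-index mapping described above (names are illustrative): each validator is identified by the first share index it holds, with its weight determining how many consecutive indices it covers.

```rust
// Assign each validator a contiguous range of key share indices based on its
// weight; the validator is then identified by the first index of its range.
fn starting_indices(weights: &[(String, u16)]) -> Vec<(String, u16)> {
  // Threshold-scheme participant indices are commonly 1-based.
  let mut next = 1;
  let mut result = Vec::with_capacity(weights.len());
  for (validator, weight) in weights {
    result.push((validator.clone(), next));
    next += weight;
  }
  result
}
```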
* Add a fn to the DkgConfirmer to convert `i` values as needed
Also removes TODOs regarding Serai ensuring validator key uniqueness +
validity. The current infra achieves both.
* Have the Tributary DB track participation by shares, not by count
* Prevent a node from obtaining 34% of the maximum amount of key shares
This is actually mainly intended to set a bound on message sizes in the
coordinator. Message sizes are amplified by the amount of key shares held, so
setting an upper bound on said amount lets it determine constants. While that
upper bound could be 150, that'd be unreasonable and increase the potential for
DoS attacks.
* Correct the mechanism to detect if sufficient accumulation has occurred
It used to check if the latest accumulation hit the required threshold. Now,
accumulations may jump past the required threshold. The required mechanism is
to check the threshold wasn't previously met and is now met.
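In code, the corrected detection amounts to a crossing check (a minimal sketch):

```rust
// Sufficient accumulation is detected by a crossing check: the threshold
// wasn't met before this accumulation and is met after it (it may jump past).
fn newly_sufficient(prior_shares: u64, added_shares: u64, threshold: u64) -> bool {
  (prior_shares < threshold) && ((prior_shares + added_shares) >= threshold)
}
```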
* Finish updating the coordinator to handle a multiple key share per validator environment
* Adjust strategy re: preventing nonce reuse in DKG Confirmer
* Add TODOs regarding dropped transactions, add possible TODO fix
* Update tests/coordinator
This doesn't add new multi-key-share tests, it solely updates the existing
single key-share tests to compile and run, with the necessary fixes to the
coordinator.
* Update processor key_gen to handle generating multiple key shares at once
* Update SubstrateSigner
* Update signer, clippy
* Update processor tests
* Update processor docker tests
* Updates to modern dockertest
* More updates to latest dockertest
* Update Cargo.lock to dockertest with handle restored
* clippy coordinator tests
* clippy full-stack tests
* Remove kayabaNerve branch for official repo's latest commit hash
* Update serai-client, remove reliance on the existence of a handle fn
* Don't use the hex encoding of unique_id in dockertests
Gets our hostnames just below 64 bytes, resolving test failures on at least
Debian-based systems.
* Use Network::Isolated for all dockertest instances
* Correct error from prior commit's edits
* initial implementation
* add function to get a balance of an account
* add support for multiple coins
* rename pallet to "coins-pallet"
* replace balances, assets and tokens pallet with coins pallet in runtime
* add total supply info
* update client side for new Coins pallet
* handle fees
* bug fixes
* Update FeeAccount test
* Fmt
* fix pr comments
* remove extraneous Imbalance type
* Minor tweaks
---------
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
Replaces plan IDs with key + ID, letting the coordinator determine the sessions
for the plans.
Properly scopes which plan IDs are set on which tributaries, and ensures we
have the necessary tributaries at time of handling.