Mirror of https://github.com/serai-dex/serai.git, synced 2024-11-16 17:07:35 +00:00
Ethereum Integration (#557)
* Clean up Ethereum
* Consistent contract address for deployed contracts
* Flesh out Router a bit
* Add a Deployer for DoS-less deployment
* Implement Router-finding
* Use CREATE2 helper present in ethers
* Move from CREATE2 to CREATE
  Bit more streamlined for our use case.
* Document ethereum-serai
* Tidy tests a bit
* Test updateSeraiKey
* Use encodePacked for updateSeraiKey
* Take in the block hash to read state during
* Add a Sandbox contract to the Ethereum integration
* Add retrieval of transfers from Ethereum
* Add inInstruction function to the Router
* Augment our handling of InInstructions events with a check the transfer
  event also exists
* Have the Deployer error upon failed deployments
* Add --via-ir
* Make get_transaction test-only
  We only used it to get transactions to confirm the resolution of
  Eventualities. Eventualities need to be modularized. By introducing the
  dedicated confirm_completion function, we remove the need for a non-test
  get_transaction AND begin this modularization (by no longer explicitly
  grabbing a transaction to check with).
* Modularize Eventuality
  Almost fully deprecates the Transaction trait for Completion. Replaces
  Transaction ID with Claim.
* Modularize the Scheduler behind a trait
* Add an extremely basic account Scheduler
* Add nonce uses, key rotation to the account scheduler
* Only report the account Scheduler empty after transferring keys
  Also ban payments to the branch/change/forward addresses.
* Make fns reliant on state test-only
* Start of an Ethereum integration for the processor
* Add a session to the Router to prevent updateSeraiKey replaying
  This would only happen if an old key was rotated to again, which would
  require n-of-n collusion (already ridiculous and a valid fault attributable
  event). It just clarifies the formal arguments.
* Add a RouterCommand + SignMachine for producing it to coins/ethereum
* Ethereum which compiles
* Have branch/change/forward return an option
  Also defines a UtxoNetwork extension trait for MAX_INPUTS.
* Make external_address exclusively a test fn
* Move the "account" scheduler to "smart contract"
* Remove ABI artifact
* Move refund/forward Plan creation into the Processor
  We create forward Plans in the scan path, and need to know their exact fees
  in the scan path. This requires adding a somewhat wonky shim_forward_plan
  method so we can obtain a Plan equivalent to the actual forward Plan for fee
  reasons, yet don't expect it to be the actual forward Plan (which may be
  distinct if the Plan pulls from the global state, such as with a nonce).
  Also properly types a Scheduler addendum such that the SC scheduler isn't
  cramming the nonce to use into the N::Output type.
* Flesh out the Ethereum integration more
* Two commits ago, into the **Scheduler, not Processor
* Remove misc TODOs in SC Scheduler
* Add constructor to RouterCommandMachine
* RouterCommand read, pairing with the prior added write
* Further add serialization methods
* Have the Router's key included with the InInstruction
  This does not use the key at the time of the event. This uses the key at the
  end of the block for the event. It's much simpler than getting the full
  event streams for each, checking when they interlace.
  This does not read the state. Every block, this makes a request for every
  single key update and simply chooses the last one. This allows pruning
  state, only keeping the event tree. Ideally, we'd also introduce a cache to
  reduce the cost of the filter (small in events yielded, long in blocks
  searched).
  Since Serai doesn't have any forwarding TXs, nor Branches, nor change, all
  of our Plans should solely have payments out, and there's no expectation of
  a Plan being made under one key broken by it being received by another key.
* Add read/write to InInstruction
* Abstract the ABI for Call/OutInstruction in ethereum-serai
* Fill out signable_transaction for Ethereum
* Move ethereum-serai to alloy
  Resolves #331.
* Use the opaque sol macro instead of generated files
* Move the processor over to the now-alloy-based ethereum-serai
* Use the ecrecover provided by alloy
* Have the SC use nonce for rotation, not session (an independent nonce which
  wasn't synchronized)
* Always use the latest keys for SC scheduled plans
* get_eventuality_completions for Ethereum
* Finish fleshing out the processor Ethereum integration as needed for
  serai-processor tests
  This does not support any actual deployments, not even the ones simulated
  by serai-processor-docker-tests.
* Add alloy-simple-request-transport to the GH workflows
* cargo update
* Clarify a few comments and make one check more robust
* Use a string for 27.0 in .github
* Remove optional from no-longer-optional dependencies in processor
* Add alloy to git deny exception
* Fix no longer optional specification in processor's binaries feature
* Use a version of foundry from 2024
* Correct fetching Bitcoin TXs in the processor docker tests
* Update rustls to resolve RUSTSEC warnings
* Use the monthly nightly foundry, not the deleted daily nightly
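The Eventuality modularization described in the commit message (deprecating the Transaction trait for Completion, and replacing Transaction ID with Claim) can be sketched as an abstract interface. This is an illustrative model only; the class and field names below are hypothetical stand-ins, not the actual serai-processor types:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


# Hypothetical stand-ins for network-specific types
@dataclass(frozen=True)
class Claim:
    """A compact identifier committing to an expected completion."""
    id: bytes


@dataclass(frozen=True)
class Completion:
    """Evidence a planned action completed on the external network."""
    claim: Claim
    data: bytes


class Eventuality(ABC):
    """An expected on-chain effect, confirmable without fetching full
    transactions (the role of the dedicated confirm_completion function)."""

    @abstractmethod
    def claim(self) -> Claim: ...

    @abstractmethod
    def confirm_completion(self, completion: Completion) -> bool: ...


class PaymentEventuality(Eventuality):
    """A dummy implementation demonstrating the interface."""

    def __init__(self, claim_id: bytes):
        self._claim = Claim(claim_id)

    def claim(self) -> Claim:
        return self._claim

    def confirm_completion(self, completion: Completion) -> bool:
        # A real implementation would verify the completion cryptographically;
        # here we only check it references the same claim
        return completion.claim == self._claim
```

The point of the split is that the scanner only ever handles Claims and Completions, so no non-test code path needs to fetch whole transactions.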
This commit is contained in:
parent 43083dfd49
commit 0f0db14f05

58 changed files with 5031 additions and 1385 deletions
.github/actions/bitcoin/action.yml (2 changes, vendored)

@@ -5,7 +5,7 @@ inputs:
   version:
     description: "Version to download and run"
     required: false
-    default: 27.0
+    default: "27.0"

 runs:
   using: "composite"
.github/actions/test-dependencies/action.yml (6 changes, vendored)

@@ -10,7 +10,7 @@ inputs:
   bitcoin-version:
     description: "Bitcoin version to download and run as a regtest node"
     required: false
-    default: 27.0
+    default: "27.0"

 runs:
   using: "composite"
@@ -19,9 +19,9 @@ runs:
     uses: ./.github/actions/build-dependencies

   - name: Install Foundry
-    uses: foundry-rs/foundry-toolchain@cb603ca0abb544f301eaed59ac0baf579aa6aecf
+    uses: foundry-rs/foundry-toolchain@8f1998e9878d786675189ef566a2e4bf24869773
     with:
-      version: nightly-09fe3e041369a816365a020f715ad6f94dbce9f2
+      version: nightly-f625d0fa7c51e65b4bf1e8f7931cd1c6e2e285e9
       cache: false

   - name: Run a Monero Regtest Node
.github/workflows/coins-tests.yml (1 change, vendored)

@@ -30,6 +30,7 @@ jobs:
     run: |
       GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
         -p bitcoin-serai \
+        -p alloy-simple-request-transport \
         -p ethereum-serai \
         -p monero-generators \
         -p monero-serai
Cargo.lock (1461 changes, generated)

File diff suppressed because it is too large.
@@ -36,6 +36,7 @@ members = [
   "crypto/schnorrkel",

   "coins/bitcoin",
+  "coins/ethereum/alloy-simple-request-transport",
   "coins/ethereum",
   "coins/monero/generators",
   "coins/monero",
@@ -375,7 +375,7 @@ impl SignMachine<Transaction> for TransactionSignMachine {
     msg: &[u8],
   ) -> Result<(TransactionSignatureMachine, Self::SignatureShare), FrostError> {
     if !msg.is_empty() {
-      panic!("message was passed to the TransactionMachine when it generates its own");
+      panic!("message was passed to the TransactionSignMachine when it generates its own");
     }

     let commitments = (0 .. self.sigs.len())
coins/ethereum/.gitignore (4 changes, vendored)

@@ -1,7 +1,3 @@
 # Solidity build outputs
 cache
 artifacts
-
-# Auto-generated ABI files
-src/abi/schnorr.rs
-src/abi/router.rs
@@ -18,28 +18,29 @@ workspace = true

 [dependencies]
 thiserror = { version = "1", default-features = false }
-eyre = { version = "0.6", default-features = false }
-
-sha3 = { version = "0.10", default-features = false, features = ["std"] }
-
-group = { version = "0.13", default-features = false }
-k256 = { version = "^0.13.1", default-features = false, features = ["std", "ecdsa"] }
-frost = { package = "modular-frost", path = "../../crypto/frost", features = ["secp256k1", "tests"] }
-
-ethers-core = { version = "2", default-features = false }
-ethers-providers = { version = "2", default-features = false }
-ethers-contract = { version = "2", default-features = false, features = ["abigen", "providers"] }
-
-[build-dependencies]
-ethers-contract = { version = "2", default-features = false, features = ["abigen", "providers"] }
-
-[dev-dependencies]
 rand_core = { version = "0.6", default-features = false, features = ["std"] }

-hex = { version = "0.4", default-features = false, features = ["std"] }
-serde = { version = "1", default-features = false, features = ["std"] }
-serde_json = { version = "1", default-features = false, features = ["std"] }
+transcript = { package = "flexible-transcript", path = "../../crypto/transcript", default-features = false, features = ["recommended"] }

-sha2 = { version = "0.10", default-features = false, features = ["std"] }
+group = { version = "0.13", default-features = false }
+k256 = { version = "^0.13.1", default-features = false, features = ["std", "ecdsa", "arithmetic"] }
+frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false, features = ["secp256k1"] }
+
+alloy-core = { version = "0.7", default-features = false }
+alloy-sol-types = { version = "0.7", default-features = false, features = ["json"] }
+alloy-consensus = { git = "https://github.com/alloy-rs/alloy", rev = "037dd4b20ec8533d6b6d5cf5e9489bbb182c18c6", default-features = false, features = ["k256"] }
+alloy-rpc-types = { git = "https://github.com/alloy-rs/alloy", rev = "037dd4b20ec8533d6b6d5cf5e9489bbb182c18c6", default-features = false }
+alloy-rpc-client = { git = "https://github.com/alloy-rs/alloy", rev = "037dd4b20ec8533d6b6d5cf5e9489bbb182c18c6", default-features = false }
+alloy-simple-request-transport = { path = "./alloy-simple-request-transport", default-features = false }
+alloy-provider = { git = "https://github.com/alloy-rs/alloy", rev = "037dd4b20ec8533d6b6d5cf5e9489bbb182c18c6", default-features = false }
+
+[dev-dependencies]
+frost = { package = "modular-frost", path = "../../crypto/frost", default-features = false, features = ["tests"] }

 tokio = { version = "1", features = ["macros"] }
+
+alloy-node-bindings = { git = "https://github.com/alloy-rs/alloy", rev = "037dd4b20ec8533d6b6d5cf5e9489bbb182c18c6", default-features = false }
+
+[features]
+tests = []
@@ -3,6 +3,12 @@
 This package contains Ethereum-related functionality, specifically deploying and
 interacting with Serai contracts.

+While `monero-serai` and `bitcoin-serai` are general purpose libraries,
+`ethereum-serai` is Serai specific. If any of the utilities are generally
+desired, please fork and maintain your own copy to ensure the desired
+functionality is preserved, or open an issue to request we make this library
+general purpose.
+
 ### Dependencies

 - solc
coins/ethereum/alloy-simple-request-transport/Cargo.toml (new file, 29 lines)

@@ -0,0 +1,29 @@
+[package]
+name = "alloy-simple-request-transport"
+version = "0.1.0"
+description = "A transport for alloy based off simple-request"
+license = "MIT"
+repository = "https://github.com/serai-dex/serai/tree/develop/coins/ethereum/alloy-simple-request-transport"
+authors = ["Luke Parker <lukeparker5132@gmail.com>"]
+edition = "2021"
+rust-version = "1.74"
+
+[package.metadata.docs.rs]
+all-features = true
+rustdoc-args = ["--cfg", "docsrs"]
+
+[lints]
+workspace = true
+
+[dependencies]
+tower = "0.4"
+
+serde_json = { version = "1", default-features = false }
+simple-request = { path = "../../../common/request", default-features = false }
+
+alloy-json-rpc = { git = "https://github.com/alloy-rs/alloy", rev = "037dd4b20ec8533d6b6d5cf5e9489bbb182c18c6", default-features = false }
+alloy-transport = { git = "https://github.com/alloy-rs/alloy", rev = "037dd4b20ec8533d6b6d5cf5e9489bbb182c18c6", default-features = false }
+
+[features]
+default = ["tls"]
+tls = ["simple-request/tls"]
coins/ethereum/alloy-simple-request-transport/LICENSE (new file, 21 lines)

@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2024 Luke Parker
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
coins/ethereum/alloy-simple-request-transport/README.md (new file, 4 lines)

@@ -0,0 +1,4 @@
+# Alloy Simple Request Transport
+
+A transport for alloy based on simple-request, a small HTTP client built around
+hyper.
coins/ethereum/alloy-simple-request-transport/src/lib.rs (new file, 60 lines)

@@ -0,0 +1,60 @@
+#![cfg_attr(docsrs, feature(doc_auto_cfg))]
+#![doc = include_str!("../README.md")]
+
+use core::task;
+use std::io;
+
+use alloy_json_rpc::{RequestPacket, ResponsePacket};
+use alloy_transport::{TransportError, TransportErrorKind, TransportFut};
+
+use simple_request::{hyper, Request, Client};
+
+use tower::Service;
+
+#[derive(Clone, Debug)]
+pub struct SimpleRequest {
+  client: Client,
+  url: String,
+}
+
+impl SimpleRequest {
+  pub fn new(url: String) -> Self {
+    Self { client: Client::with_connection_pool(), url }
+  }
+}
+
+impl Service<RequestPacket> for SimpleRequest {
+  type Response = ResponsePacket;
+  type Error = TransportError;
+  type Future = TransportFut<'static>;
+
+  #[inline]
+  fn poll_ready(&mut self, _cx: &mut task::Context<'_>) -> task::Poll<Result<(), Self::Error>> {
+    task::Poll::Ready(Ok(()))
+  }
+
+  #[inline]
+  fn call(&mut self, req: RequestPacket) -> Self::Future {
+    let inner = self.clone();
+    Box::pin(async move {
+      let packet = req.serialize().map_err(TransportError::SerError)?;
+      let request = Request::from(
+        hyper::Request::post(&inner.url)
+          .header("Content-Type", "application/json")
+          .body(serde_json::to_vec(&packet).map_err(TransportError::SerError)?.into())
+          .unwrap(),
+      );
+
+      let mut res = inner
+        .client
+        .request(request)
+        .await
+        .map_err(|e| TransportErrorKind::custom(io::Error::other(format!("{e:?}"))))?
+        .body()
+        .await
+        .map_err(|e| TransportErrorKind::custom(io::Error::other(format!("{e:?}"))))?;
+
+      serde_json::from_reader(&mut res).map_err(|e| TransportError::deser_err(e, ""))
+    })
+  }
+}
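The transport above simply POSTs a serialized JSON-RPC packet with a `Content-Type: application/json` header. A minimal sketch of the envelope such a transport sends (the method name here is illustrative, not taken from the diff):

```python
import json


def jsonrpc_body(method: str, params: list, request_id: int = 0) -> bytes:
    # The JSON-RPC 2.0 envelope a serialized request packet corresponds to
    packet = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    }
    return json.dumps(packet).encode()


# The body a client would POST to the node's RPC endpoint
body = jsonrpc_body("eth_blockNumber", [])
```

The Rust code delegates this serialization to `req.serialize()` and `serde_json::to_vec`; only the HTTP plumbing (connection pool, headers, error mapping) lives in the transport itself.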
@@ -1,7 +1,5 @@
 use std::process::Command;

-use ethers_contract::Abigen;
-
 fn main() {
   println!("cargo:rerun-if-changed=contracts/*");
   println!("cargo:rerun-if-changed=artifacts/*");
@@ -21,22 +19,23 @@ fn main() {
     "--base-path", ".",
     "-o", "./artifacts", "--overwrite",
     "--bin", "--abi",
-    "--optimize",
-    "./contracts/Schnorr.sol", "./contracts/Router.sol",
+    "--via-ir", "--optimize",
+
+    "./contracts/IERC20.sol",
+
+    "./contracts/Schnorr.sol",
+    "./contracts/Deployer.sol",
+    "./contracts/Sandbox.sol",
+    "./contracts/Router.sol",
+
+    "./src/tests/contracts/Schnorr.sol",
+    "./src/tests/contracts/ERC20.sol",
+
+    "--no-color",
   ];
-  assert!(Command::new("solc").args(args).status().unwrap().success());
-  Abigen::new("Schnorr", "./artifacts/Schnorr.abi")
-    .unwrap()
-    .generate()
-    .unwrap()
-    .write_to_file("./src/abi/schnorr.rs")
-    .unwrap();
-
-  Abigen::new("Router", "./artifacts/Router.abi")
-    .unwrap()
-    .generate()
-    .unwrap()
-    .write_to_file("./src/abi/router.rs")
-    .unwrap();
+  let solc = Command::new("solc").args(args).output().unwrap();
+  assert!(solc.status.success());
+  for line in String::from_utf8(solc.stderr).unwrap().lines() {
+    assert!(!line.starts_with("Error:"));
+  }
 }
coins/ethereum/contracts/Deployer.sol (new file, 52 lines)

@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: AGPLv3
+pragma solidity ^0.8.0;
+
+/*
+  The expected deployment process of the Router is as follows:
+
+  1) A transaction deploying Deployer is made. Then, a deterministic signature is
+     created such that an account with an unknown private key is the creator of
+     the contract. Anyone can fund this address, and once anyone does, the
+     transaction deploying Deployer can be published by anyone. No other
+     transaction may be made from that account.
+
+  2) Anyone deploys the Router through the Deployer. This uses a sequential nonce
+     such that meet-in-the-middle attacks, with complexity 2**80, aren't feasible.
+     While such attacks would still be feasible if the Deployer's address was
+     controllable, the usage of a deterministic signature with a NUMS method
+     prevents that.
+
+  This doesn't have any denial-of-service risks and will resolve once anyone steps
+  forward as deployer. This does fail to guarantee an identical address across
+  every chain, though it enables letting anyone efficiently ask the Deployer for
+  the address (with the Deployer having an identical address on every chain).
+
+  Unfortunately, guaranteeing identical addresses aren't feasible. We'd need the
+  Deployer contract to use a consistent salt for the Router, yet the Router must
+  be deployed with a specific public key for Serai. Since Ethereum isn't able to
+  determine a valid public key (one the result of a Serai DKG) from a dishonest
+  public key, we have to allow multiple deployments with Serai being the one to
+  determine which to use.
+
+  The alternative would be to have a council publish the Serai key on-Ethereum,
+  with Serai verifying the published result. This would introduce a DoS risk in
+  the council not publishing the correct key/not publishing any key.
+*/
+
+contract Deployer {
+  event Deployment(bytes32 indexed init_code_hash, address created);
+
+  error DeploymentFailed();
+
+  function deploy(bytes memory init_code) external {
+    address created;
+    assembly {
+      created := create(0, add(init_code, 0x20), mload(init_code))
+    }
+    if (created == address(0)) {
+      revert DeploymentFailed();
+    }
+    // These may be emitted out of order upon re-entrancy
+    emit Deployment(keccak256(init_code), created);
+  }
+}
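The Deployer above indexes deployments by the hash of their init code, letting anyone later recover the Router's address from the event log, with multiple deployments of the same init code allowed. A minimal model of that lookup, using sha256 as a stand-in for keccak256, a dict as a stand-in for the event log, and a fabricated address derivation (the real one is defined by CREATE's RLP rules):

```python
from hashlib import sha256


class Deployer:
    """Toy model of the Deployer contract's deploy-and-lookup behavior."""

    def __init__(self):
        # init-code hash -> created addresses, mirroring the Deployment event log
        self.deployments: dict[bytes, list[str]] = {}
        self._nonce = 0

    def deploy(self, init_code: bytes) -> str:
        # Under CREATE, the created address depends on the deployer's address
        # and a sequential nonce, so an attacker can't grind salts to collide
        # addresses (the meet-in-the-middle concern the contract comment cites).
        # The derivation below is illustrative, not Ethereum's actual formula.
        self._nonce += 1
        preimage = init_code + self._nonce.to_bytes(8, "big")
        created = "0x" + sha256(preimage).hexdigest()[:40]
        self.deployments.setdefault(sha256(init_code).digest(), []).append(created)
        return created

    def find(self, init_code: bytes) -> list[str]:
        # Anyone can ask which addresses a given init code was deployed to;
        # Serai decides which deployment (which embedded key) is canonical
        return self.deployments.get(sha256(init_code).digest(), [])
```

This captures why the design tolerates multiple Router deployments: the Deployer only records them, and Serai selects the one built with the honest DKG key.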
coins/ethereum/contracts/IERC20.sol (new file, 20 lines)

@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: CC0
+pragma solidity ^0.8.0;
+
+interface IERC20 {
+  event Transfer(address indexed from, address indexed to, uint256 value);
+  event Approval(address indexed owner, address indexed spender, uint256 value);
+
+  function name() external view returns (string memory);
+  function symbol() external view returns (string memory);
+  function decimals() external view returns (uint8);
+
+  function totalSupply() external view returns (uint256);
+
+  function balanceOf(address owner) external view returns (uint256);
+  function transfer(address to, uint256 value) external returns (bool);
+  function transferFrom(address from, address to, uint256 value) external returns (bool);
+
+  function approve(address spender, uint256 value) external returns (bool);
+  function allowance(address owner, address spender) external view returns (uint256);
+}
|
@ -1,27 +1,24 @@
|
||||||
// SPDX-License-Identifier: AGPLv3
|
// SPDX-License-Identifier: AGPLv3
|
||||||
pragma solidity ^0.8.0;
|
pragma solidity ^0.8.0;
|
||||||
|
|
||||||
|
import "./IERC20.sol";
|
||||||
|
|
||||||
import "./Schnorr.sol";
|
import "./Schnorr.sol";
|
||||||
|
import "./Sandbox.sol";
|
||||||
|
|
||||||
contract Router is Schnorr {
|
contract Router {
|
||||||
// Contract initializer
|
// Nonce is incremented for each batch of transactions executed/key update
|
||||||
// TODO: Replace with a MuSig of the genesis validators
|
|
||||||
address public initializer;
|
|
||||||
|
|
||||||
// Nonce is incremented for each batch of transactions executed
|
|
||||||
uint256 public nonce;
|
uint256 public nonce;
|
||||||
|
|
||||||
// fixed parity for the public keys used in this contract
|
// Current public key's x-coordinate
|
||||||
uint8 constant public KEY_PARITY = 27;
|
// This key must always have the parity defined within the Schnorr contract
|
||||||
|
|
||||||
// current public key's x-coordinate
|
|
||||||
// note: this key must always use the fixed parity defined above
|
|
||||||
bytes32 public seraiKey;
|
bytes32 public seraiKey;
|
||||||
|
|
||||||
struct OutInstruction {
|
struct OutInstruction {
|
||||||
address to;
|
address to;
|
||||||
|
Call[] calls;
|
||||||
|
|
||||||
uint256 value;
|
uint256 value;
|
||||||
bytes data;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
struct Signature {
|
struct Signature {
|
||||||
|
@ -29,62 +26,197 @@ contract Router is Schnorr {
|
||||||
bytes32 s;
|
bytes32 s;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
event SeraiKeyUpdated(
|
||||||
|
uint256 indexed nonce,
|
||||||
|
bytes32 indexed key,
|
||||||
|
Signature signature
|
||||||
|
);
|
||||||
|
event InInstruction(
|
||||||
|
address indexed from,
|
||||||
|
address indexed coin,
|
||||||
|
uint256 amount,
|
||||||
|
bytes instruction
|
||||||
|
);
|
||||||
// success is a uint256 representing a bitfield of transaction successes
|
// success is a uint256 representing a bitfield of transaction successes
|
||||||
event Executed(uint256 nonce, bytes32 batch, uint256 success);
|
event Executed(
|
||||||
|
uint256 indexed nonce,
|
||||||
|
bytes32 indexed batch,
|
||||||
|
uint256 success,
|
||||||
|
Signature signature
|
||||||
|
);
|
||||||
|
|
||||||
// error types
|
// error types
|
||||||
error NotInitializer();
|
|
||||||
error AlreadyInitialized();
|
|
||||||
error InvalidKey();
|
error InvalidKey();
|
||||||
|
error InvalidSignature();
|
||||||
|
error InvalidAmount();
|
||||||
|
error FailedTransfer();
|
||||||
error TooManyTransactions();
|
error TooManyTransactions();
|
||||||
|
|
||||||
constructor() {
|
modifier _updateSeraiKeyAtEndOfFn(
|
||||||
initializer = msg.sender;
|
uint256 _nonce,
|
||||||
|
bytes32 key,
|
||||||
|
Signature memory sig
|
||||||
|
) {
|
||||||
|
if (
|
||||||
|
(key == bytes32(0)) ||
|
||||||
|
((bytes32(uint256(key) % Schnorr.Q)) != key)
|
||||||
|
) {
|
||||||
|
revert InvalidKey();
|
||||||
|
}
|
||||||
|
|
||||||
|
_;
|
||||||
|
|
||||||
|
seraiKey = key;
|
||||||
|
emit SeraiKeyUpdated(_nonce, key, sig);
|
||||||
}
|
}
|
||||||
|
|
||||||
// initSeraiKey can be called by the contract initializer to set the first
|
constructor(bytes32 _seraiKey) _updateSeraiKeyAtEndOfFn(
|
||||||
// public key, only if the public key has yet to be set.
|
0,
|
||||||
function initSeraiKey(bytes32 _seraiKey) external {
|
_seraiKey,
|
||||||
if (msg.sender != initializer) revert NotInitializer();
|
Signature({ c: bytes32(0), s: bytes32(0) })
|
||||||
if (seraiKey != 0) revert AlreadyInitialized();
|
) {
|
||||||
if (_seraiKey == bytes32(0)) revert InvalidKey();
|
nonce = 1;
|
||||||
seraiKey = _seraiKey;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// updateSeraiKey validates the given Schnorr signature against the current public key,
|
// updateSeraiKey validates the given Schnorr signature against the current
|
||||||
// and if successful, updates the contract's public key to the given one.
|
// public key, and if successful, updates the contract's public key to the
|
||||||
|
// given one.
|
||||||
function updateSeraiKey(
|
function updateSeraiKey(
|
||||||
bytes32 _seraiKey,
|
bytes32 _seraiKey,
|
||||||
Signature memory sig
|
Signature calldata sig
|
||||||
) public {
|
) external _updateSeraiKeyAtEndOfFn(nonce, _seraiKey, sig) {
|
||||||
if (_seraiKey == bytes32(0)) revert InvalidKey();
|
bytes memory message =
|
||||||
bytes32 message = keccak256(abi.encodePacked("updateSeraiKey", _seraiKey));
|
abi.encodePacked("updateSeraiKey", block.chainid, nonce, _seraiKey);
|
||||||
if (!verify(KEY_PARITY, seraiKey, message, sig.c, sig.s)) revert InvalidSignature();
|
nonce++;
|
||||||
seraiKey = _seraiKey;
|
|
||||||
|
if (!Schnorr.verify(seraiKey, message, sig.c, sig.s)) {
|
||||||
|
revert InvalidSignature();
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// execute accepts a list of transactions to execute as well as a Schnorr signature.
|
function inInstruction(
|
||||||
|
address coin,
|
||||||
|
uint256 amount,
|
||||||
|
bytes memory instruction
|
||||||
|
) external payable {
|
||||||
|
if (coin == address(0)) {
|
||||||
|
if (amount != msg.value) {
|
||||||
|
revert InvalidAmount();
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
(bool success, bytes memory res) =
|
||||||
|
address(coin).call(
|
||||||
|
abi.encodeWithSelector(
|
||||||
|
IERC20.transferFrom.selector,
|
||||||
|
msg.sender,
|
||||||
|
address(this),
|
||||||
|
amount
|
||||||
|
)
|
||||||
|
);
|
||||||
|
|
||||||
|
// Require there was nothing returned, which is done by some non-standard
|
||||||
|
// tokens, or that the ERC20 contract did in fact return true
|
||||||
|
bool nonStandardResOrTrue =
|
||||||
|
(res.length == 0) || abi.decode(res, (bool));
|
||||||
|
if (!(success && nonStandardResOrTrue)) {
|
||||||
|
revert FailedTransfer();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/*
|
||||||
|
Due to fee-on-transfer tokens, emitting the amount directly is frowned upon.
|
||||||
|
The amount instructed to transfer may not actually be the amount
|
||||||
|
transferred.
|
||||||
|
|
||||||
|
If we add nonReentrant to every single function which can effect the
|
||||||
|
balance, we can check the amount exactly matches. This prevents transfers of
|
||||||
|
less value than expected occurring, at least, not without an additional
|
||||||
|
transfer to top up the difference (which isn't routed through this contract
|
||||||
|
and accordingly isn't trying to artificially create events).
|
||||||
|
|
||||||
|
If we don't add nonReentrant, a transfer can be started, and then a new
|
||||||
|
transfer for the difference can follow it up (again and again until a
|
||||||
|
rounding error is reached). This contract would believe all transfers were
|
||||||
|
done in full, despite each only being done in part (except for the last
|
||||||
|
one).
|
||||||
|
|
||||||
|
Given fee-on-transfer tokens aren't intended to be supported, the only
|
||||||
|
token planned to be supported is Dai and it doesn't have any fee-on-transfer
|
||||||
|
logic, fee-on-transfer tokens aren't even able to be supported at this time,
|
||||||
|
we simply classify this entire class of tokens as non-standard
|
||||||
|
implementations which induce undefined behavior. It is the Serai network's
|
||||||
|
role not to add support for any non-standard implementations.
|
||||||
|
*/
|
||||||
|
emit InInstruction(msg.sender, coin, amount, instruction);
|
||||||
|
}
|
||||||
|
|
||||||
|
   // execute accepts a list of transactions to execute as well as a signature.
   // if signature verification passes, the given transactions are executed.
   // if signature verification fails, this function will revert.
   function execute(
     OutInstruction[] calldata transactions,
-    Signature memory sig
-  ) public {
-    if (transactions.length > 256) revert TooManyTransactions();
+    Signature calldata sig
+  ) external {
+    if (transactions.length > 256) {
+      revert TooManyTransactions();
+    }
 
-    bytes32 message = keccak256(abi.encode("execute", nonce, transactions));
+    bytes memory message =
+      abi.encode("execute", block.chainid, nonce, transactions);
+    uint256 executed_with_nonce = nonce;
     // This prevents re-entrancy from causing double spends yet does allow
     // out-of-order execution via re-entrancy
     nonce++;
-    if (!verify(KEY_PARITY, seraiKey, message, sig.c, sig.s)) revert InvalidSignature();
+
+    if (!Schnorr.verify(seraiKey, message, sig.c, sig.s)) {
+      revert InvalidSignature();
+    }
 
     uint256 successes;
-    for(uint256 i = 0; i < transactions.length; i++) {
-      (bool success, ) = transactions[i].to.call{value: transactions[i].value, gas: 200_000}(transactions[i].data);
+    for (uint256 i = 0; i < transactions.length; i++) {
+      bool success;
+
+      // If there are no calls, send to `to` the value
+      if (transactions[i].calls.length == 0) {
+        (success, ) = transactions[i].to.call{
+          value: transactions[i].value,
+          gas: 5_000
+        }("");
+      } else {
+        // If there are calls, ignore `to`. Deploy a new Sandbox and proxy the
+        // calls through that
+        //
+        // We could use a single sandbox in order to reduce gas costs, yet that
+        // risks one person creating an approval that's hooked before another
+        // user's intended action executes, in order to drain their coins
+        //
+        // While technically, that would be a flaw in the sandboxed flow, this
+        // is robust and prevents such flaws from being possible
+        //
+        // We also don't want people to set state via the Sandbox and expect it
+        // to be available in the future when anyone else could set a distinct
+        // value
+        Sandbox sandbox = new Sandbox();
+        (success, ) = address(sandbox).call{
+          value: transactions[i].value,
+          // TODO: Have the Call specify the gas up front
+          gas: 350_000
+        }(
+          abi.encodeWithSelector(
+            Sandbox.sandbox.selector,
+            transactions[i].calls
+          )
+        );
+      }
 
       assembly {
         successes := or(successes, shl(i, success))
       }
     }
-    emit Executed(nonce, message, successes);
+    emit Executed(
+      executed_with_nonce,
+      keccak256(message),
+      successes,
+      sig
+    );
   }
 }
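Since `execute` caps `transactions.length` at 256, every per-transaction result fits into a single `uint256`: the assembly `shl(i, success)` sets bit `i` for transaction `i`. A minimal Python sketch of that bitmask (hypothetical helper names, not part of the contract):

```python
def pack_successes(results):
    """Pack per-call success flags into one 256-bit word, mirroring the
    Router's `successes := or(successes, shl(i, success))` assembly."""
    assert len(results) <= 256  # the Router reverts with TooManyTransactions above this
    successes = 0
    for i, ok in enumerate(results):
        successes |= int(bool(ok)) << i
    return successes

def succeeded(successes, i):
    """Read transaction i's bit back out, as an indexer consuming the
    Executed event's `successes` word would."""
    return (successes >> i) & 1 == 1

# Transactions 0 and 2 succeed, transaction 1 fails
assert pack_successes([True, False, True]) == 0b101
```

This is why the Executed event can report the outcome of every transaction in one word, regardless of how many of the (up to 256) calls succeeded.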

coins/ethereum/contracts/Sandbox.sol (new file, 48 lines)
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: AGPLv3
+pragma solidity ^0.8.24;
+
+struct Call {
+  address to;
+  uint256 value;
+  bytes data;
+}
+
+// A minimal sandbox focused on gas efficiency.
+//
+// The first call is executed if any of the calls fail, making it a fallback.
+// All other calls are executed sequentially.
+contract Sandbox {
+  error AlreadyCalled();
+  error CallsFailed();
+
+  function sandbox(Call[] calldata calls) external payable {
+    // Prevent re-entrancy due to this executing arbitrary calls from anyone
+    // and anywhere
+    bool called;
+    assembly { called := tload(0) }
+    if (called) {
+      revert AlreadyCalled();
+    }
+    assembly { tstore(0, 1) }
+
+    // Execute the calls, starting from 1
+    for (uint256 i = 1; i < calls.length; i++) {
+      (bool success, ) =
+        calls[i].to.call{ value: calls[i].value }(calls[i].data);
+
+      // If this call failed, execute the fallback (call 0)
+      if (!success) {
+        (success, ) =
+          calls[0].to.call{ value: address(this).balance }(calls[0].data);
+        // If this call also failed, revert entirely
+        if (!success) {
+          revert CallsFailed();
+        }
+        return;
+      }
+    }
+
+    // We don't clear the re-entrancy guard as this contract should never be
+    // called again, so there's no reason to spend the effort
+  }
+}
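The Sandbox's control flow (run calls 1..n in order, divert to call 0 as a fallback on the first failure, revert if the fallback also fails) can be modeled outside the EVM. A sketch with plain Python callables standing in for the external calls (value transfer and gas accounting are omitted):

```python
def run_sandbox(calls):
    """Model of Sandbox.sandbox's control flow: calls[0] is a fallback,
    calls[1:] run in order; on the first failure the fallback runs and
    execution stops; if the fallback also fails, everything reverts."""
    executed = []
    for call in calls[1:]:
        if call():
            executed.append(call)
            continue
        # This call failed: run the fallback (call 0), then stop
        if not calls[0]():
            raise RuntimeError("CallsFailed")  # models the revert
        executed.append(calls[0])
        break
    return executed

ok, bad = (lambda: True), (lambda: False)
# The failing call triggers the fallback; the final `ok` never runs
assert len(run_sandbox([ok, ok, bad, ok])) == 2
```

Note the asymmetry: a mid-sequence failure is recoverable via the fallback, but a fallback failure aborts the whole sandboxed transaction.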

@@ -2,38 +2,43 @@
 pragma solidity ^0.8.0;
 
 // see https://github.com/noot/schnorr-verify for implementation details
-contract Schnorr {
+library Schnorr {
   // secp256k1 group order
   uint256 constant public Q =
     0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141;
 
-  error InvalidSOrA();
-  error InvalidSignature();
+  // Fixed parity for the public keys used in this contract
+  // This avoids spending a word passing the parity in a similar style to
+  // Bitcoin's Taproot
+  uint8 constant public KEY_PARITY = 27;
 
-  // parity := public key y-coord parity (27 or 28)
-  // px := public key x-coord
+  error InvalidSOrA();
+  error MalformedSignature();
+
+  // px := public key x-coord, where the public key has a parity of KEY_PARITY
   // message := 32-byte hash of the message
   // c := schnorr signature challenge
   // s := schnorr signature
   function verify(
-    uint8 parity,
     bytes32 px,
-    bytes32 message,
+    bytes memory message,
     bytes32 c,
     bytes32 s
-  ) public view returns (bool) {
-    // ecrecover = (m, v, r, s);
+  ) internal pure returns (bool) {
+    // ecrecover = (m, v, r, s) -> key
+    // We instead pass the following to obtain the nonce (not the key)
+    // Then we hash it and verify it matches the challenge
     bytes32 sa = bytes32(Q - mulmod(uint256(s), uint256(px), Q));
     bytes32 ca = bytes32(Q - mulmod(uint256(c), uint256(px), Q));
 
+    // For safety, we want each input to ecrecover (sa, px, ca) to be non-zero
+    // The ecrecover precompile checks `r` and `s` (`px` and `ca`) are non-zero
+    // That leaves us to check `sa` is non-zero
     if (sa == 0) revert InvalidSOrA();
-    // the ecrecover precompile implementation checks that the `r` and `s`
-    // inputs are non-zero (in this case, `px` and `ca`), thus we don't need to
-    // check if they're zero.
-    address R = ecrecover(sa, parity, px, ca);
-    if (R == address(0)) revert InvalidSignature();
-    return c == keccak256(
-      abi.encodePacked(R, uint8(parity), px, block.chainid, message)
-    );
+    address R = ecrecover(sa, KEY_PARITY, px, ca);
+    if (R == address(0)) revert MalformedSignature();
+
+    // Check the signature is correct by rebuilding the challenge
+    return c == keccak256(abi.encodePacked(R, px, message));
   }
 }

@@ -1,6 +1,37 @@
+use alloy_sol_types::sol;
+
 #[rustfmt::skip]
+#[allow(warnings)]
+#[allow(needless_pass_by_value)]
 #[allow(clippy::all)]
-pub(crate) mod schnorr;
+#[allow(clippy::ignored_unit_patterns)]
+#[allow(clippy::redundant_closure_for_method_calls)]
+mod erc20_container {
+  use super::*;
+  sol!("contracts/IERC20.sol");
+}
+pub use erc20_container::IERC20 as erc20;
 
 #[rustfmt::skip]
+#[allow(warnings)]
+#[allow(needless_pass_by_value)]
 #[allow(clippy::all)]
-pub(crate) mod router;
+#[allow(clippy::ignored_unit_patterns)]
+#[allow(clippy::redundant_closure_for_method_calls)]
+mod deployer_container {
+  use super::*;
+  sol!("contracts/Deployer.sol");
+}
+pub use deployer_container::Deployer as deployer;
+
+#[rustfmt::skip]
+#[allow(warnings)]
+#[allow(needless_pass_by_value)]
+#[allow(clippy::all)]
+#[allow(clippy::ignored_unit_patterns)]
+#[allow(clippy::redundant_closure_for_method_calls)]
+mod router_container {
+  use super::*;
+  sol!(Router, "artifacts/Router.abi");
+}
+pub use router_container::Router as router;

@@ -1,91 +1,185 @@
-use sha3::{Digest, Keccak256};
-
 use group::ff::PrimeField;
 use k256::{
-  elliptic_curve::{
-    bigint::ArrayEncoding, ops::Reduce, point::AffineCoordinates, sec1::ToEncodedPoint,
-  },
-  ProjectivePoint, Scalar, U256,
+  elliptic_curve::{ops::Reduce, point::AffineCoordinates, sec1::ToEncodedPoint},
+  ProjectivePoint, Scalar, U256 as KU256,
 };
+#[cfg(test)]
+use k256::{elliptic_curve::point::DecompressPoint, AffinePoint};
 
 use frost::{
   algorithm::{Hram, SchnorrSignature},
-  curve::Secp256k1,
+  curve::{Ciphersuite, Secp256k1},
 };
 
+use alloy_core::primitives::{Parity, Signature as AlloySignature};
+use alloy_consensus::{SignableTransaction, Signed, TxLegacy};
+
+use crate::abi::router::{Signature as AbiSignature};
+
 pub(crate) fn keccak256(data: &[u8]) -> [u8; 32] {
-  Keccak256::digest(data).into()
+  alloy_core::primitives::keccak256(data).into()
 }
 
-pub(crate) fn address(point: &ProjectivePoint) -> [u8; 20] {
+pub(crate) fn hash_to_scalar(data: &[u8]) -> Scalar {
+  <Scalar as Reduce<KU256>>::reduce_bytes(&keccak256(data).into())
+}
+
+pub fn address(point: &ProjectivePoint) -> [u8; 20] {
   let encoded_point = point.to_encoded_point(false);
   // Last 20 bytes of the hash of the concatenated x and y coordinates
   // We obtain the concatenated x and y coordinates via the uncompressed encoding of the point
   keccak256(&encoded_point.as_ref()[1 .. 65])[12 ..].try_into().unwrap()
 }
 
+pub(crate) fn deterministically_sign(tx: &TxLegacy) -> Signed<TxLegacy> {
+  assert!(
+    tx.chain_id.is_none(),
+    "chain ID was Some when deterministically signing a TX (causing a non-deterministic signer)"
+  );
+
+  let sig_hash = tx.signature_hash().0;
+  let mut r = hash_to_scalar(&[sig_hash.as_slice(), b"r"].concat());
+  let mut s = hash_to_scalar(&[sig_hash.as_slice(), b"s"].concat());
+  loop {
+    let r_bytes: [u8; 32] = r.to_repr().into();
+    let s_bytes: [u8; 32] = s.to_repr().into();
+    let v = Parity::NonEip155(false);
+    let signature =
+      AlloySignature::from_scalars_and_parity(r_bytes.into(), s_bytes.into(), v).unwrap();
+    let tx = tx.clone().into_signed(signature);
+    if tx.recover_signer().is_ok() {
+      return tx;
+    }
+
+    // Re-hash until valid
+    r = hash_to_scalar(r_bytes.as_ref());
+    s = hash_to_scalar(s_bytes.as_ref());
+  }
+}
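`deterministically_sign` finds a signature with no known signer by deriving `(r, s)` from the transaction's signature hash under `"r"`/`"s"` domain tags, then re-hashing until the result actually recovers a signer. A Python sketch of the same search loop, with `sha256` standing in for the keccak-based `hash_to_scalar` and a caller-supplied predicate standing in for `recover_signer().is_ok()`:

```python
import hashlib

# secp256k1 group order, as in the crate
Q = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def hash_to_scalar(data):
    # Stand-in for the crate's keccak256-based hash_to_scalar
    # (sha256 here, purely for illustration)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def deterministic_sig(sig_hash, valid):
    """Mirror of deterministically_sign's search: derive (r, s) from the
    signature hash under "r"/"s" domain tags, then re-hash until `valid`
    accepts the pair."""
    r = hash_to_scalar(sig_hash + b"r")
    s = hash_to_scalar(sig_hash + b"s")
    while not valid(r, s):
        # Re-hash until valid
        r = hash_to_scalar(r.to_bytes(32, "big"))
        s = hash_to_scalar(s.to_bytes(32, "big"))
    return r, s
```

Because anyone can recompute the same `(r, s)` from the unsigned transaction, the resulting "signer" is an account with no known private key, which is what makes the Deployer's address reproducible on any compatible chain.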
 
+/// The public key for a Schnorr-signing account.
 #[allow(non_snake_case)]
+#[derive(Clone, Copy, PartialEq, Eq, Debug)]
 pub struct PublicKey {
-  pub A: ProjectivePoint,
-  pub px: Scalar,
-  pub parity: u8,
+  pub(crate) A: ProjectivePoint,
+  pub(crate) px: Scalar,
 }
 
 impl PublicKey {
+  /// Construct a new `PublicKey`.
+  ///
+  /// This will return None if the provided point isn't eligible to be a public key (due to
+  /// bounds such as parity).
   #[allow(non_snake_case)]
   pub fn new(A: ProjectivePoint) -> Option<PublicKey> {
     let affine = A.to_affine();
-    let parity = u8::from(bool::from(affine.y_is_odd())) + 27;
-    if parity != 27 {
+    // Only allow even keys to save a word within Ethereum
+    let is_odd = bool::from(affine.y_is_odd());
+    if is_odd {
       None?;
     }
 
     let x_coord = affine.x();
-    let x_coord_scalar = <Scalar as Reduce<U256>>::reduce_bytes(&x_coord);
+    let x_coord_scalar = <Scalar as Reduce<KU256>>::reduce_bytes(&x_coord);
     // Return None if a reduction would occur
+    // Reductions would be incredibly unlikely and shouldn't be an issue, yet it's one less
+    // headache/concern to have
+    // This does ban a trivial amount of public keys
     if x_coord_scalar.to_repr() != x_coord {
       None?;
     }
 
-    Some(PublicKey { A, px: x_coord_scalar, parity })
+    Some(PublicKey { A, px: x_coord_scalar })
+  }
+
+  pub fn point(&self) -> ProjectivePoint {
+    self.A
+  }
+
+  pub(crate) fn eth_repr(&self) -> [u8; 32] {
+    self.px.to_repr().into()
+  }
+
+  #[cfg(test)]
+  pub(crate) fn from_eth_repr(repr: [u8; 32]) -> Option<Self> {
+    #[allow(non_snake_case)]
+    let A = Option::<AffinePoint>::from(AffinePoint::decompress(&repr.into(), 0.into()))?.into();
+    Option::from(Scalar::from_repr(repr.into())).map(|px| PublicKey { A, px })
   }
 }
 
+/// The HRAm to use for the Schnorr contract.
 #[derive(Clone, Default)]
 pub struct EthereumHram {}
 impl Hram<Secp256k1> for EthereumHram {
   #[allow(non_snake_case)]
   fn hram(R: &ProjectivePoint, A: &ProjectivePoint, m: &[u8]) -> Scalar {
-    let a_encoded_point = A.to_encoded_point(true);
-    let mut a_encoded = a_encoded_point.as_ref().to_owned();
-    a_encoded[0] += 25; // Ethereum uses 27/28 for point parity
-    assert!((a_encoded[0] == 27) || (a_encoded[0] == 28));
+    let x_coord = A.to_affine().x();
     let mut data = address(R).to_vec();
-    data.append(&mut a_encoded);
+    data.extend(x_coord.as_slice());
     data.extend(m);
-    Scalar::reduce(U256::from_be_slice(&keccak256(&data)))
+    <Scalar as Reduce<KU256>>::reduce_bytes(&keccak256(&data).into())
   }
 }
 
+/// A signature for the Schnorr contract.
+#[derive(Clone, Copy, PartialEq, Eq, Debug)]
 pub struct Signature {
   pub(crate) c: Scalar,
   pub(crate) s: Scalar,
 }
 impl Signature {
+  pub fn verify(&self, public_key: &PublicKey, message: &[u8]) -> bool {
+    #[allow(non_snake_case)]
+    let R = (Secp256k1::generator() * self.s) - (public_key.A * self.c);
+    EthereumHram::hram(&R, &public_key.A, message) == self.c
+  }
+
+  /// Construct a new `Signature`.
+  ///
+  /// This will return None if the signature is invalid.
   pub fn new(
     public_key: &PublicKey,
-    chain_id: U256,
-    m: &[u8],
+    message: &[u8],
     signature: SchnorrSignature<Secp256k1>,
   ) -> Option<Signature> {
-    let c = EthereumHram::hram(
-      &signature.R,
-      &public_key.A,
-      &[chain_id.to_be_byte_array().as_slice(), &keccak256(m)].concat(),
-    );
+    let c = EthereumHram::hram(&signature.R, &public_key.A, message);
     if !signature.verify(public_key.A, c) {
       None?;
     }
-    Some(Signature { c, s: signature.s })
+
+    let res = Signature { c, s: signature.s };
+    assert!(res.verify(public_key, message));
+    Some(res)
+  }
+
+  pub fn c(&self) -> Scalar {
+    self.c
+  }
+  pub fn s(&self) -> Scalar {
+    self.s
+  }
+
+  pub fn to_bytes(&self) -> [u8; 64] {
+    let mut res = [0; 64];
+    res[.. 32].copy_from_slice(self.c.to_repr().as_ref());
+    res[32 ..].copy_from_slice(self.s.to_repr().as_ref());
+    res
+  }
+
+  pub fn from_bytes(bytes: [u8; 64]) -> std::io::Result<Self> {
+    let mut reader = bytes.as_slice();
+    let c = Secp256k1::read_F(&mut reader)?;
+    let s = Secp256k1::read_F(&mut reader)?;
+    Ok(Signature { c, s })
+  }
+}
+impl From<&Signature> for AbiSignature {
+  fn from(sig: &Signature) -> AbiSignature {
+    let c: [u8; 32] = sig.c.to_repr().into();
+    let s: [u8; 32] = sig.s.to_repr().into();
+    AbiSignature { c: c.into(), s: s.into() }
   }
 }

coins/ethereum/src/deployer.rs (new file, 119 lines)
@@ -0,0 +1,119 @@
+use std::sync::Arc;
+
+use alloy_core::primitives::{hex::FromHex, Address, B256, U256, Bytes, TxKind};
+use alloy_consensus::{Signed, TxLegacy};
+
+use alloy_sol_types::{SolCall, SolEvent};
+
+use alloy_rpc_types::{BlockNumberOrTag, Filter};
+use alloy_simple_request_transport::SimpleRequest;
+use alloy_provider::{Provider, RootProvider};
+
+use crate::{
+  Error,
+  crypto::{self, keccak256, PublicKey},
+  router::Router,
+};
+pub use crate::abi::deployer as abi;
+
+/// The Deployer contract for the Router contract.
+///
+/// This Deployer has a deterministic address, letting it be immediately identified on any
+/// compatible chain. It then supports retrieving the Router contract's address (which isn't
+/// deterministic) using a single log query.
+#[derive(Clone, Debug)]
+pub struct Deployer;
+impl Deployer {
+  /// Obtain the transaction to deploy this contract, already signed.
+  ///
+  /// The account this transaction is sent from (which is populated in `from`) must be sufficiently
+  /// funded for this transaction to be submitted. This account has no known private key to anyone,
+  /// so ETH sent can be neither misappropriated nor returned.
+  pub fn deployment_tx() -> Signed<TxLegacy> {
+    let bytecode = include_str!("../artifacts/Deployer.bin");
+    let bytecode =
+      Bytes::from_hex(bytecode).expect("compiled-in Deployer bytecode wasn't valid hex");
+
+    let tx = TxLegacy {
+      chain_id: None,
+      nonce: 0,
+      gas_price: 100_000_000_000u128,
+      // TODO: Use a more accurate gas limit
+      gas_limit: 1_000_000u128,
+      to: TxKind::Create,
+      value: U256::ZERO,
+      input: bytecode,
+    };
+
+    crypto::deterministically_sign(&tx)
+  }
+
+  /// Obtain the deterministic address for this contract.
+  pub fn address() -> [u8; 20] {
+    let deployer_deployer =
+      Self::deployment_tx().recover_signer().expect("deployment_tx didn't have a valid signature");
+    **Address::create(&deployer_deployer, 0)
+  }
+
+  /// Construct a new view of the `Deployer`.
+  pub async fn new(provider: Arc<RootProvider<SimpleRequest>>) -> Result<Option<Self>, Error> {
+    let address = Self::address();
+    #[cfg(not(test))]
+    let required_block = BlockNumberOrTag::Finalized;
+    #[cfg(test)]
+    let required_block = BlockNumberOrTag::Latest;
+    let code = provider
+      .get_code_at(address.into(), required_block.into())
+      .await
+      .map_err(|_| Error::ConnectionError)?;
+    // Contract has yet to be deployed
+    if code.is_empty() {
+      return Ok(None);
+    }
+    Ok(Some(Self))
+  }
+
+  /// Yield the `ContractCall` necessary to deploy the Router.
+  pub fn deploy_router(&self, key: &PublicKey) -> TxLegacy {
+    TxLegacy {
+      to: TxKind::Call(Self::address().into()),
+      input: abi::deployCall::new((Router::init_code(key).into(),)).abi_encode().into(),
+      gas_limit: 1_000_000,
+      ..Default::default()
+    }
+  }
+
+  /// Find the first Router deployed with the specified key as its first key.
+  ///
+  /// This is the Router Serai will use, and is the only way to construct a `Router`.
+  pub async fn find_router(
+    &self,
+    provider: Arc<RootProvider<SimpleRequest>>,
+    key: &PublicKey,
+  ) -> Result<Option<Router>, Error> {
+    let init_code = Router::init_code(key);
+    let init_code_hash = keccak256(&init_code);
+
+    #[cfg(not(test))]
+    let to_block = BlockNumberOrTag::Finalized;
+    #[cfg(test)]
+    let to_block = BlockNumberOrTag::Latest;
+
+    // Find the first log using this init code (where the init code is binding to the key)
+    let filter =
+      Filter::new().from_block(0).to_block(to_block).address(Address::from(Self::address()));
+    let filter = filter.event_signature(abi::Deployment::SIGNATURE_HASH);
+    let filter = filter.topic1(B256::from(init_code_hash));
+    let logs = provider.get_logs(&filter).await.map_err(|_| Error::ConnectionError)?;
+
+    let Some(first_log) = logs.first() else { return Ok(None) };
+    let router = first_log
+      .log_decode::<abi::Deployment>()
+      .map_err(|_| Error::ConnectionError)?
+      .inner
+      .data
+      .created;
+
+    Ok(Some(Router::new(provider, router)))
+  }
+}

coins/ethereum/src/erc20.rs (new file, 118 lines)
@@ -0,0 +1,118 @@
+use std::{sync::Arc, collections::HashSet};
+
+use alloy_core::primitives::{Address, B256, U256};
+
+use alloy_sol_types::{SolInterface, SolEvent};
+
+use alloy_rpc_types::{BlockNumberOrTag, Filter};
+use alloy_simple_request_transport::SimpleRequest;
+use alloy_provider::{Provider, RootProvider};
+
+use crate::Error;
+pub use crate::abi::erc20 as abi;
+use abi::{IERC20Calls, Transfer, transferCall, transferFromCall};
+
+#[derive(Clone, Debug)]
+pub struct TopLevelErc20Transfer {
+  pub id: [u8; 32],
+  pub from: [u8; 20],
+  pub amount: U256,
+  pub data: Vec<u8>,
+}
+
+/// A view for an ERC20 contract.
+#[derive(Clone, Debug)]
+pub struct ERC20(Arc<RootProvider<SimpleRequest>>, Address);
+impl ERC20 {
+  /// Construct a new view of the specified ERC20 contract.
+  ///
+  /// This checks a contract is deployed at that address yet does not check the contract is
+  /// actually an ERC20.
+  pub async fn new(
+    provider: Arc<RootProvider<SimpleRequest>>,
+    address: [u8; 20],
+  ) -> Result<Option<Self>, Error> {
+    let code = provider
+      .get_code_at(address.into(), BlockNumberOrTag::Finalized.into())
+      .await
+      .map_err(|_| Error::ConnectionError)?;
+    // Contract has yet to be deployed
+    if code.is_empty() {
+      return Ok(None);
+    }
+    Ok(Some(Self(provider.clone(), Address::from(&address))))
+  }
+
+  pub async fn top_level_transfers(
+    &self,
+    block: u64,
+    to: [u8; 20],
+  ) -> Result<Vec<TopLevelErc20Transfer>, Error> {
+    let filter = Filter::new().from_block(block).to_block(block).address(self.1);
+    let filter = filter.event_signature(Transfer::SIGNATURE_HASH);
+    let mut to_topic = [0; 32];
+    to_topic[12 ..].copy_from_slice(&to);
+    let filter = filter.topic2(B256::from(to_topic));
+    let logs = self.0.get_logs(&filter).await.map_err(|_| Error::ConnectionError)?;
+
+    let mut handled = HashSet::new();
+
+    let mut top_level_transfers = vec![];
+    for log in logs {
+      // Double check the address which emitted this log
+      if log.address() != self.1 {
+        Err(Error::ConnectionError)?;
+      }
+
+      let tx_id = log.transaction_hash.ok_or(Error::ConnectionError)?;
+      let tx = self.0.get_transaction_by_hash(tx_id).await.map_err(|_| Error::ConnectionError)?;
+
+      // If this is a top-level call...
+      if tx.to == Some(self.1) {
+        // And we recognize the call...
+        // Don't validate the encoding as this can't be re-encoded to an identical bytestring due
+        // to the InInstruction appended
+        if let Ok(call) = IERC20Calls::abi_decode(&tx.input, false) {
+          // Extract the top-level call's from/to/value
+          let (from, call_to, value) = match call {
+            IERC20Calls::transfer(transferCall { to: call_to, value }) => (tx.from, call_to, value),
+            IERC20Calls::transferFrom(transferFromCall { from, to: call_to, value }) => {
+              (from, call_to, value)
+            }
+            // Treat any other function selectors as unrecognized
+            _ => continue,
+          };
+
+          let log = log.log_decode::<Transfer>().map_err(|_| Error::ConnectionError)?.inner.data;
+
+          // Ensure the top-level transfer is equivalent, and this presumably isn't a log for an
+          // internal transfer
+          if (log.from != from) || (call_to != to) || (value != log.value) {
+            continue;
+          }
+
+          // Now that the top-level transfer is confirmed to be equivalent to the log, ensure it's
+          // the only log we handle
+          if handled.contains(&tx_id) {
+            continue;
+          }
+          handled.insert(tx_id);
+
+          // Read the data appended after
+          let encoded = call.abi_encode();
+          let data = tx.input.as_ref()[encoded.len() ..].to_vec();
+
+          // Push the transfer
+          top_level_transfers.push(TopLevelErc20Transfer {
+            // Since we'll only handle one log for this TX, set the ID to the TX ID
+            id: *tx_id,
+            from: *log.from.0,
+            amount: log.value,
+            data,
+          });
+        }
+      }
+    }
+    Ok(top_level_transfers)
+  }
+}
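`top_level_transfers` only accepts a Transfer log when the transaction's own top-level `transfer`/`transferFrom` call matches the log's from/to/value, and takes at most one log per transaction. A Python sketch of that filtering, with plain dicts standing in for logs and transactions (`get_tx` is a hypothetical lookup, and the calls arrive pre-decoded rather than ABI-encoded):

```python
def top_level_transfers(logs, get_tx, token, to):
    """Sketch of ERC20::top_level_transfers' filtering: keep a Transfer log
    only when it matches the transaction's own top-level transfer call, and
    take at most one log per transaction."""
    handled = set()
    out = []
    for log in logs:
        tx = get_tx(log["tx_id"])
        if tx["to"] != token:
            continue  # not a top-level call to the token
        call = tx.get("call")  # hypothetical pre-decoded transfer/transferFrom
        if call is None:
            continue  # unrecognized selector
        if (call["from"], call["to"], call["value"]) != (log["from"], to, log["value"]):
            continue  # presumably a log for an internal transfer
        if log["tx_id"] in handled:
            continue  # only handle one log per transaction
        handled.add(log["tx_id"])
        out.append({"id": log["tx_id"], "from": log["from"],
                    "amount": log["value"], "data": tx["data"]})
    return out
```

Requiring the log to match the transaction's own calldata is what lets the scanner ignore internal transfers and duplicate events a malicious token contract might emit.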

@@ -1,16 +1,30 @@
 use thiserror::Error;
 
+pub use alloy_core;
+pub use alloy_consensus;
+
+pub use alloy_rpc_types;
+pub use alloy_simple_request_transport;
+pub use alloy_rpc_client;
+pub use alloy_provider;
+
 pub mod crypto;
 
 pub(crate) mod abi;
-pub mod schnorr;
+
+pub mod erc20;
+pub mod deployer;
 pub mod router;
 
+pub mod machine;
+
 #[cfg(test)]
 mod tests;
 
-#[derive(Error, Debug)]
+#[derive(Clone, Copy, PartialEq, Eq, Debug, Error)]
 pub enum Error {
   #[error("failed to verify Schnorr signature")]
   InvalidSignature,
+  #[error("couldn't make call/send TX")]
+  ConnectionError,
 }

coins/ethereum/src/machine.rs (new file, 414 lines)
@@ -0,0 +1,414 @@
+use std::{
+  io::{self, Read},
+  collections::HashMap,
+};
+
+use rand_core::{RngCore, CryptoRng};
+
+use transcript::{Transcript, RecommendedTranscript};
+
+use group::GroupEncoding;
+use frost::{
+  curve::{Ciphersuite, Secp256k1},
+  Participant, ThresholdKeys, FrostError,
+  algorithm::Schnorr,
+  sign::*,
+};
+
+use alloy_core::primitives::U256;
+
+use crate::{
+  crypto::{PublicKey, EthereumHram, Signature},
+  router::{
+    abi::{Call as AbiCall, OutInstruction as AbiOutInstruction},
+    Router,
+  },
+};
+
+#[derive(Clone, PartialEq, Eq, Debug)]
+pub struct Call {
+  pub to: [u8; 20],
+  pub value: U256,
+  pub data: Vec<u8>,
+}
+impl Call {
+  pub fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
+    let mut to = [0; 20];
+    reader.read_exact(&mut to)?;
+
+    let value = {
+      let mut value_bytes = [0; 32];
+      reader.read_exact(&mut value_bytes)?;
+      U256::from_le_slice(&value_bytes)
+    };
+
+    let mut data_len = {
+      let mut data_len = [0; 4];
+      reader.read_exact(&mut data_len)?;
+      usize::try_from(u32::from_le_bytes(data_len)).expect("u32 couldn't fit within a usize")
+    };
+
+    // A valid DoS would be to claim 4 GB of data is present when only 4 bytes are
+    // We read this in 1 KB chunks to only read data actually present (with a max DoS of 1 KB)
+    let mut data = vec![];
+    while data_len > 0 {
+      let chunk_len = data_len.min(1024);
+      let mut chunk = vec![0; chunk_len];
+      reader.read_exact(&mut chunk)?;
+      data.extend(&chunk);
+      data_len -= chunk_len;
+    }
+
+    Ok(Call { to, value, data })
+  }
+
+  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
+    writer.write_all(&self.to)?;
+    writer.write_all(&self.value.as_le_bytes())?;
+
+    let data_len = u32::try_from(self.data.len())
+      .map_err(|_| io::Error::other("call data length exceeded 2**32"))?;
+    writer.write_all(&data_len.to_le_bytes())?;
+    writer.write_all(&self.data)
+  }
+}
||||||
|
impl From<Call> for AbiCall {
|
||||||
|
fn from(call: Call) -> AbiCall {
|
||||||
|
AbiCall { to: call.to.into(), value: call.value, data: call.data.into() }
|
||||||
|
}
|
||||||
|
}
|
||||||
|
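The chunked read in `Call::read` above bounds the damage of a forged length prefix: allocation only ever runs 1 KB ahead of bytes actually present in the stream. A minimal std-only sketch of the same pattern (the `read_len_prefixed` helper name is hypothetical, not part of the crate):

```rust
use std::io::{self, Read};

// Read a u32 little-endian length prefix, then the payload in 1 KB chunks,
// so a forged huge length can only over-allocate by at most 1 KB before erroring.
fn read_len_prefixed<R: Read>(reader: &mut R) -> io::Result<Vec<u8>> {
    let mut len_bytes = [0u8; 4];
    reader.read_exact(&mut len_bytes)?;
    let mut remaining = u32::from_le_bytes(len_bytes) as usize;

    let mut data = Vec::new();
    while remaining > 0 {
        let chunk_len = remaining.min(1024);
        let mut chunk = vec![0u8; chunk_len];
        reader.read_exact(&mut chunk)?;
        data.extend_from_slice(&chunk);
        remaining -= chunk_len;
    }
    Ok(data)
}

fn main() {
    // Well-formed: a 5-byte payload round-trips
    let mut wire = 5u32.to_le_bytes().to_vec();
    wire.extend_from_slice(b"hello");
    assert_eq!(read_len_prefixed(&mut wire.as_slice()).unwrap(), b"hello");

    // Forged length: claims ~4 GB yet provides no bytes; fails on the first 1 KB chunk
    assert!(read_len_prefixed(&mut &u32::MAX.to_le_bytes()[..]).is_err());
}
```

The same guard applies anywhere an attacker controls the serialized bytes, which is why the crate uses it for `Call::data`.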
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum OutInstructionTarget {
  Direct([u8; 20]),
  Calls(Vec<Call>),
}
impl OutInstructionTarget {
  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
    let mut kind = [0xff];
    reader.read_exact(&mut kind)?;

    match kind[0] {
      0 => {
        let mut addr = [0; 20];
        reader.read_exact(&mut addr)?;
        Ok(OutInstructionTarget::Direct(addr))
      }
      1 => {
        let mut calls_len = [0; 4];
        reader.read_exact(&mut calls_len)?;
        let calls_len = u32::from_le_bytes(calls_len);

        let mut calls = vec![];
        for _ in 0 .. calls_len {
          calls.push(Call::read(reader)?);
        }
        Ok(OutInstructionTarget::Calls(calls))
      }
      _ => Err(io::Error::other("unrecognized OutInstructionTarget"))?,
    }
  }

  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
    match self {
      OutInstructionTarget::Direct(addr) => {
        writer.write_all(&[0])?;
        writer.write_all(addr)?;
      }
      OutInstructionTarget::Calls(calls) => {
        writer.write_all(&[1])?;
        let call_len = u32::try_from(calls.len())
          .map_err(|_| io::Error::other("amount of calls exceeded 2**32"))?;
        writer.write_all(&call_len.to_le_bytes())?;
        for call in calls {
          call.write(writer)?;
        }
      }
    }
    Ok(())
  }
}

#[derive(Clone, PartialEq, Eq, Debug)]
pub struct OutInstruction {
  pub target: OutInstructionTarget,
  pub value: U256,
}
impl OutInstruction {
  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
    let target = OutInstructionTarget::read(reader)?;

    let value = {
      let mut value_bytes = [0; 32];
      reader.read_exact(&mut value_bytes)?;
      U256::from_le_slice(&value_bytes)
    };

    Ok(OutInstruction { target, value })
  }
  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
    self.target.write(writer)?;
    writer.write_all(&self.value.as_le_bytes())
  }
}
impl From<OutInstruction> for AbiOutInstruction {
  fn from(instruction: OutInstruction) -> AbiOutInstruction {
    match instruction.target {
      OutInstructionTarget::Direct(addr) => {
        AbiOutInstruction { to: addr.into(), calls: vec![], value: instruction.value }
      }
      OutInstructionTarget::Calls(calls) => AbiOutInstruction {
        to: [0; 20].into(),
        calls: calls.into_iter().map(Into::into).collect(),
        value: instruction.value,
      },
    }
  }
}

#[derive(Clone, PartialEq, Eq, Debug)]
pub enum RouterCommand {
  UpdateSeraiKey { chain_id: U256, nonce: U256, key: PublicKey },
  Execute { chain_id: U256, nonce: U256, outs: Vec<OutInstruction> },
}

impl RouterCommand {
  pub fn msg(&self) -> Vec<u8> {
    match self {
      RouterCommand::UpdateSeraiKey { chain_id, nonce, key } => {
        Router::update_serai_key_message(*chain_id, *nonce, key)
      }
      RouterCommand::Execute { chain_id, nonce, outs } => Router::execute_message(
        *chain_id,
        *nonce,
        outs.iter().map(|out| out.clone().into()).collect(),
      ),
    }
  }

  pub fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
    let mut kind = [0xff];
    reader.read_exact(&mut kind)?;

    match kind[0] {
      0 => {
        let mut chain_id = [0; 32];
        reader.read_exact(&mut chain_id)?;

        let mut nonce = [0; 32];
        reader.read_exact(&mut nonce)?;

        let key = PublicKey::new(Secp256k1::read_G(reader)?)
          .ok_or(io::Error::other("key for RouterCommand doesn't have an eth representation"))?;
        Ok(RouterCommand::UpdateSeraiKey {
          chain_id: U256::from_le_slice(&chain_id),
          nonce: U256::from_le_slice(&nonce),
          key,
        })
      }
      1 => {
        let mut chain_id = [0; 32];
        reader.read_exact(&mut chain_id)?;
        let chain_id = U256::from_le_slice(&chain_id);

        let mut nonce = [0; 32];
        reader.read_exact(&mut nonce)?;
        let nonce = U256::from_le_slice(&nonce);

        let mut outs_len = [0; 4];
        reader.read_exact(&mut outs_len)?;
        let outs_len = u32::from_le_bytes(outs_len);

        let mut outs = vec![];
        for _ in 0 .. outs_len {
          outs.push(OutInstruction::read(reader)?);
        }

        Ok(RouterCommand::Execute { chain_id, nonce, outs })
      }
      _ => Err(io::Error::other("reading unknown type of RouterCommand"))?,
    }
  }

  pub fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
    match self {
      RouterCommand::UpdateSeraiKey { chain_id, nonce, key } => {
        writer.write_all(&[0])?;
        writer.write_all(&chain_id.as_le_bytes())?;
        writer.write_all(&nonce.as_le_bytes())?;
        writer.write_all(&key.A.to_bytes())
      }
      RouterCommand::Execute { chain_id, nonce, outs } => {
        writer.write_all(&[1])?;
        writer.write_all(&chain_id.as_le_bytes())?;
        writer.write_all(&nonce.as_le_bytes())?;
        writer.write_all(&u32::try_from(outs.len()).unwrap().to_le_bytes())?;
        for out in outs {
          out.write(writer)?;
        }
        Ok(())
      }
    }
  }

  pub fn serialize(&self) -> Vec<u8> {
    let mut res = vec![];
    self.write(&mut res).unwrap();
    res
  }
}
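`RouterCommand`'s serialization above is a tagged encoding: one kind byte, then fixed-width little-endian fields. A std-only sketch of the `UpdateSeraiKey` layout, assuming (as with secp256k1 here) a 33-byte SEC1-compressed key; the `encode_update_serai_key` helper is hypothetical:

```rust
// Sketch of the UpdateSeraiKey wire layout:
// kind byte (0) || chain_id (32-byte LE) || nonce (32-byte LE) || key (33-byte SEC1)
fn encode_update_serai_key(chain_id: [u8; 32], nonce: [u8; 32], key: [u8; 33]) -> Vec<u8> {
    let mut buf = Vec::with_capacity(1 + 32 + 32 + 33);
    buf.push(0); // kind byte: 0 = UpdateSeraiKey, 1 = Execute
    buf.extend_from_slice(&chain_id); // U256, little-endian
    buf.extend_from_slice(&nonce);    // U256, little-endian
    buf.extend_from_slice(&key);      // SEC1-compressed point
    buf
}

fn main() {
    let encoded = encode_update_serai_key([0; 32], [0; 32], [2; 33]);
    assert_eq!(encoded.len(), 98); // 1 + 32 + 32 + 33
    assert_eq!(encoded[0], 0);
}
```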
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct SignedRouterCommand {
  command: RouterCommand,
  signature: Signature,
}

impl SignedRouterCommand {
  pub fn new(key: &PublicKey, command: RouterCommand, signature: &[u8; 64]) -> Option<Self> {
    let c = Secp256k1::read_F(&mut &signature[.. 32]).ok()?;
    let s = Secp256k1::read_F(&mut &signature[32 ..]).ok()?;
    let signature = Signature { c, s };

    if !signature.verify(key, &command.msg()) {
      None?
    }
    Some(SignedRouterCommand { command, signature })
  }

  pub fn command(&self) -> &RouterCommand {
    &self.command
  }

  pub fn signature(&self) -> &Signature {
    &self.signature
  }

  pub fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
    let command = RouterCommand::read(reader)?;

    let mut sig = [0; 64];
    reader.read_exact(&mut sig)?;
    let signature = Signature::from_bytes(sig)?;

    Ok(SignedRouterCommand { command, signature })
  }

  pub fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
    self.command.write(writer)?;
    writer.write_all(&self.signature.to_bytes())
  }
}

pub struct RouterCommandMachine {
  key: PublicKey,
  command: RouterCommand,
  machine: AlgorithmMachine<Secp256k1, Schnorr<Secp256k1, RecommendedTranscript, EthereumHram>>,
}

impl RouterCommandMachine {
  pub fn new(keys: ThresholdKeys<Secp256k1>, command: RouterCommand) -> Option<Self> {
    // The Schnorr algorithm should be fine without this, even when using the IETF variant
    // If this is better and more comprehensive, we should do it, even if not necessary
    let mut transcript = RecommendedTranscript::new(b"ethereum-serai RouterCommandMachine v0.1");
    let key = keys.group_key();
    transcript.append_message(b"key", key.to_bytes());
    transcript.append_message(b"command", command.serialize());

    Some(Self {
      key: PublicKey::new(key)?,
      command,
      machine: AlgorithmMachine::new(Schnorr::new(transcript), keys),
    })
  }
}

impl PreprocessMachine for RouterCommandMachine {
  type Preprocess = Preprocess<Secp256k1, ()>;
  type Signature = SignedRouterCommand;
  type SignMachine = RouterCommandSignMachine;

  fn preprocess<R: RngCore + CryptoRng>(
    self,
    rng: &mut R,
  ) -> (Self::SignMachine, Self::Preprocess) {
    let (machine, preprocess) = self.machine.preprocess(rng);

    (RouterCommandSignMachine { key: self.key, command: self.command, machine }, preprocess)
  }
}

pub struct RouterCommandSignMachine {
  key: PublicKey,
  command: RouterCommand,
  machine: AlgorithmSignMachine<Secp256k1, Schnorr<Secp256k1, RecommendedTranscript, EthereumHram>>,
}

impl SignMachine<SignedRouterCommand> for RouterCommandSignMachine {
  type Params = ();
  type Keys = ThresholdKeys<Secp256k1>;
  type Preprocess = Preprocess<Secp256k1, ()>;
  type SignatureShare = SignatureShare<Secp256k1>;
  type SignatureMachine = RouterCommandSignatureMachine;

  fn cache(self) -> CachedPreprocess {
    unimplemented!(
      "RouterCommand machines don't support caching their preprocesses due to {}",
      "being already bound to a specific command"
    );
  }

  fn from_cache(
    (): (),
    _: ThresholdKeys<Secp256k1>,
    _: CachedPreprocess,
  ) -> (Self, Self::Preprocess) {
    unimplemented!(
      "RouterCommand machines don't support caching their preprocesses due to {}",
      "being already bound to a specific command"
    );
  }

  fn read_preprocess<R: Read>(&self, reader: &mut R) -> io::Result<Self::Preprocess> {
    self.machine.read_preprocess(reader)
  }

  fn sign(
    self,
    commitments: HashMap<Participant, Self::Preprocess>,
    msg: &[u8],
  ) -> Result<(RouterCommandSignatureMachine, Self::SignatureShare), FrostError> {
    if !msg.is_empty() {
      panic!("message was passed to a RouterCommand machine when it generates its own");
    }

    let (machine, share) = self.machine.sign(commitments, &self.command.msg())?;

    Ok((RouterCommandSignatureMachine { key: self.key, command: self.command, machine }, share))
  }
}

pub struct RouterCommandSignatureMachine {
  key: PublicKey,
  command: RouterCommand,
  machine:
    AlgorithmSignatureMachine<Secp256k1, Schnorr<Secp256k1, RecommendedTranscript, EthereumHram>>,
}

impl SignatureMachine<SignedRouterCommand> for RouterCommandSignatureMachine {
  type SignatureShare = SignatureShare<Secp256k1>;

  fn read_share<R: Read>(&self, reader: &mut R) -> io::Result<Self::SignatureShare> {
    self.machine.read_share(reader)
  }

  fn complete(
    self,
    shares: HashMap<Participant, Self::SignatureShare>,
  ) -> Result<SignedRouterCommand, FrostError> {
    let sig = self.machine.complete(shares)?;
    let signature = Signature::new(&self.key, &self.command.msg(), sig)
      .expect("machine produced an invalid signature");
    Ok(SignedRouterCommand { command: self.command, signature })
  }
}

coins/ethereum/src/router.rs
@ -1,30 +1,426 @@
use std::{sync::Arc, io, collections::HashSet};

use k256::{
  elliptic_curve::{group::GroupEncoding, sec1},
  ProjectivePoint,
};

use alloy_core::primitives::{hex::FromHex, Address, U256, Bytes, TxKind};
#[cfg(test)]
use alloy_core::primitives::B256;
use alloy_consensus::TxLegacy;

use alloy_sol_types::{SolValue, SolConstructor, SolCall, SolEvent};

use alloy_rpc_types::Filter;
#[cfg(test)]
use alloy_rpc_types::{BlockId, TransactionRequest, TransactionInput};
use alloy_simple_request_transport::SimpleRequest;
use alloy_provider::{Provider, RootProvider};

pub use crate::{
  Error,
  crypto::{PublicKey, Signature},
  abi::{erc20::Transfer, router as abi},
};
use abi::{SeraiKeyUpdated, InInstruction as InInstructionEvent, Executed as ExecutedEvent};

#[derive(Clone, PartialEq, Eq, Debug)]
pub enum Coin {
  Ether,
  Erc20([u8; 20]),
}

impl Coin {
  pub fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
    let mut kind = [0xff];
    reader.read_exact(&mut kind)?;
    Ok(match kind[0] {
      0 => Coin::Ether,
      1 => {
        let mut address = [0; 20];
        reader.read_exact(&mut address)?;
        Coin::Erc20(address)
      }
      _ => Err(io::Error::other("unrecognized Coin type"))?,
    })
  }

  pub fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
    match self {
      Coin::Ether => writer.write_all(&[0]),
      Coin::Erc20(token) => {
        writer.write_all(&[1])?;
        writer.write_all(token)
      }
    }
  }
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct InInstruction {
  pub id: ([u8; 32], u64),
  pub from: [u8; 20],
  pub coin: Coin,
  pub amount: U256,
  pub data: Vec<u8>,
  pub key_at_end_of_block: ProjectivePoint,
}

impl InInstruction {
  pub fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
    let id = {
      let mut id_hash = [0; 32];
      reader.read_exact(&mut id_hash)?;
      let mut id_pos = [0; 8];
      reader.read_exact(&mut id_pos)?;
      let id_pos = u64::from_le_bytes(id_pos);
      (id_hash, id_pos)
    };

    let mut from = [0; 20];
    reader.read_exact(&mut from)?;

    let coin = Coin::read(reader)?;
    let mut amount = [0; 32];
    reader.read_exact(&mut amount)?;
    let amount = U256::from_le_slice(&amount);

    let mut data_len = [0; 4];
    reader.read_exact(&mut data_len)?;
    let data_len = usize::try_from(u32::from_le_bytes(data_len))
      .map_err(|_| io::Error::other("InInstruction data exceeded 2**32 in length"))?;
    let mut data = vec![0; data_len];
    reader.read_exact(&mut data)?;

    let mut key_at_end_of_block = <ProjectivePoint as GroupEncoding>::Repr::default();
    reader.read_exact(&mut key_at_end_of_block)?;
    let key_at_end_of_block = Option::from(ProjectivePoint::from_bytes(&key_at_end_of_block))
      .ok_or(io::Error::other("InInstruction had key at end of block which wasn't valid"))?;

    Ok(InInstruction { id, from, coin, amount, data, key_at_end_of_block })
  }

  pub fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
    writer.write_all(&self.id.0)?;
    writer.write_all(&self.id.1.to_le_bytes())?;

    writer.write_all(&self.from)?;

    self.coin.write(writer)?;
    writer.write_all(&self.amount.as_le_bytes())?;

    writer.write_all(
      &u32::try_from(self.data.len())
        .map_err(|_| {
          io::Error::other("InInstruction being written had data exceeding 2**32 in length")
        })?
        .to_le_bytes(),
    )?;
    writer.write_all(&self.data)?;

    writer.write_all(&self.key_at_end_of_block.to_bytes())
  }
}
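The `InInstruction` wire format above is fixed-width except for the length-prefixed `data`: 32-byte block hash, 8-byte LE log position, 20-byte sender, coin tag, 32-byte LE amount, 4-byte LE data length plus the data, and a 33-byte compressed key. A sketch of the size accounting, assuming the single-byte `Ether` coin tag (the `in_instruction_len` helper is hypothetical):

```rust
// Sketch: serialized size of an InInstruction carrying Ether (1-byte coin tag).
// id hash (32) + id pos (8) + from (20) + coin tag (1) + amount (32)
// + data length prefix (4) + data + compressed key (33)
fn in_instruction_len(data_len: usize) -> usize {
    32 + 8 + 20 + 1 + 32 + 4 + data_len + 33
}

fn main() {
    assert_eq!(in_instruction_len(0), 130);
    assert_eq!(in_instruction_len(5), 135);
}
```

An `Erc20` instruction adds the 20-byte token address after the tag byte.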
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Executed {
  pub tx_id: [u8; 32],
  pub nonce: u64,
  pub signature: [u8; 64],
}

/// The contract Serai uses to manage its state.
#[derive(Clone, Debug)]
pub struct Router(Arc<RootProvider<SimpleRequest>>, Address);
impl Router {
  pub(crate) fn code() -> Vec<u8> {
    let bytecode = include_str!("../artifacts/Router.bin");
    Bytes::from_hex(bytecode).expect("compiled-in Router bytecode wasn't valid hex").to_vec()
  }

  pub(crate) fn init_code(key: &PublicKey) -> Vec<u8> {
    let mut bytecode = Self::code();
    // Append the constructor arguments
    bytecode.extend((abi::constructorCall { _seraiKey: key.eth_repr().into() }).abi_encode());
    bytecode
  }

  // This isn't pub in order to force users to use `Deployer::find_router`.
  pub(crate) fn new(provider: Arc<RootProvider<SimpleRequest>>, address: Address) -> Self {
    Self(provider, address)
  }

  pub fn address(&self) -> [u8; 20] {
    **self.1
  }
  /// Get the key for Serai at the specified block.
  #[cfg(test)]
  pub async fn serai_key(&self, at: [u8; 32]) -> Result<PublicKey, Error> {
    let call = TransactionRequest::default()
      .to(Some(self.1))
      .input(TransactionInput::new(abi::seraiKeyCall::new(()).abi_encode().into()));
    let bytes = self
      .0
      .call(&call, Some(BlockId::Hash(B256::from(at).into())))
      .await
      .map_err(|_| Error::ConnectionError)?;
    let res =
      abi::seraiKeyCall::abi_decode_returns(&bytes, true).map_err(|_| Error::ConnectionError)?;
    PublicKey::from_eth_repr(res._0.0).ok_or(Error::ConnectionError)
  }

  /// Get the message to be signed in order to update the key for Serai.
  pub(crate) fn update_serai_key_message(chain_id: U256, nonce: U256, key: &PublicKey) -> Vec<u8> {
    let mut buffer = b"updateSeraiKey".to_vec();
    buffer.extend(&chain_id.to_be_bytes::<32>());
    buffer.extend(&nonce.to_be_bytes::<32>());
    buffer.extend(&key.eth_repr());
    buffer
  }

  /// Update the key representing Serai.
  pub fn update_serai_key(&self, public_key: &PublicKey, sig: &Signature) -> TxLegacy {
    // TODO: Set a more accurate gas
    TxLegacy {
      to: TxKind::Call(self.1),
      input: abi::updateSeraiKeyCall::new((public_key.eth_repr().into(), sig.into()))
        .abi_encode()
        .into(),
      gas_limit: 100_000,
      ..Default::default()
    }
  }
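`update_serai_key_message` above concatenates a domain tag with big-endian fields, which binds the signature to this chain and nonce. A std-only sketch of that layout, assuming a 33-byte SEC1-compressed key representation (the standalone function mirrors the method, not the crate's API):

```rust
// Sketch of the signed message for updateSeraiKey:
// "updateSeraiKey" || chain_id (32-byte BE) || nonce (32-byte BE) || key (33-byte SEC1)
fn update_serai_key_message(chain_id: [u8; 32], nonce: [u8; 32], key: [u8; 33]) -> Vec<u8> {
    let mut buffer = b"updateSeraiKey".to_vec();
    buffer.extend_from_slice(&chain_id);
    buffer.extend_from_slice(&nonce);
    buffer.extend_from_slice(&key);
    buffer
}

fn main() {
    let msg = update_serai_key_message([0; 32], [0; 32], [3; 33]);
    assert_eq!(msg.len(), 14 + 32 + 32 + 33); // domain tag is 14 bytes
    assert!(msg.starts_with(b"updateSeraiKey"));
}
```

Including the chain ID prevents replay across networks; including the nonce (plus the Router's session, per the commit history) prevents replay within one.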
  /// Get the current nonce for the published batches.
  #[cfg(test)]
  pub async fn nonce(&self, at: [u8; 32]) -> Result<U256, Error> {
    let call = TransactionRequest::default()
      .to(Some(self.1))
      .input(TransactionInput::new(abi::nonceCall::new(()).abi_encode().into()));
    let bytes = self
      .0
      .call(&call, Some(BlockId::Hash(B256::from(at).into())))
      .await
      .map_err(|_| Error::ConnectionError)?;
    let res =
      abi::nonceCall::abi_decode_returns(&bytes, true).map_err(|_| Error::ConnectionError)?;
    Ok(res._0)
  }

  /// Get the message to be signed in order to execute a batch of `OutInstruction`s.
  pub(crate) fn execute_message(
    chain_id: U256,
    nonce: U256,
    outs: Vec<abi::OutInstruction>,
  ) -> Vec<u8> {
    ("execute".to_string(), chain_id, nonce, outs).abi_encode_params()
  }

  /// Execute a batch of `OutInstruction`s.
  pub fn execute(&self, outs: &[abi::OutInstruction], sig: &Signature) -> TxLegacy {
    TxLegacy {
      to: TxKind::Call(self.1),
      input: abi::executeCall::new((outs.to_vec(), sig.into())).abi_encode().into(),
      // TODO
      gas_limit: 100_000 + ((200_000 + 10_000) * u128::try_from(outs.len()).unwrap()),
      ..Default::default()
    }
  }
  pub async fn in_instructions(
    &self,
    block: u64,
    allowed_tokens: &HashSet<[u8; 20]>,
  ) -> Result<Vec<InInstruction>, Error> {
    let key_at_end_of_block = {
      let filter = Filter::new().from_block(0).to_block(block).address(self.1);
      let filter = filter.event_signature(SeraiKeyUpdated::SIGNATURE_HASH);
      let all_keys = self.0.get_logs(&filter).await.map_err(|_| Error::ConnectionError)?;

      let last_key_x_coordinate_log = all_keys.last().ok_or(Error::ConnectionError)?;
      let last_key_x_coordinate = last_key_x_coordinate_log
        .log_decode::<SeraiKeyUpdated>()
        .map_err(|_| Error::ConnectionError)?
        .inner
        .data
        .key;

      let mut compressed_point = <ProjectivePoint as GroupEncoding>::Repr::default();
      compressed_point[0] = u8::from(sec1::Tag::CompressedEvenY);
      compressed_point[1 ..].copy_from_slice(last_key_x_coordinate.as_slice());

      ProjectivePoint::from_bytes(&compressed_point).expect("router's last key wasn't a valid key")
    };

    let filter = Filter::new().from_block(block).to_block(block).address(self.1);
    let filter = filter.event_signature(InInstructionEvent::SIGNATURE_HASH);
    let logs = self.0.get_logs(&filter).await.map_err(|_| Error::ConnectionError)?;

    let mut transfer_check = HashSet::new();
    let mut in_instructions = vec![];
    for log in logs {
      // Double check the address which emitted this log
      if log.address() != self.1 {
        Err(Error::ConnectionError)?;
      }

      let id = (
        log.block_hash.ok_or(Error::ConnectionError)?.into(),
        log.log_index.ok_or(Error::ConnectionError)?,
      );

      let tx_hash = log.transaction_hash.ok_or(Error::ConnectionError)?;
      let tx = self.0.get_transaction_by_hash(tx_hash).await.map_err(|_| Error::ConnectionError)?;

      let log =
        log.log_decode::<InInstructionEvent>().map_err(|_| Error::ConnectionError)?.inner.data;

      let coin = if log.coin.0 == [0; 20] {
        Coin::Ether
      } else {
        let token = *log.coin.0;

        if !allowed_tokens.contains(&token) {
          continue;
        }

        // If this also counts as a top-level transfer via the token, drop it
        //
        // Necessary in order to handle a potential edge case with some theoretical token
        // implementations
        //
        // This will either let it be handled by the top-level transfer hook or will drop it
        // entirely on the side of caution
        if tx.to == Some(token.into()) {
          continue;
        }

        // Get all logs for this TX
        let receipt = self
          .0
          .get_transaction_receipt(tx_hash)
          .await
          .map_err(|_| Error::ConnectionError)?
          .ok_or(Error::ConnectionError)?;
        let tx_logs = receipt.inner.logs();

        // Find a matching transfer log
        let mut found_transfer = false;
        for tx_log in tx_logs {
          let log_index = tx_log.log_index.ok_or(Error::ConnectionError)?;
          // Ensure we didn't already use this transfer to check a distinct InInstruction event
          if transfer_check.contains(&log_index) {
            continue;
          }

          // Check if this log is from the token we expected to be transferred
          if tx_log.address().0 != token {
            continue;
          }
          // Check if this is a transfer log
          // https://github.com/alloy-rs/core/issues/589
          if tx_log.topics()[0] != Transfer::SIGNATURE_HASH {
            continue;
          }
          let Ok(transfer) = Transfer::decode_log(&tx_log.inner.clone(), true) else { continue };
          // Check if this is a transfer to us for the expected amount
          if (transfer.to == self.1) && (transfer.value == log.amount) {
            transfer_check.insert(log_index);
            found_transfer = true;
            break;
          }
        }
        if !found_transfer {
          // This shouldn't be a ConnectionError
          // This is an exploit, a non-conforming ERC20, or an invalid connection
          // This should halt the process which is sufficient, yet this is sub-optimal
          // TODO
          Err(Error::ConnectionError)?;
        }

        Coin::Erc20(token)
      };

      in_instructions.push(InInstruction {
        id,
        from: *log.from.0,
        coin,
        amount: log.amount,
        data: log.instruction.as_ref().to_vec(),
        key_at_end_of_block,
      });
    }

    Ok(in_instructions)
  }
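The key recovery in `in_instructions` above rebuilds a full curve point by prepending the SEC1 compressed-even-Y tag (`0x02`) to the 32-byte x-coordinate emitted in the `SeraiKeyUpdated` event, relying on the Router only accepting keys with even y. A sketch of that byte layout (the `compressed_even_y` helper is hypothetical):

```rust
// Sketch: rebuild a 33-byte SEC1 compressed encoding from an x-coordinate,
// assuming (as the Router enforces) the key's y-coordinate is even (tag 0x02).
fn compressed_even_y(x: [u8; 32]) -> [u8; 33] {
    let mut point = [0u8; 33];
    point[0] = 0x02; // sec1::Tag::CompressedEvenY
    point[1..].copy_from_slice(&x);
    point
}

fn main() {
    let repr = compressed_even_y([7; 32]);
    assert_eq!(repr[0], 0x02);
    assert_eq!(&repr[1..], &[7u8; 32][..]);
}
```

Storing only the x-coordinate on-chain keeps the event small; the even-y restriction makes the reconstruction unambiguous.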
  pub async fn executed_commands(&self, block: u64) -> Result<Vec<Executed>, Error> {
    let mut res = vec![];

    {
      let filter = Filter::new().from_block(block).to_block(block).address(self.1);
      let filter = filter.event_signature(SeraiKeyUpdated::SIGNATURE_HASH);
      let logs = self.0.get_logs(&filter).await.map_err(|_| Error::ConnectionError)?;

      for log in logs {
        // Double check the address which emitted this log
        if log.address() != self.1 {
          Err(Error::ConnectionError)?;
        }

        let tx_id = log.transaction_hash.ok_or(Error::ConnectionError)?.into();

        let log =
          log.log_decode::<SeraiKeyUpdated>().map_err(|_| Error::ConnectionError)?.inner.data;

        let mut signature = [0; 64];
        signature[.. 32].copy_from_slice(log.signature.c.as_ref());
        signature[32 ..].copy_from_slice(log.signature.s.as_ref());
        res.push(Executed {
          tx_id,
          nonce: log.nonce.try_into().map_err(|_| Error::ConnectionError)?,
          signature,
        });
      }
    }

    {
      let filter = Filter::new().from_block(block).to_block(block).address(self.1);
      let filter = filter.event_signature(ExecutedEvent::SIGNATURE_HASH);
      let logs = self.0.get_logs(&filter).await.map_err(|_| Error::ConnectionError)?;

      for log in logs {
        // Double check the address which emitted this log
        if log.address() != self.1 {
          Err(Error::ConnectionError)?;
        }

        let tx_id = log.transaction_hash.ok_or(Error::ConnectionError)?.into();

        let log = log.log_decode::<ExecutedEvent>().map_err(|_| Error::ConnectionError)?.inner.data;

        let mut signature = [0; 64];
        signature[.. 32].copy_from_slice(log.signature.c.as_ref());
        signature[32 ..].copy_from_slice(log.signature.s.as_ref());
        res.push(Executed {
          tx_id,
          nonce: log.nonce.try_into().map_err(|_| Error::ConnectionError)?,
          signature,
        });
      }
    }

    Ok(res)
  }

  #[cfg(feature = "tests")]
  pub fn key_updated_filter(&self) -> Filter {
    Filter::new().address(self.1).event_signature(SeraiKeyUpdated::SIGNATURE_HASH)
  }
  #[cfg(feature = "tests")]
  pub fn executed_filter(&self) -> Filter {
    Filter::new().address(self.1).event_signature(ExecutedEvent::SIGNATURE_HASH)
  }
}

coins/ethereum/src/schnorr.rs (deleted, 34 lines)
@ -1,34 +0,0 @@
use eyre::{eyre, Result};

use group::ff::PrimeField;

use ethers_providers::{Provider, Http};

use crate::{
  Error,
  crypto::{keccak256, PublicKey, Signature},
};
pub use crate::abi::schnorr::*;

pub async fn call_verify(
  contract: &Schnorr<Provider<Http>>,
  public_key: &PublicKey,
  message: &[u8],
  signature: &Signature,
) -> Result<()> {
  if contract
    .verify(
      public_key.parity,
      public_key.px.to_repr().into(),
      keccak256(message),
      signature.c.to_repr().into(),
      signature.s.to_repr().into(),
    )
    .call()
    .await?
  {
    Ok(())
  } else {
    Err(eyre!(Error::InvalidSignature))
  }
}
|
coins/ethereum/src/tests/abi/mod.rs (new file, 13 lines)
@@ -0,0 +1,13 @@
+use alloy_sol_types::sol;
+
+#[rustfmt::skip]
+#[allow(warnings)]
+#[allow(needless_pass_by_value)]
+#[allow(clippy::all)]
+#[allow(clippy::ignored_unit_patterns)]
+#[allow(clippy::redundant_closure_for_method_calls)]
+mod schnorr_container {
+  use super::*;
+  sol!("src/tests/contracts/Schnorr.sol");
+}
+pub(crate) use schnorr_container::TestSchnorr as schnorr;
coins/ethereum/src/tests/contracts/ERC20.sol (new file, 51 lines)
@@ -0,0 +1,51 @@
+// SPDX-License-Identifier: AGPLv3
+pragma solidity ^0.8.0;
+
+contract TestERC20 {
+  event Transfer(address indexed from, address indexed to, uint256 value);
+  event Approval(address indexed owner, address indexed spender, uint256 value);
+
+  function name() public pure returns (string memory) {
+    return "Test ERC20";
+  }
+  function symbol() public pure returns (string memory) {
+    return "TEST";
+  }
+  function decimals() public pure returns (uint8) {
+    return 18;
+  }
+
+  function totalSupply() public pure returns (uint256) {
+    return 1_000_000 * 10e18;
+  }
+
+  mapping(address => uint256) balances;
+  mapping(address => mapping(address => uint256)) allowances;
+
+  constructor() {
+    balances[msg.sender] = totalSupply();
+  }
+
+  function balanceOf(address owner) public view returns (uint256) {
+    return balances[owner];
+  }
+  function transfer(address to, uint256 value) public returns (bool) {
+    balances[msg.sender] -= value;
+    balances[to] += value;
+    return true;
+  }
+  function transferFrom(address from, address to, uint256 value) public returns (bool) {
+    allowances[from][msg.sender] -= value;
+    balances[from] -= value;
+    balances[to] += value;
+    return true;
+  }
+
+  function approve(address spender, uint256 value) public returns (bool) {
+    allowances[msg.sender][spender] = value;
+    return true;
+  }
+  function allowance(address owner, address spender) public view returns (uint256) {
+    return allowances[owner][spender];
+  }
+}
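TestERC20 above compresses the ERC20 balance/allowance rules into a few lines; under Solidity 0.8 semantics, its unchecked-looking subtractions revert on underflow, which is what makes `transferFrom` safe to write this way. A pure-Rust sketch of the same bookkeeping (a hypothetical `Erc20` type, with `checked_sub` modeling the revert):

```rust
// A pure-Rust model of TestERC20's bookkeeping. `Erc20` is a hypothetical
// stand-in, and `checked_sub` models Solidity 0.8's revert-on-underflow.
use std::collections::HashMap;

#[derive(Default)]
struct Erc20 {
  balances: HashMap<&'static str, u128>,
  allowances: HashMap<(&'static str, &'static str), u128>,
}

impl Erc20 {
  fn approve(&mut self, owner: &'static str, spender: &'static str, value: u128) {
    // approve overwrites (not increments) the allowance
    self.allowances.insert((owner, spender), value);
  }

  // transferFrom debits both the (from, caller) allowance and from's balance
  fn transfer_from(
    &mut self,
    caller: &'static str,
    from: &'static str,
    to: &'static str,
    value: u128,
  ) -> bool {
    let allowance = self.allowances.get(&(from, caller)).copied().unwrap_or(0);
    let balance = self.balances.get(from).copied().unwrap_or(0);
    let (Some(allowance), Some(balance)) =
      (allowance.checked_sub(value), balance.checked_sub(value))
    else {
      return false;
    };
    self.allowances.insert((from, caller), allowance);
    self.balances.insert(from, balance);
    *self.balances.entry(to).or_default() += value;
    true
  }
}

// Scenario: alice approves bob for 60, bob tries to move 50 twice
fn scenario() -> (bool, bool, u128) {
  let mut token = Erc20::default();
  token.balances.insert("alice", 100);
  token.approve("alice", "bob", 60);
  let first = token.transfer_from("bob", "alice", "carol", 50);
  // Only 10 of allowance remains, so this fails despite sufficient balance
  let second = token.transfer_from("bob", "alice", "carol", 50);
  (first, second, token.balances.get("carol").copied().unwrap_or(0))
}

fn main() {
  assert_eq!(scenario(), (true, false, 50));
}
```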
coins/ethereum/src/tests/contracts/Schnorr.sol (new file, 15 lines)
@@ -0,0 +1,15 @@
+// SPDX-License-Identifier: AGPLv3
+pragma solidity ^0.8.0;
+
+import "../../../contracts/Schnorr.sol";
+
+contract TestSchnorr {
+  function verify(
+    bytes32 px,
+    bytes calldata message,
+    bytes32 c,
+    bytes32 s
+  ) external pure returns (bool) {
+    return Schnorr.verify(px, message, c, s);
+  }
+}
@@ -1,49 +1,33 @@
 use rand_core::OsRng;

-use sha2::Sha256;
-use sha3::{Digest, Keccak256};
-
-use group::Group;
+use group::ff::{Field, PrimeField};
 use k256::{
-  ecdsa::{hazmat::SignPrimitive, signature::DigestVerifier, SigningKey, VerifyingKey},
-  elliptic_curve::{bigint::ArrayEncoding, ops::Reduce, point::DecompressPoint},
-  U256, Scalar, AffinePoint, ProjectivePoint,
+  ecdsa::{
+    self, hazmat::SignPrimitive, signature::hazmat::PrehashVerifier, SigningKey, VerifyingKey,
+  },
+  Scalar, ProjectivePoint,
 };
 use frost::{
-  curve::Secp256k1,
+  curve::{Ciphersuite, Secp256k1},
   algorithm::{Hram, IetfSchnorr},
   tests::{algorithm_machines, sign},
 };

 use crate::{crypto::*, tests::key_gen};

-pub fn hash_to_scalar(data: &[u8]) -> Scalar {
-  Scalar::reduce(U256::from_be_slice(&keccak256(data)))
-}
-
-pub(crate) fn ecrecover(message: Scalar, v: u8, r: Scalar, s: Scalar) -> Option<[u8; 20]> {
-  if r.is_zero().into() || s.is_zero().into() || !((v == 27) || (v == 28)) {
-    return None;
-  }
-
-  #[allow(non_snake_case)]
-  let R = AffinePoint::decompress(&r.to_bytes(), (v - 27).into());
-  #[allow(non_snake_case)]
-  if let Some(R) = Option::<AffinePoint>::from(R) {
-    #[allow(non_snake_case)]
-    let R = ProjectivePoint::from(R);
-
-    let r = r.invert().unwrap();
-    let u1 = ProjectivePoint::GENERATOR * (-message * r);
-    let u2 = R * (s * r);
-    let key: ProjectivePoint = u1 + u2;
-    if !bool::from(key.is_identity()) {
-      return Some(address(&key));
-    }
-  }
-
-  None
-}
+// The ecrecover opcode, yet with parity replacing v
+pub(crate) fn ecrecover(message: Scalar, odd_y: bool, r: Scalar, s: Scalar) -> Option<[u8; 20]> {
+  let sig = ecdsa::Signature::from_scalars(r, s).ok()?;
+  let message: [u8; 32] = message.to_repr().into();
+  alloy_core::primitives::Signature::from_signature_and_parity(
+    sig,
+    alloy_core::primitives::Parity::Parity(odd_y),
+  )
+  .ok()?
+  .recover_address_from_prehash(&alloy_core::primitives::B256::from(message))
+  .ok()
+  .map(Into::into)
+}

 #[test]
@@ -55,20 +39,23 @@ fn test_ecrecover() {
   const MESSAGE: &[u8] = b"Hello, World!";
   let (sig, recovery_id) = private
     .as_nonzero_scalar()
-    .try_sign_prehashed_rfc6979::<Sha256>(&Keccak256::digest(MESSAGE), b"")
+    .try_sign_prehashed(
+      <Secp256k1 as Ciphersuite>::F::random(&mut OsRng),
+      &keccak256(MESSAGE).into(),
+    )
     .unwrap();

   // Sanity check the signature verifies
   #[allow(clippy::unit_cmp)] // Intended to assert this wasn't changed to Result<bool>
   {
-    assert_eq!(public.verify_digest(Keccak256::new_with_prefix(MESSAGE), &sig).unwrap(), ());
+    assert_eq!(public.verify_prehash(&keccak256(MESSAGE), &sig).unwrap(), ());
   }

   // Perform the ecrecover
   assert_eq!(
     ecrecover(
       hash_to_scalar(MESSAGE),
-      u8::from(recovery_id.unwrap().is_y_odd()) + 27,
+      u8::from(recovery_id.unwrap().is_y_odd()) == 1,
       *sig.r(),
       *sig.s()
     )
@@ -93,18 +80,13 @@ fn test_signing() {
 pub fn preprocess_signature_for_ecrecover(
   R: ProjectivePoint,
   public_key: &PublicKey,
-  chain_id: U256,
   m: &[u8],
   s: Scalar,
-) -> (u8, Scalar, Scalar) {
-  let c = EthereumHram::hram(
-    &R,
-    &public_key.A,
-    &[chain_id.to_be_byte_array().as_slice(), &keccak256(m)].concat(),
-  );
+) -> (Scalar, Scalar) {
+  let c = EthereumHram::hram(&R, &public_key.A, m);
   let sa = -(s * public_key.px);
   let ca = -(c * public_key.px);
-  (public_key.parity, sa, ca)
+  (sa, ca)
 }

 #[test]
@@ -112,21 +94,12 @@ fn test_ecrecover_hack() {
   let (keys, public_key) = key_gen();

   const MESSAGE: &[u8] = b"Hello, World!";
-  let hashed_message = keccak256(MESSAGE);
-  let chain_id = U256::ONE;
-  let full_message = &[chain_id.to_be_byte_array().as_slice(), &hashed_message].concat();

   let algo = IetfSchnorr::<Secp256k1, EthereumHram>::ietf();
-  let sig = sign(
-    &mut OsRng,
-    &algo,
-    keys.clone(),
-    algorithm_machines(&mut OsRng, &algo, &keys),
-    full_message,
-  );
+  let sig =
+    sign(&mut OsRng, &algo, keys.clone(), algorithm_machines(&mut OsRng, &algo, &keys), MESSAGE);

-  let (parity, sa, ca) =
-    preprocess_signature_for_ecrecover(sig.R, &public_key, chain_id, MESSAGE, sig.s);
-  let q = ecrecover(sa, parity, public_key.px, ca).unwrap();
+  let (sa, ca) = preprocess_signature_for_ecrecover(sig.R, &public_key, MESSAGE, sig.s);
+  let q = ecrecover(sa, false, public_key.px, ca).unwrap();
   assert_eq!(q, address(&sig.R));
 }
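The `preprocess_signature_for_ecrecover`/`ecrecover` pair works because ecrecover returns address(r⁻¹·(s′·P − z·G)) for the point P whose x-coordinate is r. Feeding z = −s·px, r = px (so P is the public key A), and s′ = −c·px yields px⁻¹·(−c·px·A + s·px·G) = s·G − c·A, which for a valid Schnorr response s = k + c·x is exactly the nonce R, hence the `assert_eq!(q, address(&sig.R))` above. A toy check of that scalar identity, modeling the group as Z_N under addition (a sketch of the algebra only, not real secp256k1):

```rust
// Model the curve group as Z_N under addition: the "point" k·G is just k mod N.
// This checks the scalar identity behind the hack, not real secp256k1.
const N: i128 = 101; // toy prime group order

fn modn(x: i128) -> i128 {
  ((x % N) + N) % N
}

// Modular inverse via Fermat's little theorem (valid as N is prime)
fn inv(x: i128) -> i128 {
  let (mut result, mut base, mut exp) = (1, modn(x), N - 2);
  while exp > 0 {
    if (exp & 1) == 1 {
      result = modn(result * base);
    }
    base = modn(base * base);
    exp >>= 1;
  }
  result
}

fn main() {
  let (x, k, c) = (37, 55, 12); // private key, nonce, challenge
  let a = x; // the public key A = x·G; its "x-coordinate" px is also x here
  let s = modn(k + (c * x)); // Schnorr response

  // ecrecover(z, r, s') computes r⁻¹·(s'·P − z·G) for P with x-coordinate r;
  // feed z = −s·px, r = px (so P = A), s' = −c·px
  let (z, r, s_param) = (modn(-s * a), a, modn(-c * a));
  let recovered = modn(inv(r) * modn((s_param * a) - z));

  // The recovered "point" is s·G − c·A = k·G, i.e. the Schnorr nonce R
  assert_eq!(recovered, k);
}
```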
@@ -1,21 +1,25 @@
-use std::{sync::Arc, time::Duration, fs::File, collections::HashMap};
+use std::{sync::Arc, collections::HashMap};

 use rand_core::OsRng;

-use group::ff::PrimeField;
 use k256::{Scalar, ProjectivePoint};
 use frost::{curve::Secp256k1, Participant, ThresholdKeys, tests::key_gen as frost_key_gen};

-use ethers_core::{
-  types::{H160, Signature as EthersSignature},
-  abi::Abi,
-};
-use ethers_contract::ContractFactory;
-use ethers_providers::{Middleware, Provider, Http};
-
-use crate::crypto::PublicKey;
+use alloy_core::{
+  primitives::{Address, U256, Bytes, TxKind},
+  hex::FromHex,
+};
+use alloy_consensus::{SignableTransaction, TxLegacy};
+
+use alloy_rpc_types::TransactionReceipt;
+use alloy_simple_request_transport::SimpleRequest;
+use alloy_provider::{Provider, RootProvider};
+
+use crate::crypto::{address, deterministically_sign, PublicKey};

 mod crypto;

+mod abi;
 mod schnorr;
 mod router;
@@ -36,57 +40,88 @@ pub fn key_gen() -> (HashMap<Participant, ThresholdKeys<Secp256k1>>, PublicKey)
   (keys, public_key)
 }

-// TODO: Replace with a contract deployment from an unknown account, so the environment solely has
-// to fund the deployer, not create/pass a wallet
-// TODO: Deterministic deployments across chains
+// TODO: Use a proper error here
+pub async fn send(
+  provider: &RootProvider<SimpleRequest>,
+  wallet: &k256::ecdsa::SigningKey,
+  mut tx: TxLegacy,
+) -> Option<TransactionReceipt> {
+  let verifying_key = *wallet.verifying_key().as_affine();
+  let address = Address::from(address(&verifying_key.into()));
+
+  // https://github.com/alloy-rs/alloy/issues/539
+  // let chain_id = provider.get_chain_id().await.unwrap();
+  // tx.chain_id = Some(chain_id);
+  tx.chain_id = None;
+  tx.nonce = provider.get_transaction_count(address, None).await.unwrap();
+  // 100 gwei
+  tx.gas_price = 100_000_000_000u128;
+
+  let sig = wallet.sign_prehash_recoverable(tx.signature_hash().as_ref()).unwrap();
+  assert_eq!(address, tx.clone().into_signed(sig.into()).recover_signer().unwrap());
+  assert!(
+    provider.get_balance(address, None).await.unwrap() >
+      ((U256::from(tx.gas_price) * U256::from(tx.gas_limit)) + tx.value)
+  );
+
+  let mut bytes = vec![];
+  tx.encode_with_signature_fields(&sig.into(), &mut bytes);
+  let pending_tx = provider.send_raw_transaction(&bytes).await.ok()?;
+  pending_tx.get_receipt().await.ok()
+}
+
+pub async fn fund_account(
+  provider: &RootProvider<SimpleRequest>,
+  wallet: &k256::ecdsa::SigningKey,
+  to_fund: Address,
+  value: U256,
+) -> Option<()> {
+  let funding_tx =
+    TxLegacy { to: TxKind::Call(to_fund), gas_limit: 21_000, value, ..Default::default() };
+  assert!(send(provider, wallet, funding_tx).await.unwrap().status());
+
+  Some(())
+}
+
+// TODO: Use a proper error here
 pub async fn deploy_contract(
-  chain_id: u32,
-  client: Arc<Provider<Http>>,
+  client: Arc<RootProvider<SimpleRequest>>,
   wallet: &k256::ecdsa::SigningKey,
   name: &str,
-) -> eyre::Result<H160> {
-  let abi: Abi =
-    serde_json::from_reader(File::open(format!("./artifacts/{name}.abi")).unwrap()).unwrap();
-
+) -> Option<Address> {
   let hex_bin_buf = std::fs::read_to_string(format!("./artifacts/{name}.bin")).unwrap();
   let hex_bin =
     if let Some(stripped) = hex_bin_buf.strip_prefix("0x") { stripped } else { &hex_bin_buf };
-  let bin = hex::decode(hex_bin).unwrap();
-  let factory = ContractFactory::new(abi, bin.into(), client.clone());
-
-  let mut deployment_tx = factory.deploy(())?.tx;
-  deployment_tx.set_chain_id(chain_id);
-  deployment_tx.set_gas(1_000_000);
-  let (max_fee_per_gas, max_priority_fee_per_gas) = client.estimate_eip1559_fees(None).await?;
-  deployment_tx.as_eip1559_mut().unwrap().max_fee_per_gas = Some(max_fee_per_gas);
-  deployment_tx.as_eip1559_mut().unwrap().max_priority_fee_per_gas = Some(max_priority_fee_per_gas);
-
-  let sig_hash = deployment_tx.sighash();
-  let (sig, rid) = wallet.sign_prehash_recoverable(sig_hash.as_ref()).unwrap();
-
-  // EIP-155 v
-  let mut v = u64::from(rid.to_byte());
-  assert!((v == 0) || (v == 1));
-  v += u64::from((chain_id * 2) + 35);
-
-  let r = sig.r().to_repr();
-  let r_ref: &[u8] = r.as_ref();
-  let s = sig.s().to_repr();
-  let s_ref: &[u8] = s.as_ref();
-  let deployment_tx =
-    deployment_tx.rlp_signed(&EthersSignature { r: r_ref.into(), s: s_ref.into(), v });
-
-  let pending_tx = client.send_raw_transaction(deployment_tx).await?;
-
-  let mut receipt;
-  while {
-    receipt = client.get_transaction_receipt(pending_tx.tx_hash()).await?;
-    receipt.is_none()
-  } {
-    tokio::time::sleep(Duration::from_secs(6)).await;
-  }
-  let receipt = receipt.unwrap();
-  assert!(receipt.status == Some(1.into()));
-
-  Ok(receipt.contract_address.unwrap())
+  let bin = Bytes::from_hex(hex_bin).unwrap();
+
+  let deployment_tx = TxLegacy {
+    chain_id: None,
+    nonce: 0,
+    // 100 gwei
+    gas_price: 100_000_000_000u128,
+    gas_limit: 1_000_000,
+    to: TxKind::Create,
+    value: U256::ZERO,
+    input: bin,
+  };
+
+  let deployment_tx = deterministically_sign(&deployment_tx);
+
+  // Fund the deployer address
+  fund_account(
+    &client,
+    wallet,
+    deployment_tx.recover_signer().unwrap(),
+    U256::from(deployment_tx.tx().gas_limit) * U256::from(deployment_tx.tx().gas_price),
+  )
+  .await?;
+
+  let (deployment_tx, sig, _) = deployment_tx.into_parts();
+  let mut bytes = vec![];
+  deployment_tx.encode_with_signature_fields(&sig, &mut bytes);
+  let pending_tx = client.send_raw_transaction(&bytes).await.ok()?;
+  let receipt = pending_tx.get_receipt().await.ok()?;
+  assert!(receipt.status());
+
+  Some(receipt.contract_address.unwrap())
 }
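The new `deploy_contract` follows a keyless-deployer pattern: deterministically sign a fixed legacy transaction (nonce 0, fixed gas price), recover the one-time sender address that signature implies, fund that address with exactly gas_limit × gas_price, then broadcast the presigned bytes, so the deployment is reproducible without anyone holding a deployer key. A toy simulation of the fund-then-broadcast ordering (mock `Chain`/`PresignedTx` types, not the alloy API):

```rust
use std::collections::HashMap;

// Toy chain: just balances, keyed by a mock 20-byte address
struct Chain {
  balances: HashMap<[u8; 20], u128>,
}

// A presigned legacy deployment tx: its sender is recovered from the
// signature, so it's a constant for a constant transaction
struct PresignedTx {
  sender: [u8; 20],
  gas_limit: u128,
  gas_price: u128,
}

impl Chain {
  fn fund(&mut self, to: [u8; 20], value: u128) {
    *self.balances.entry(to).or_default() += value;
  }

  // Broadcasting only succeeds once the one-time sender can pay for gas
  fn broadcast(&mut self, tx: &PresignedTx) -> bool {
    let cost = tx.gas_limit * tx.gas_price;
    let Some(balance) = self.balances.get_mut(&tx.sender) else { return false };
    let Some(rest) = balance.checked_sub(cost) else { return false };
    *balance = rest;
    true
  }
}

fn scenario() -> (bool, bool) {
  let mut chain = Chain { balances: HashMap::new() };
  let tx = PresignedTx { sender: [42; 20], gas_limit: 1_000_000, gas_price: 100_000_000_000 };

  // Before funding, the presigned tx can't pay for gas
  let before = chain.broadcast(&tx);
  // Fund exactly gas_limit * gas_price, as deploy_contract does, then retry
  chain.fund(tx.sender, tx.gas_limit * tx.gas_price);
  (before, chain.broadcast(&tx))
}

fn main() {
  assert_eq!(scenario(), (false, true));
}
```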
@@ -2,7 +2,8 @@ use std::{convert::TryFrom, sync::Arc, collections::HashMap};

 use rand_core::OsRng;

-use group::ff::PrimeField;
+use group::Group;
+use k256::ProjectivePoint;
 use frost::{
   curve::Secp256k1,
   Participant, ThresholdKeys,
@@ -10,100 +11,173 @@ use frost::{
   tests::{algorithm_machines, sign},
 };

-use ethers_core::{
-  types::{H160, U256, Bytes},
-  abi::AbiEncode,
-  utils::{Anvil, AnvilInstance},
-};
-use ethers_providers::{Middleware, Provider, Http};
+use alloy_core::primitives::{Address, U256};
+
+use alloy_simple_request_transport::SimpleRequest;
+use alloy_rpc_client::ClientBuilder;
+use alloy_provider::{Provider, RootProvider};
+
+use alloy_node_bindings::{Anvil, AnvilInstance};

 use crate::{
-  crypto::{keccak256, PublicKey, EthereumHram, Signature},
-  router::{self, *},
-  tests::{key_gen, deploy_contract},
+  crypto::*,
+  deployer::Deployer,
+  router::{Router, abi as router},
+  tests::{key_gen, send, fund_account},
 };

 async fn setup_test() -> (
-  u32,
   AnvilInstance,
-  Router<Provider<Http>>,
+  Arc<RootProvider<SimpleRequest>>,
+  u64,
+  Router,
   HashMap<Participant, ThresholdKeys<Secp256k1>>,
   PublicKey,
 ) {
   let anvil = Anvil::new().spawn();

-  let provider = Provider::<Http>::try_from(anvil.endpoint()).unwrap();
-  let chain_id = provider.get_chainid().await.unwrap().as_u32();
+  let provider = RootProvider::new(
+    ClientBuilder::default().transport(SimpleRequest::new(anvil.endpoint()), true),
+  );
+  let chain_id = provider.get_chain_id().await.unwrap();
   let wallet = anvil.keys()[0].clone().into();
   let client = Arc::new(provider);

-  let contract_address =
-    deploy_contract(chain_id, client.clone(), &wallet, "Router").await.unwrap();
-  let contract = Router::new(contract_address, client.clone());
+  // Make sure the Deployer constructor returns None, as it doesn't exist yet
+  assert!(Deployer::new(client.clone()).await.unwrap().is_none());
+
+  // Deploy the Deployer
+  let tx = Deployer::deployment_tx();
+  fund_account(
+    &client,
+    &wallet,
+    tx.recover_signer().unwrap(),
+    U256::from(tx.tx().gas_limit) * U256::from(tx.tx().gas_price),
+  )
+  .await
+  .unwrap();
+
+  let (tx, sig, _) = tx.into_parts();
+  let mut bytes = vec![];
+  tx.encode_with_signature_fields(&sig, &mut bytes);
+
+  let pending_tx = client.send_raw_transaction(&bytes).await.unwrap();
+  let receipt = pending_tx.get_receipt().await.unwrap();
+  assert!(receipt.status());
+  let deployer =
+    Deployer::new(client.clone()).await.expect("network error").expect("deployer wasn't deployed");

   let (keys, public_key) = key_gen();

-  // Set the key to the threshold keys
-  let tx = contract.init_serai_key(public_key.px.to_repr().into()).gas(100_000);
-  let pending_tx = tx.send().await.unwrap();
-  let receipt = pending_tx.await.unwrap().unwrap();
-  assert!(receipt.status == Some(1.into()));
-
-  (chain_id, anvil, contract, keys, public_key)
+  // Verify the Router constructor returns None, as it doesn't exist yet
+  assert!(deployer.find_router(client.clone(), &public_key).await.unwrap().is_none());
+
+  // Deploy the router
+  let receipt = send(&client, &anvil.keys()[0].clone().into(), deployer.deploy_router(&public_key))
+    .await
+    .unwrap();
+  assert!(receipt.status());
+  let contract = deployer.find_router(client.clone(), &public_key).await.unwrap().unwrap();
+
+  (anvil, client, chain_id, contract, keys, public_key)
+}
+
+async fn latest_block_hash(client: &RootProvider<SimpleRequest>) -> [u8; 32] {
+  client
+    .get_block(client.get_block_number().await.unwrap().into(), false)
+    .await
+    .unwrap()
+    .unwrap()
+    .header
+    .hash
+    .unwrap()
+    .0
 }

 #[tokio::test]
 async fn test_deploy_contract() {
-  setup_test().await;
+  let (_anvil, client, _, router, _, public_key) = setup_test().await;
+
+  let block_hash = latest_block_hash(&client).await;
+  assert_eq!(router.serai_key(block_hash).await.unwrap(), public_key);
+  assert_eq!(router.nonce(block_hash).await.unwrap(), U256::try_from(1u64).unwrap());
+  // TODO: Check it emitted SeraiKeyUpdated(public_key) at its genesis
 }

 pub fn hash_and_sign(
   keys: &HashMap<Participant, ThresholdKeys<Secp256k1>>,
   public_key: &PublicKey,
-  chain_id: U256,
   message: &[u8],
 ) -> Signature {
-  let hashed_message = keccak256(message);
-
-  let mut chain_id_bytes = [0; 32];
-  chain_id.to_big_endian(&mut chain_id_bytes);
-  let full_message = &[chain_id_bytes.as_slice(), &hashed_message].concat();
-
   let algo = IetfSchnorr::<Secp256k1, EthereumHram>::ietf();
-  let sig = sign(
-    &mut OsRng,
-    &algo,
-    keys.clone(),
-    algorithm_machines(&mut OsRng, &algo, keys),
-    full_message,
-  );
-
-  Signature::new(public_key, k256::U256::from_words(chain_id.0), message, sig).unwrap()
+  let sig =
+    sign(&mut OsRng, &algo, keys.clone(), algorithm_machines(&mut OsRng, &algo, keys), message);
+
+  Signature::new(public_key, message, sig).unwrap()
 }

+#[tokio::test]
+async fn test_router_update_serai_key() {
+  let (anvil, client, chain_id, contract, keys, public_key) = setup_test().await;
+
+  let next_key = loop {
+    let point = ProjectivePoint::random(&mut OsRng);
+    let Some(next_key) = PublicKey::new(point) else { continue };
+    break next_key;
+  };
+
+  let message = Router::update_serai_key_message(
+    U256::try_from(chain_id).unwrap(),
+    U256::try_from(1u64).unwrap(),
+    &next_key,
+  );
+  let sig = hash_and_sign(&keys, &public_key, &message);
+
+  let first_block_hash = latest_block_hash(&client).await;
+  assert_eq!(contract.serai_key(first_block_hash).await.unwrap(), public_key);
+
+  let receipt =
+    send(&client, &anvil.keys()[0].clone().into(), contract.update_serai_key(&next_key, &sig))
+      .await
+      .unwrap();
+  assert!(receipt.status());
+
+  let second_block_hash = latest_block_hash(&client).await;
+  assert_eq!(contract.serai_key(second_block_hash).await.unwrap(), next_key);
+  // Check this does still offer the historical state
+  assert_eq!(contract.serai_key(first_block_hash).await.unwrap(), public_key);
+  // TODO: Check logs
+
+  println!("gas used: {:?}", receipt.gas_used);
+  // println!("logs: {:?}", receipt.logs);
+}
+
 #[tokio::test]
 async fn test_router_execute() {
-  let (chain_id, _anvil, contract, keys, public_key) = setup_test().await;
+  let (anvil, client, chain_id, contract, keys, public_key) = setup_test().await;

-  let to = H160([0u8; 20]);
-  let value = U256([0u64; 4]);
-  let data = Bytes::from([0]);
-  let tx = OutInstruction { to, value, data: data.clone() };
+  let to = Address::from([0; 20]);
+  let value = U256::ZERO;
+  let tx = router::OutInstruction { to, value, calls: vec![] };
+  let txs = vec![tx];

-  let nonce_call = contract.nonce();
-  let nonce = nonce_call.call().await.unwrap();
+  let first_block_hash = latest_block_hash(&client).await;
+  let nonce = contract.nonce(first_block_hash).await.unwrap();
+  assert_eq!(nonce, U256::try_from(1u64).unwrap());

-  let encoded =
-    ("execute".to_string(), nonce, vec![router::OutInstruction { to, value, data }]).encode();
-  let sig = hash_and_sign(&keys, &public_key, chain_id.into(), &encoded);
+  let message = Router::execute_message(U256::try_from(chain_id).unwrap(), nonce, txs.clone());
+  let sig = hash_and_sign(&keys, &public_key, &message);

-  let tx = contract
-    .execute(vec![tx], router::Signature { c: sig.c.to_repr().into(), s: sig.s.to_repr().into() })
-    .gas(300_000);
-  let pending_tx = tx.send().await.unwrap();
-  let receipt = dbg!(pending_tx.await.unwrap().unwrap());
-  assert!(receipt.status == Some(1.into()));
+  let receipt =
+    send(&client, &anvil.keys()[0].clone().into(), contract.execute(&txs, &sig)).await.unwrap();
+  assert!(receipt.status());

-  println!("gas used: {:?}", receipt.cumulative_gas_used);
-  println!("logs: {:?}", receipt.logs);
+  let second_block_hash = latest_block_hash(&client).await;
+  assert_eq!(contract.nonce(second_block_hash).await.unwrap(), U256::try_from(2u64).unwrap());
+  // Check this does still offer the historical state
+  assert_eq!(contract.nonce(first_block_hash).await.unwrap(), U256::try_from(1u64).unwrap());
+  // TODO: Check logs
+
+  println!("gas used: {:?}", receipt.gas_used);
+  // println!("logs: {:?}", receipt.logs);
 }
@ -1,11 +1,9 @@
|
||||||
use std::{convert::TryFrom, sync::Arc};
|
use std::sync::Arc;
|
||||||
|
|
||||||
use rand_core::OsRng;
|
use rand_core::OsRng;
|
||||||
|
|
||||||
use ::k256::{elliptic_curve::bigint::ArrayEncoding, U256, Scalar};
|
use group::ff::PrimeField;
|
||||||
|
use k256::Scalar;
|
||||||
use ethers_core::utils::{keccak256, Anvil, AnvilInstance};
|
|
||||||
use ethers_providers::{Middleware, Provider, Http};
|
|
||||||
|
|
||||||
use frost::{
|
use frost::{
|
||||||
curve::Secp256k1,
|
curve::Secp256k1,
|
||||||
|
@ -13,24 +11,34 @@ use frost::{
|
||||||
tests::{algorithm_machines, sign},
|
tests::{algorithm_machines, sign},
|
||||||
};
|
};
|
||||||
|
|
||||||
|
use alloy_core::primitives::Address;
|
||||||
|
|
||||||
|
use alloy_sol_types::SolCall;
|
||||||
|
|
||||||
|
use alloy_rpc_types::{TransactionInput, TransactionRequest};
|
||||||
|
use alloy_simple_request_transport::SimpleRequest;
|
||||||
|
use alloy_rpc_client::ClientBuilder;
|
||||||
|
use alloy_provider::{Provider, RootProvider};
|
||||||
|
|
||||||
|
use alloy_node_bindings::{Anvil, AnvilInstance};
|
||||||
|
|
||||||
use crate::{
|
use crate::{
|
||||||
|
Error,
|
||||||
crypto::*,
|
crypto::*,
|
||||||
schnorr::*,
|
tests::{key_gen, deploy_contract, abi::schnorr as abi},
|
||||||
tests::{key_gen, deploy_contract},
|
|
||||||
};
|
};
|
||||||
|
|
||||||
async fn setup_test() -> (u32, AnvilInstance, Schnorr<Provider<Http>>) {
|
async fn setup_test() -> (AnvilInstance, Arc<RootProvider<SimpleRequest>>, Address) {
|
||||||
let anvil = Anvil::new().spawn();
|
let anvil = Anvil::new().spawn();
|
||||||
|
|
||||||
let provider = Provider::<Http>::try_from(anvil.endpoint()).unwrap();
|
let provider = RootProvider::new(
|
||||||
let chain_id = provider.get_chainid().await.unwrap().as_u32();
|
ClientBuilder::default().transport(SimpleRequest::new(anvil.endpoint()), true),
|
||||||
|
);
|
||||||
let wallet = anvil.keys()[0].clone().into();
|
let wallet = anvil.keys()[0].clone().into();
|
||||||
let client = Arc::new(provider);
|
let client = Arc::new(provider);
|
||||||
|
|
||||||
let contract_address =
|
let address = deploy_contract(client.clone(), &wallet, "TestSchnorr").await.unwrap();
|
||||||
deploy_contract(chain_id, client.clone(), &wallet, "Schnorr").await.unwrap();
|
(anvil, client, address)
|
||||||
let contract = Schnorr::new(contract_address, client.clone());
|
|
||||||
(chain_id, anvil, contract)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
#[tokio::test]
|
#[tokio::test]
|
||||||
|
@ -38,30 +46,48 @@ async fn test_deploy_contract() {
|
||||||
setup_test().await;
|
setup_test().await;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
pub async fn call_verify(
|
||||||
|
provider: &RootProvider<SimpleRequest>,
|
||||||
|
contract: Address,
|
||||||
|
public_key: &PublicKey,
|
||||||
|
message: &[u8],
|
||||||
|
signature: &Signature,
|
||||||
|
) -> Result<(), Error> {
|
||||||
|
let px: [u8; 32] = public_key.px.to_repr().into();
|
||||||
|
let c_bytes: [u8; 32] = signature.c.to_repr().into();
|
||||||
|
let s_bytes: [u8; 32] = signature.s.to_repr().into();
|
||||||
|
let call = TransactionRequest::default().to(Some(contract)).input(TransactionInput::new(
|
||||||
|
abi::verifyCall::new((px.into(), message.to_vec().into(), c_bytes.into(), s_bytes.into()))
|
||||||
|
.abi_encode()
|
||||||
|
.into(),
|
||||||
|
));
|
||||||
|
let bytes = provider.call(&call, None).await.map_err(|_| Error::ConnectionError)?;
|
||||||
|
let res =
|
||||||
|
abi::verifyCall::abi_decode_returns(&bytes, true).map_err(|_| Error::ConnectionError)?;
|
||||||
|
|
||||||
|
if res._0 {
|
||||||
|
Ok(())
|
||||||
|
} else {
|
||||||
|
Err(Error::InvalidSignature)
|
||||||
|
}
|
||||||
|
}
|
||||||

 #[tokio::test]
 async fn test_ecrecover_hack() {
-  let (chain_id, _anvil, contract) = setup_test().await;
-  let chain_id = U256::from(chain_id);
+  let (_anvil, client, contract) = setup_test().await;

   let (keys, public_key) = key_gen();

   const MESSAGE: &[u8] = b"Hello, World!";
-  let hashed_message = keccak256(MESSAGE);
-  let full_message = &[chain_id.to_be_byte_array().as_slice(), &hashed_message].concat();

   let algo = IetfSchnorr::<Secp256k1, EthereumHram>::ietf();
-  let sig = sign(
-    &mut OsRng,
-    &algo,
-    keys.clone(),
-    algorithm_machines(&mut OsRng, &algo, &keys),
-    full_message,
-  );
-  let sig = Signature::new(&public_key, chain_id, MESSAGE, sig).unwrap();
+  let sig =
+    sign(&mut OsRng, &algo, keys.clone(), algorithm_machines(&mut OsRng, &algo, &keys), MESSAGE);
+  let sig = Signature::new(&public_key, MESSAGE, sig).unwrap();

-  call_verify(&contract, &public_key, MESSAGE, &sig).await.unwrap();
+  call_verify(&client, contract, &public_key, MESSAGE, &sig).await.unwrap();
   // Test an invalid signature fails
   let mut sig = sig;
   sig.s += Scalar::ONE;
-  assert!(call_verify(&contract, &public_key, MESSAGE, &sig).await.is_err());
+  assert!(call_verify(&client, contract, &public_key, MESSAGE, &sig).await.is_err());
 }
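The test signs MESSAGE and checks the on-chain verifier accepts the pair, then perturbs `s` and checks rejection. As an illustration of the Schnorr identity involved (s·G = R + c·A with c = H(R, m)), and not of the secp256k1/EthereumHram construction itself, here is a toy sketch over integers modulo a small prime, where "scalar times the generator" degenerates to modular arithmetic; the group order and the hash are placeholders:

```rust
// Toy Schnorr over the additive group Z_Q with generator G = 1, so x * G is x mod Q.
// Illustrative only: Q is tiny and the "hash" is not cryptographic.
const Q: u64 = 101; // toy prime group order (assumption for illustration)

fn hash_to_challenge(r_point: u64, msg: &[u8]) -> u64 {
  // Stand-in for the real H(R || m) challenge derivation
  let mut acc = r_point;
  for byte in msg {
    acc = (acc.wrapping_mul(31).wrapping_add(u64::from(*byte))) % Q;
  }
  acc
}

// sign: R = r * G, c = H(R, m), s = r + c * a (mod Q)
fn sign_toy(a: u64, r: u64, msg: &[u8]) -> (u64, u64) {
  let r_point = r % Q;
  let c = hash_to_challenge(r_point, msg);
  (r_point, (r + c * a) % Q)
}

// verify: s * G == R + c * A (mod Q)
fn verify_toy(a_point: u64, msg: &[u8], r_point: u64, s: u64) -> bool {
  let c = hash_to_challenge(r_point, msg);
  s % Q == (r_point + c * a_point) % Q
}
```

Adding one to `s`, as the test does with `sig.s += Scalar::ONE`, breaks the identity, since the right-hand side is unchanged.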

@@ -99,6 +99,7 @@ allow-git = [
   "https://github.com/rust-lang-nursery/lazy-static.rs",
   "https://github.com/serai-dex/substrate-bip39",
   "https://github.com/serai-dex/substrate",
+  "https://github.com/alloy-rs/alloy",
   "https://github.com/monero-rs/base58-monero",
   "https://github.com/kayabaNerve/dockertest-rs",
 ]

@@ -28,6 +28,7 @@ rand_core = { version = "0.6", default-features = false, features = ["std", "get
 rand_chacha = { version = "0.3", default-features = false, features = ["std"] }

 # Encoders
+const-hex = { version = "1", default-features = false }
 hex = { version = "0.4", default-features = false, features = ["std"] }
 scale = { package = "parity-scale-codec", version = "3", default-features = false, features = ["std"] }
 borsh = { version = "1", default-features = false, features = ["std", "derive", "de_strict_order"] }

@@ -40,11 +41,16 @@ transcript = { package = "flexible-transcript", path = "../crypto/transcript", d
 frost = { package = "modular-frost", path = "../crypto/frost", default-features = false, features = ["ristretto"] }
 frost-schnorrkel = { path = "../crypto/schnorrkel", default-features = false }

+# Bitcoin/Ethereum
+k256 = { version = "^0.13.1", default-features = false, features = ["std"], optional = true }
+
 # Bitcoin
 secp256k1 = { version = "0.28", default-features = false, features = ["std", "global-context", "rand-std"], optional = true }
-k256 = { version = "^0.13.1", default-features = false, features = ["std"], optional = true }
 bitcoin-serai = { path = "../coins/bitcoin", default-features = false, features = ["std"], optional = true }

+# Ethereum
+ethereum-serai = { path = "../coins/ethereum", default-features = false, optional = true }
+
 # Monero
 dalek-ff-group = { path = "../crypto/dalek-ff-group", default-features = false, features = ["std"], optional = true }
 monero-serai = { path = "../coins/monero", default-features = false, features = ["std", "http-rpc", "multisig"], optional = true }

@@ -55,12 +61,12 @@ env_logger = { version = "0.10", default-features = false, features = ["humantim
 tokio = { version = "1", default-features = false, features = ["rt-multi-thread", "sync", "time", "macros"] }

 zalloc = { path = "../common/zalloc" }
-serai-db = { path = "../common/db", optional = true }
+serai-db = { path = "../common/db" }
 serai-env = { path = "../common/env", optional = true }
 # TODO: Replace with direct usage of primitives
 serai-client = { path = "../substrate/client", default-features = false, features = ["serai"] }

-messages = { package = "serai-processor-messages", path = "./messages", optional = true }
+messages = { package = "serai-processor-messages", path = "./messages" }

 message-queue = { package = "serai-message-queue", path = "../message-queue", optional = true }

@@ -69,6 +75,8 @@ frost = { package = "modular-frost", path = "../crypto/frost", features = ["test
 sp-application-crypto = { git = "https://github.com/serai-dex/substrate", default-features = false, features = ["std"] }

+ethereum-serai = { path = "../coins/ethereum", default-features = false, features = ["tests"] }
+
 dockertest = "0.4"
 serai-docker-tests = { path = "../tests/docker" }

@@ -76,9 +84,11 @@ serai-docker-tests = { path = "../tests/docker" }
 secp256k1 = ["k256", "frost/secp256k1"]
 bitcoin = ["dep:secp256k1", "secp256k1", "bitcoin-serai", "serai-client/bitcoin"]

+ethereum = ["secp256k1", "ethereum-serai"]
+
 ed25519 = ["dalek-ff-group", "frost/ed25519"]
 monero = ["ed25519", "monero-serai", "serai-client/monero"]

-binaries = ["env_logger", "serai-env", "messages", "message-queue"]
+binaries = ["env_logger", "serai-env", "message-queue"]
 parity-db = ["serai-db/parity-db"]
 rocksdb = ["serai-db/rocksdb"]

@@ -1,7 +1,15 @@
+#![allow(dead_code)]
+
 mod plan;
 pub use plan::*;

+mod db;
+pub(crate) use db::*;
+
+mod key_gen;
+
 pub mod networks;
+pub(crate) mod multisigs;

 mod additional_key;
 pub use additional_key::additional_key;

@@ -31,6 +31,8 @@ mod networks;
 use networks::{Block, Network};
 #[cfg(feature = "bitcoin")]
 use networks::Bitcoin;
+#[cfg(feature = "ethereum")]
+use networks::Ethereum;
 #[cfg(feature = "monero")]
 use networks::Monero;

@@ -735,6 +737,7 @@ async fn main() {
   };
   let network_id = match env::var("NETWORK").expect("network wasn't specified").as_str() {
     "bitcoin" => NetworkId::Bitcoin,
+    "ethereum" => NetworkId::Ethereum,
     "monero" => NetworkId::Monero,
     _ => panic!("unrecognized network"),
   };

@@ -744,6 +747,8 @@ async fn main() {
   match network_id {
     #[cfg(feature = "bitcoin")]
     NetworkId::Bitcoin => run(db, Bitcoin::new(url).await, coordinator).await,
+    #[cfg(feature = "ethereum")]
+    NetworkId::Ethereum => run(db.clone(), Ethereum::new(db, url).await, coordinator).await,
     #[cfg(feature = "monero")]
     NetworkId::Monero => run(db, Monero::new(url).await, coordinator).await,
     _ => panic!("spawning a processor for an unsupported network"),

@@ -1,3 +1,5 @@
+use std::io;
+
 use ciphersuite::Ciphersuite;
 pub use serai_db::*;

@@ -6,9 +8,59 @@ use serai_client::{primitives::Balance, in_instructions::primitives::InInstructi
 use crate::{
   Get, Plan,
-  networks::{Transaction, Network},
+  networks::{Output, Transaction, Network},
 };

+#[derive(Clone, PartialEq, Eq, Debug)]
+pub enum PlanFromScanning<N: Network> {
+  Refund(N::Output, N::Address),
+  Forward(N::Output),
+}
+
+impl<N: Network> PlanFromScanning<N> {
+  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
+    let mut kind = [0xff];
+    reader.read_exact(&mut kind)?;
+    match kind[0] {
+      0 => {
+        let output = N::Output::read(reader)?;
+
+        let mut address_vec_len = [0; 4];
+        reader.read_exact(&mut address_vec_len)?;
+        let mut address_vec =
+          vec![0; usize::try_from(u32::from_le_bytes(address_vec_len)).unwrap()];
+        reader.read_exact(&mut address_vec)?;
+        let address =
+          N::Address::try_from(address_vec).map_err(|_| "invalid address saved to disk").unwrap();
+
+        Ok(PlanFromScanning::Refund(output, address))
+      }
+      1 => {
+        let output = N::Output::read(reader)?;
+        Ok(PlanFromScanning::Forward(output))
+      }
+      _ => panic!("reading unrecognized PlanFromScanning"),
+    }
+  }
+
+  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
+    match self {
+      PlanFromScanning::Refund(output, address) => {
+        writer.write_all(&[0])?;
+        output.write(writer)?;
+
+        let address_vec: Vec<u8> =
+          address.clone().try_into().map_err(|_| "invalid address being refunded to").unwrap();
+        writer.write_all(&u32::try_from(address_vec.len()).unwrap().to_le_bytes())?;
+        writer.write_all(&address_vec)
+      }
+      PlanFromScanning::Forward(output) => {
+        writer.write_all(&[1])?;
+        output.write(writer)
+      }
+    }
+  }
+}
+
 create_db!(
   MultisigsDb {
     NextBatchDb: () -> u32,
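PlanFromScanning's serialization is a one-byte kind tag (0 for Refund, 1 for Forward) followed by the variant's fields, with the refund address written as a little-endian u32 length prefix plus raw bytes. A standalone sketch of the same framing, using a plain byte vector to stand in for `N::Address` (the helper names are illustrative, not from the codebase):

```rust
use std::io::{self, Read, Write};

// Same framing as PlanFromScanning: a kind byte, then a u32 little-endian
// length prefix followed by the raw payload bytes.
fn write_tagged(kind: u8, payload: &[u8], writer: &mut impl Write) -> io::Result<()> {
  writer.write_all(&[kind])?;
  writer.write_all(&u32::try_from(payload.len()).unwrap().to_le_bytes())?;
  writer.write_all(payload)
}

fn read_tagged(reader: &mut impl Read) -> io::Result<(u8, Vec<u8>)> {
  let mut kind = [0xff];
  reader.read_exact(&mut kind)?;
  let mut len = [0; 4];
  reader.read_exact(&mut len)?;
  let mut payload = vec![0; usize::try_from(u32::from_le_bytes(len)).unwrap()];
  reader.read_exact(&mut payload)?;
  Ok((kind[0], payload))
}
```

Because each record is self-delimiting, records can be concatenated into one buffer and read back until the buffer is empty, which is exactly how `take_plans_from_scanning` decodes its stored plans.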

@@ -80,7 +132,11 @@ impl PlanDb {
   ) -> bool {
     let plan = Plan::<N>::read::<&[u8]>(&mut &Self::get(getter, &id).unwrap()[8 ..]).unwrap();
     assert_eq!(plan.id(), id);
-    (key == plan.key) && (Some(N::change_address(plan.key)) == plan.change)
+    if let Some(change) = N::change_address(plan.key) {
+      (key == plan.key) && (Some(change) == plan.change)
+    } else {
+      false
+    }
   }
 }
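The guard reflects `N::change_address` now returning an `Option`: a network may have no change address at all (plausibly the account-based, smart-contract-scheduled case), and then no plan's change can match. A minimal sketch of the corrected comparison, using `&str` in place of `N::Address` (names are illustrative):

```rust
// Sketch: a network without a change address can never have a plan whose
// change field matches it, so the comparison short-circuits to false.
fn change_matches(network_change: Option<&'static str>, plan_change: Option<&'static str>) -> bool {
  if let Some(change) = network_change {
    plan_change == Some(change)
  } else {
    false
  }
}
```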

@@ -130,7 +186,7 @@ impl PlansFromScanningDb {
   pub fn set_plans_from_scanning<N: Network>(
     txn: &mut impl DbTxn,
     block_number: usize,
-    plans: Vec<Plan<N>>,
+    plans: Vec<PlanFromScanning<N>>,
   ) {
     let mut buf = vec![];
     for plan in plans {

@@ -142,13 +198,13 @@ impl PlansFromScanningDb {
   pub fn take_plans_from_scanning<N: Network>(
     txn: &mut impl DbTxn,
     block_number: usize,
-  ) -> Option<Vec<Plan<N>>> {
+  ) -> Option<Vec<PlanFromScanning<N>>> {
     let block_number = u64::try_from(block_number).unwrap();
     let res = Self::get(txn, block_number).map(|plans| {
       let mut plans_ref = plans.as_slice();
       let mut res = vec![];
       while !plans_ref.is_empty() {
-        res.push(Plan::<N>::read(&mut plans_ref).unwrap());
+        res.push(PlanFromScanning::<N>::read(&mut plans_ref).unwrap());
       }
       res
     });

@@ -7,7 +7,7 @@ use scale::{Encode, Decode};
 use messages::SubstrateContext;

 use serai_client::{
-  primitives::{MAX_DATA_LEN, NetworkId, Coin, ExternalAddress, BlockHash, Data},
+  primitives::{MAX_DATA_LEN, ExternalAddress, BlockHash, Data},
   in_instructions::primitives::{
     InInstructionWithBalance, Batch, RefundableInInstruction, Shorthand, MAX_BATCH_SIZE,
   },

@@ -28,15 +28,12 @@ use scanner::{ScannerEvent, ScannerHandle, Scanner};
 mod db;
 use db::*;

-#[cfg(not(test))]
-mod scheduler;
-#[cfg(test)]
-pub mod scheduler;
+pub(crate) mod scheduler;
 use scheduler::Scheduler;

 use crate::{
   Get, Db, Payment, Plan,
-  networks::{OutputType, Output, Transaction, SignableTransaction, Block, PreparedSend, Network},
+  networks::{OutputType, Output, SignableTransaction, Eventuality, Block, PreparedSend, Network},
 };

 // InInstructionWithBalance from an external output

@@ -95,6 +92,8 @@ enum RotationStep {
   ClosingExisting,
 }

+// This explicitly shouldn't take the database as we prepare Plans we won't execute for fee
+// estimates
 async fn prepare_send<N: Network>(
   network: &N,
   block_number: usize,

@@ -122,7 +121,7 @@ async fn prepare_send<N: Network>(
 pub struct MultisigViewer<N: Network> {
   activation_block: usize,
   key: <N::Curve as Ciphersuite>::G,
-  scheduler: Scheduler<N>,
+  scheduler: N::Scheduler,
 }

 #[allow(clippy::type_complexity)]

@@ -131,7 +130,7 @@ pub enum MultisigEvent<N: Network> {
   // Batches to publish
   Batches(Option<(<N::Curve as Ciphersuite>::G, <N::Curve as Ciphersuite>::G)>, Vec<Batch>),
   // Eventuality completion found on-chain
-  Completed(Vec<u8>, [u8; 32], N::Transaction),
+  Completed(Vec<u8>, [u8; 32], <N::Eventuality as Eventuality>::Completion),
 }

 pub struct MultisigManager<D: Db, N: Network> {

@@ -157,20 +156,7 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
     assert!(current_keys.len() <= 2);
     let mut actively_signing = vec![];
     for (_, key) in &current_keys {
-      schedulers.push(
-        Scheduler::from_db(
-          raw_db,
-          *key,
-          match N::NETWORK {
-            NetworkId::Serai => panic!("adding a key for Serai"),
-            NetworkId::Bitcoin => Coin::Bitcoin,
-            // TODO: This is incomplete to DAI
-            NetworkId::Ethereum => Coin::Ether,
-            NetworkId::Monero => Coin::Monero,
-          },
-        )
-        .unwrap(),
-      );
+      schedulers.push(N::Scheduler::from_db(raw_db, *key, N::NETWORK).unwrap());

       // Load any TXs being actively signed
       let key = key.to_bytes();

@@ -245,17 +231,7 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
     let viewer = Some(MultisigViewer {
       activation_block,
       key: external_key,
-      scheduler: Scheduler::<N>::new::<D>(
-        txn,
-        external_key,
-        match N::NETWORK {
-          NetworkId::Serai => panic!("adding a key for Serai"),
-          NetworkId::Bitcoin => Coin::Bitcoin,
-          // TODO: This is incomplete to DAI
-          NetworkId::Ethereum => Coin::Ether,
-          NetworkId::Monero => Coin::Monero,
-        },
-      ),
+      scheduler: N::Scheduler::new::<D>(txn, external_key, N::NETWORK),
     });

     if self.existing.is_none() {

@@ -352,48 +328,30 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
     (existing_outputs, new_outputs)
   }

-  fn refund_plan(output: N::Output, refund_to: N::Address) -> Plan<N> {
+  fn refund_plan(
+    scheduler: &mut N::Scheduler,
+    txn: &mut D::Transaction<'_>,
+    output: N::Output,
+    refund_to: N::Address,
+  ) -> Plan<N> {
     log::info!("creating refund plan for {}", hex::encode(output.id()));
     assert_eq!(output.kind(), OutputType::External);
-    Plan {
-      key: output.key(),
-      // Uses a payment as this will still be successfully sent due to fee amortization,
-      // and because change is currently always a Serai key
-      payments: vec![Payment { address: refund_to, data: None, balance: output.balance() }],
-      inputs: vec![output],
-      change: None,
-    }
+    scheduler.refund_plan::<D>(txn, output, refund_to)
   }

-  fn forward_plan(&self, output: N::Output) -> Plan<N> {
+  // Returns the plan for forwarding if one is needed.
+  // Returns None if one is not needed to forward this output.
+  fn forward_plan(&mut self, txn: &mut D::Transaction<'_>, output: &N::Output) -> Option<Plan<N>> {
     log::info!("creating forwarding plan for {}", hex::encode(output.id()));
-    /*
-      Sending a Plan, with arbitrary data proxying the InInstruction, would require adding
-      a flow for networks which drop their data to still embed arbitrary data. It'd also have
-      edge cases causing failures (we'd need to manually provide the origin if it was implied,
-      which may exceed the encoding limit).
-
-      Instead, we save the InInstruction as we scan this output. Then, when the output is
-      successfully forwarded, we simply read it from the local database. This also saves the
-      costs of embedding arbitrary data.
-
-      Since we can't rely on the Eventuality system to detect if it's a forwarded transaction,
-      due to the asynchonicity of the Eventuality system, we instead interpret an Forwarded
-      output which has an amount associated with an InInstruction which was forwarded as having
-      been forwarded.
-    */
-    Plan {
-      key: self.existing.as_ref().unwrap().key,
-      payments: vec![Payment {
-        address: N::forward_address(self.new.as_ref().unwrap().key),
-        data: None,
-        balance: output.balance(),
-      }],
-      inputs: vec![output],
-      change: None,
-    }
+    let res = self.existing.as_mut().unwrap().scheduler.forward_plan::<D>(
+      txn,
+      output.clone(),
+      self.new.as_ref().expect("forwarding plan yet no new multisig").key,
+    );
+    if res.is_none() {
+      log::info!("no forwarding plan was necessary for {}", hex::encode(output.id()));
+    }
+    res
   }

   // Filter newly received outputs due to the step being RotationStep::ClosingExisting.

@@ -605,7 +563,31 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
       block_number
     {
       // Load plans created when we scanned the block
-      plans = PlansFromScanningDb::take_plans_from_scanning::<N>(txn, block_number).unwrap();
+      let scanning_plans =
+        PlansFromScanningDb::take_plans_from_scanning::<N>(txn, block_number).unwrap();
+      // Expand into actual plans
+      plans = scanning_plans
+        .into_iter()
+        .map(|plan| match plan {
+          PlanFromScanning::Refund(output, refund_to) => {
+            let existing = self.existing.as_mut().unwrap();
+            if output.key() == existing.key {
+              Self::refund_plan(&mut existing.scheduler, txn, output, refund_to)
+            } else {
+              let new = self
+                .new
+                .as_mut()
+                .expect("new multisig didn't exist yet output wasn't for existing multisig");
+              assert_eq!(output.key(), new.key, "output wasn't for existing nor new multisig");
+              Self::refund_plan(&mut new.scheduler, txn, output, refund_to)
+            }
+          }
+          PlanFromScanning::Forward(output) => self
+            .forward_plan(txn, &output)
+            .expect("supposed to forward an output yet no forwarding plan"),
+        })
+        .collect();

       for plan in &plans {
         plans_from_scanning.insert(plan.id());
       }
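When expanding scanned plans, refunds are built by whichever multisig's key received the output, asserting the output belongs to either the existing or the new multisig. That dispatch can be sketched standalone, with a toy `Key` type standing in for the curve point:

```rust
// Sketch of the key dispatch used when expanding PlanFromScanning::Refund:
// route to the existing multisig if the output is under its key, otherwise
// require a new multisig and that the output is under its key.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Key(u8);

fn scheduler_for(existing: Key, new: Option<Key>, output_key: Key) -> Key {
  if output_key == existing {
    existing
  } else {
    let new = new.expect("new multisig didn't exist yet output wasn't for existing multisig");
    assert_eq!(output_key, new, "output wasn't for existing nor new multisig");
    new
  }
}
```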

@@ -665,13 +647,23 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
     });

     for plan in &plans {
-      if plan.change == Some(N::change_address(plan.key)) {
-        // Assert these are only created during the expected step
-        match *step {
-          RotationStep::UseExisting => {}
-          RotationStep::NewAsChange |
-          RotationStep::ForwardFromExisting |
-          RotationStep::ClosingExisting => panic!("change was set to self despite rotating"),
+      // This first equality should 'never meaningfully' be false
+      // All created plans so far are by the existing multisig EXCEPT:
+      // A) If we created a refund plan from the new multisig (yet that wouldn't have change)
+      // B) The existing Scheduler returned a Plan for the new key (yet that happens with the SC
+      //    scheduler, yet that doesn't have change)
+      // Despite being 'unnecessary' now, it's better to explicitly ensure and be robust
+      if plan.key == self.existing.as_ref().unwrap().key {
+        if let Some(change) = N::change_address(plan.key) {
+          if plan.change == Some(change) {
+            // Assert these (self-change) are only created during the expected step
+            match *step {
+              RotationStep::UseExisting => {}
+              RotationStep::NewAsChange |
+              RotationStep::ForwardFromExisting |
+              RotationStep::ClosingExisting => panic!("change was set to self despite rotating"),
+            }
+          }
         }
       }
     }

@@ -853,15 +845,20 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
         let plans_at_start = plans.len();
         let (refund_to, instruction) = instruction_from_output::<N>(output);
         if let Some(mut instruction) = instruction {
-          // Build a dedicated Plan forwarding this
-          let forward_plan = self.forward_plan(output.clone());
-          plans.push(forward_plan.clone());
+          let Some(shimmed_plan) = N::Scheduler::shim_forward_plan(
+            output.clone(),
+            self.new.as_ref().expect("forwarding from existing yet no new multisig").key,
+          ) else {
+            // If this network doesn't need forwarding, report the output now
+            return true;
+          };
+          plans.push(PlanFromScanning::<N>::Forward(output.clone()));

           // Set the instruction for this output to be returned
           // We need to set it under the amount it's forwarded with, so prepare its forwarding
           // TX to determine the fees involved
           let PreparedSend { tx, post_fee_branches: _, operating_costs } =
-            prepare_send(network, block_number, forward_plan, 0).await;
+            prepare_send(network, block_number, shimmed_plan, 0).await;
           // operating_costs should not increase in a forwarding TX
           assert_eq!(operating_costs, 0);

@@ -872,12 +869,28 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
           // letting it die out
           if let Some(tx) = &tx {
             instruction.balance.amount.0 -= tx.0.fee();
+
+            /*
+              Sending a Plan, with arbitrary data proxying the InInstruction, would require
+              adding a flow for networks which drop their data to still embed arbitrary data.
+              It'd also have edge cases causing failures (we'd need to manually provide the
+              origin if it was implied, which may exceed the encoding limit).
+
+              Instead, we save the InInstruction as we scan this output. Then, when the
+              output is successfully forwarded, we simply read it from the local database.
+              This also saves the costs of embedding arbitrary data.
+
+              Since we can't rely on the Eventuality system to detect if it's a forwarded
+              transaction, due to the asynchronicity of the Eventuality system, we instead
+              interpret a Forwarded output which has an amount associated with an
+              InInstruction which was forwarded as having been forwarded.
+            */
             ForwardedOutputDb::save_forwarded_output(txn, &instruction);
           }
         } else if let Some(refund_to) = refund_to {
           if let Ok(refund_to) = refund_to.consume().try_into() {
             // Build a dedicated Plan refunding this
-            plans.push(Self::refund_plan(output.clone(), refund_to));
+            plans.push(PlanFromScanning::Refund(output.clone(), refund_to));
           }
         }

@@ -909,7 +922,7 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
       let Some(instruction) = instruction else {
         if let Some(refund_to) = refund_to {
           if let Ok(refund_to) = refund_to.consume().try_into() {
-            plans.push(Self::refund_plan(output.clone(), refund_to));
+            plans.push(PlanFromScanning::Refund(output.clone(), refund_to));
           }
         }
         continue;

@@ -999,9 +1012,9 @@ impl<D: Db, N: Network> MultisigManager<D, N> {
       // This must be emitted before ScannerEvent::Block for all completions of known Eventualities
      // within the block. Unknown Eventualities may have their Completed events emitted after
       // ScannerEvent::Block however.
-      ScannerEvent::Completed(key, block_number, id, tx) => {
-        ResolvedDb::resolve_plan::<N>(txn, &key, id, &tx.id());
-        (block_number, MultisigEvent::Completed(key, id, tx))
+      ScannerEvent::Completed(key, block_number, id, tx_id, completion) => {
+        ResolvedDb::resolve_plan::<N>(txn, &key, id, &tx_id);
+        (block_number, MultisigEvent::Completed(key, id, completion))
       }
     };

@@ -17,15 +17,25 @@ use tokio::{

 use crate::{
   Get, DbTxn, Db,
-  networks::{Output, Transaction, EventualitiesTracker, Block, Network},
+  networks::{Output, Transaction, Eventuality, EventualitiesTracker, Block, Network},
 };

 #[derive(Clone, Debug)]
 pub enum ScannerEvent<N: Network> {
   // Block scanned
-  Block { is_retirement_block: bool, block: <N::Block as Block<N>>::Id, outputs: Vec<N::Output> },
+  Block {
+    is_retirement_block: bool,
+    block: <N::Block as Block<N>>::Id,
+    outputs: Vec<N::Output>,
+  },
   // Eventuality completion found on-chain
-  Completed(Vec<u8>, usize, [u8; 32], N::Transaction),
+  Completed(
+    Vec<u8>,
+    usize,
+    [u8; 32],
+    <N::Transaction as Transaction<N>>::Id,
+    <N::Eventuality as Eventuality>::Completion,
+  ),
 }

 pub type ScannerEventChannel<N> = mpsc::UnboundedReceiver<ScannerEvent<N>>;
|
@@ -555,19 +565,25 @@ impl<N: Network, D: Db> Scanner<N, D> {
             }
           }

-          for (id, (block_number, tx)) in network
+          for (id, (block_number, tx, completion)) in network
             .get_eventuality_completions(scanner.eventualities.get_mut(&key_vec).unwrap(), &block)
             .await
           {
             info!(
               "eventuality {} resolved by {}, as found on chain",
               hex::encode(id),
-              hex::encode(&tx.id())
+              hex::encode(tx.as_ref())
             );

             completion_block_numbers.push(block_number);
             // This must be before the emission of ScannerEvent::Block, per commentary in mod.rs
-            if !scanner.emit(ScannerEvent::Completed(key_vec.clone(), block_number, id, tx)) {
+            if !scanner.emit(ScannerEvent::Completed(
+              key_vec.clone(),
+              block_number,
+              id,
+              tx,
+              completion,
+            )) {
               return;
             }
           }
processor/src/multisigs/scheduler/mod.rs (new file, 95 lines)
@@ -0,0 +1,95 @@
+use core::fmt::Debug;
+use std::io;
+
+use ciphersuite::Ciphersuite;
+
+use serai_client::primitives::{NetworkId, Balance};
+
+use crate::{networks::Network, Db, Payment, Plan};
+
+pub(crate) mod utxo;
+pub(crate) mod smart_contract;
+
+pub trait SchedulerAddendum: Send + Clone + PartialEq + Debug {
+  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self>;
+  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()>;
+}
+
+impl SchedulerAddendum for () {
+  fn read<R: io::Read>(_: &mut R) -> io::Result<Self> {
+    Ok(())
+  }
+  fn write<W: io::Write>(&self, _: &mut W) -> io::Result<()> {
+    Ok(())
+  }
+}
+
+pub trait Scheduler<N: Network>: Sized + Clone + PartialEq + Debug {
+  type Addendum: SchedulerAddendum;
+
+  /// Check if this Scheduler is empty.
+  fn empty(&self) -> bool;
+
+  /// Create a new Scheduler.
+  fn new<D: Db>(
+    txn: &mut D::Transaction<'_>,
+    key: <N::Curve as Ciphersuite>::G,
+    network: NetworkId,
+  ) -> Self;
+
+  /// Load a Scheduler from the DB.
+  fn from_db<D: Db>(
+    db: &D,
+    key: <N::Curve as Ciphersuite>::G,
+    network: NetworkId,
+  ) -> io::Result<Self>;
+
+  /// Check if a branch is usable.
+  fn can_use_branch(&self, balance: Balance) -> bool;
+
+  /// Schedule a series of outputs/payments.
+  fn schedule<D: Db>(
+    &mut self,
+    txn: &mut D::Transaction<'_>,
+    utxos: Vec<N::Output>,
+    payments: Vec<Payment<N>>,
+    key_for_any_change: <N::Curve as Ciphersuite>::G,
+    force_spend: bool,
+  ) -> Vec<Plan<N>>;
+
+  /// Consume all payments still pending within this Scheduler, without scheduling them.
+  fn consume_payments<D: Db>(&mut self, txn: &mut D::Transaction<'_>) -> Vec<Payment<N>>;
+
+  /// Note a branch output as having been created, with the amount it was actually created with,
+  /// or not having been created due to being too small.
+  fn created_output<D: Db>(
+    &mut self,
+    txn: &mut D::Transaction<'_>,
+    expected: u64,
+    actual: Option<u64>,
+  );
+
+  /// Refund a specific output.
+  fn refund_plan<D: Db>(
+    &mut self,
+    txn: &mut D::Transaction<'_>,
+    output: N::Output,
+    refund_to: N::Address,
+  ) -> Plan<N>;
+
+  /// Shim the forwarding Plan as necessary to obtain a fee estimate.
+  ///
+  /// If this Scheduler is for a Network which requires forwarding, this must return Some with a
+  /// plan with identical fee behavior. If forwarding isn't necessary, returns None.
+  fn shim_forward_plan(output: N::Output, to: <N::Curve as Ciphersuite>::G) -> Option<Plan<N>>;
+
+  /// Forward a specific output to the new multisig.
+  ///
+  /// Returns None if no forwarding is necessary. Must return Some if forwarding is necessary.
+  fn forward_plan<D: Db>(
+    &mut self,
+    txn: &mut D::Transaction<'_>,
+    output: N::Output,
+    to: <N::Curve as Ciphersuite>::G,
+  ) -> Option<Plan<N>>;
+}
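The `SchedulerAddendum` trait only demands byte-oriented `read`/`write`. As a sketch of what an implementor looks like, here is a hypothetical nonce-only addendum in plain std Rust (the type and its encoding are illustrative, not the real crate's API), mirroring the little-endian `u64` encoding the smart-contract scheduler uses:

```rust
use std::io::{self, Read, Write};

// Hypothetical addendum carrying only a nonce, shaped like a SchedulerAddendum impl.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct NonceAddendum(u64);

impl NonceAddendum {
    // Read the nonce back from its 8-byte little-endian encoding
    fn read<R: Read>(reader: &mut R) -> io::Result<Self> {
        let mut buf = [0u8; 8];
        reader.read_exact(&mut buf)?;
        Ok(NonceAddendum(u64::from_le_bytes(buf)))
    }
    // Write the nonce as 8 little-endian bytes
    fn write<W: Write>(&self, writer: &mut W) -> io::Result<()> {
        writer.write_all(&self.0.to_le_bytes())
    }
}

fn main() {
    let addendum = NonceAddendum(42);
    let mut buf = vec![];
    addendum.write(&mut buf).unwrap();
    // The encoding must round-trip exactly, as Plans are persisted and re-read
    let read_back = NonceAddendum::read(&mut buf.as_slice()).unwrap();
    assert_eq!(addendum, read_back);
    println!("round trip ok: {read_back:?}");
}
```

The round-trip property is the whole contract: a Plan's addendum is serialized into the DB and must decode to an equal value on restart.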
processor/src/multisigs/scheduler/smart_contract.rs (new file, 208 lines)
@@ -0,0 +1,208 @@
+use std::{io, collections::HashSet};
+
+use ciphersuite::{group::GroupEncoding, Ciphersuite};
+
+use serai_client::primitives::{NetworkId, Coin, Balance};
+
+use crate::{
+  Get, DbTxn, Db, Payment, Plan, create_db,
+  networks::{Output, Network},
+  multisigs::scheduler::{SchedulerAddendum, Scheduler as SchedulerTrait},
+};
+
+#[derive(Clone, PartialEq, Eq, Debug)]
+pub struct Scheduler<N: Network> {
+  key: <N::Curve as Ciphersuite>::G,
+  coins: HashSet<Coin>,
+  rotated: bool,
+}
+
+#[derive(Clone, Copy, PartialEq, Eq, Debug)]
+pub enum Addendum<N: Network> {
+  Nonce(u64),
+  RotateTo { nonce: u64, new_key: <N::Curve as Ciphersuite>::G },
+}
+
+impl<N: Network> SchedulerAddendum for Addendum<N> {
+  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
+    let mut kind = [0xff];
+    reader.read_exact(&mut kind)?;
+    match kind[0] {
+      0 => {
+        let mut nonce = [0; 8];
+        reader.read_exact(&mut nonce)?;
+        Ok(Addendum::Nonce(u64::from_le_bytes(nonce)))
+      }
+      1 => {
+        let mut nonce = [0; 8];
+        reader.read_exact(&mut nonce)?;
+        let nonce = u64::from_le_bytes(nonce);
+
+        let new_key = N::Curve::read_G(reader)?;
+        Ok(Addendum::RotateTo { nonce, new_key })
+      }
+      _ => Err(io::Error::other("reading unknown Addendum type"))?,
+    }
+  }
+  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
+    match self {
+      Addendum::Nonce(nonce) => {
+        writer.write_all(&[0])?;
+        writer.write_all(&nonce.to_le_bytes())
+      }
+      Addendum::RotateTo { nonce, new_key } => {
+        writer.write_all(&[1])?;
+        writer.write_all(&nonce.to_le_bytes())?;
+        writer.write_all(new_key.to_bytes().as_ref())
+      }
+    }
+  }
+}
+
+create_db! {
+  SchedulerDb {
+    LastNonce: () -> u64,
+    RotatedTo: (key: &[u8]) -> Vec<u8>,
+  }
+}
+
+impl<N: Network<Scheduler = Self>> SchedulerTrait<N> for Scheduler<N> {
+  type Addendum = Addendum<N>;
+
+  /// Check if this Scheduler is empty.
+  fn empty(&self) -> bool {
+    self.rotated
+  }
+
+  /// Create a new Scheduler.
+  fn new<D: Db>(
+    _txn: &mut D::Transaction<'_>,
+    key: <N::Curve as Ciphersuite>::G,
+    network: NetworkId,
+  ) -> Self {
+    assert!(N::branch_address(key).is_none());
+    assert!(N::change_address(key).is_none());
+    assert!(N::forward_address(key).is_none());
+
+    Scheduler { key, coins: network.coins().iter().copied().collect(), rotated: false }
+  }
+
+  /// Load a Scheduler from the DB.
+  fn from_db<D: Db>(
+    db: &D,
+    key: <N::Curve as Ciphersuite>::G,
+    network: NetworkId,
+  ) -> io::Result<Self> {
+    Ok(Scheduler {
+      key,
+      coins: network.coins().iter().copied().collect(),
+      rotated: RotatedTo::get(db, key.to_bytes().as_ref()).is_some(),
+    })
+  }
+
+  fn can_use_branch(&self, _balance: Balance) -> bool {
+    false
+  }
+
+  fn schedule<D: Db>(
+    &mut self,
+    txn: &mut D::Transaction<'_>,
+    utxos: Vec<N::Output>,
+    payments: Vec<Payment<N>>,
+    key_for_any_change: <N::Curve as Ciphersuite>::G,
+    force_spend: bool,
+  ) -> Vec<Plan<N>> {
+    for utxo in utxos {
+      assert!(self.coins.contains(&utxo.balance().coin));
+    }
+
+    let mut nonce = LastNonce::get(txn).map_or(0, |nonce| nonce + 1);
+    let mut plans = vec![];
+    for chunk in payments.as_slice().chunks(N::MAX_OUTPUTS) {
+      // Once we rotate, all further payments should be scheduled via the new multisig
+      assert!(!self.rotated);
+      plans.push(Plan {
+        key: self.key,
+        inputs: vec![],
+        payments: chunk.to_vec(),
+        change: None,
+        scheduler_addendum: Addendum::Nonce(nonce),
+      });
+      nonce += 1;
+    }
+
+    // If we're supposed to rotate to the new key, create an empty Plan which will signify the key
+    // update
+    if force_spend && (!self.rotated) {
+      plans.push(Plan {
+        key: self.key,
+        inputs: vec![],
+        payments: vec![],
+        change: None,
+        scheduler_addendum: Addendum::RotateTo { nonce, new_key: key_for_any_change },
+      });
+      nonce += 1;
+      self.rotated = true;
+      RotatedTo::set(
+        txn,
+        self.key.to_bytes().as_ref(),
+        &key_for_any_change.to_bytes().as_ref().to_vec(),
+      );
+    }
+
+    LastNonce::set(txn, &nonce);
+
+    plans
+  }
+
+  fn consume_payments<D: Db>(&mut self, _txn: &mut D::Transaction<'_>) -> Vec<Payment<N>> {
+    vec![]
+  }
+
+  fn created_output<D: Db>(
+    &mut self,
+    _txn: &mut D::Transaction<'_>,
+    _expected: u64,
+    _actual: Option<u64>,
+  ) {
+    panic!("Smart Contract Scheduler created a Branch output")
+  }
+
+  /// Refund a specific output.
+  fn refund_plan<D: Db>(
+    &mut self,
+    txn: &mut D::Transaction<'_>,
+    output: N::Output,
+    refund_to: N::Address,
+  ) -> Plan<N> {
+    let current_key = RotatedTo::get(txn, self.key.to_bytes().as_ref())
+      .and_then(|key_bytes| <N::Curve as Ciphersuite>::read_G(&mut key_bytes.as_slice()).ok())
+      .unwrap_or(self.key);
+
+    let nonce = LastNonce::get(txn).map_or(0, |nonce| nonce + 1);
+    LastNonce::set(txn, &(nonce + 1));
+    Plan {
+      key: current_key,
+      inputs: vec![],
+      payments: vec![Payment { address: refund_to, data: None, balance: output.balance() }],
+      change: None,
+      scheduler_addendum: Addendum::Nonce(nonce),
+    }
+  }
+
+  fn shim_forward_plan(_output: N::Output, _to: <N::Curve as Ciphersuite>::G) -> Option<Plan<N>> {
+    None
+  }
+
+  /// Forward a specific output to the new multisig.
+  ///
+  /// Returns None if no forwarding is necessary.
+  fn forward_plan<D: Db>(
+    &mut self,
+    _txn: &mut D::Transaction<'_>,
+    _output: N::Output,
+    _to: <N::Curve as Ciphersuite>::G,
+  ) -> Option<Plan<N>> {
+    None
+  }
+}
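The smart-contract scheduler allocates one sequential on-chain nonce per chunk of at most `N::MAX_OUTPUTS` payments, then persists the watermark. A minimal sketch of that allocation (the `PlanSketch` type, a `MAX_OUTPUTS` of 16, and the `last_nonce` parameter are simplified stand-ins for the real `Plan`/DB types):

```rust
// Sketch of the nonce-per-chunk allocation used by the smart-contract scheduler.
const MAX_OUTPUTS: usize = 16;

#[derive(Debug, PartialEq)]
struct PlanSketch {
    nonce: u64,
    payments: Vec<u64>, // amounts only, for illustration
}

fn schedule(last_nonce: Option<u64>, payments: &[u64]) -> (Vec<PlanSketch>, u64) {
    // The next nonce is one past the last stored one, or 0 if none was stored yet
    let mut nonce = last_nonce.map_or(0, |n| n + 1);
    let mut plans = vec![];
    for chunk in payments.chunks(MAX_OUTPUTS) {
        // Each chunk of payments becomes one Plan consuming one nonce
        plans.push(PlanSketch { nonce, payments: chunk.to_vec() });
        nonce += 1;
    }
    // `nonce` would be persisted as the new LastNonce watermark
    (plans, nonce)
}

fn main() {
    let payments: Vec<u64> = (0 .. 40).collect();
    let (plans, next) = schedule(None, &payments);
    assert_eq!(plans.len(), 3); // chunks of 16, 16, 8
    assert_eq!(plans[2].nonce, 2);
    assert_eq!(next, 3);
    println!("{} plans, next nonce {next}", plans.len());
}
```

Tying each Plan to a unique, strictly increasing nonce is what lets the Router reject replays: a completed nonce can never be consumed by a second transaction.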
@@ -5,16 +5,17 @@ use std::{

 use ciphersuite::{group::GroupEncoding, Ciphersuite};

-use serai_client::primitives::{Coin, Amount, Balance};
+use serai_client::primitives::{NetworkId, Coin, Amount, Balance};

 use crate::{
-  networks::{OutputType, Output, Network},
   DbTxn, Db, Payment, Plan,
+  networks::{OutputType, Output, Network, UtxoNetwork},
+  multisigs::scheduler::Scheduler as SchedulerTrait,
 };

-/// Stateless, deterministic output/payment manager.
-#[derive(PartialEq, Eq, Debug)]
-pub struct Scheduler<N: Network> {
+/// Deterministic output/payment manager.
+#[derive(Clone, PartialEq, Eq, Debug)]
+pub struct Scheduler<N: UtxoNetwork> {
   key: <N::Curve as Ciphersuite>::G,
   coin: Coin,
@@ -46,7 +47,7 @@ fn scheduler_key<D: Db, G: GroupEncoding>(key: &G) -> Vec<u8> {
   D::key(b"SCHEDULER", b"scheduler", key.to_bytes())
 }

-impl<N: Network> Scheduler<N> {
+impl<N: UtxoNetwork<Scheduler = Self>> Scheduler<N> {
   pub fn empty(&self) -> bool {
     self.queued_plans.is_empty() &&
       self.plans.is_empty() &&
@@ -144,8 +145,18 @@ impl<N: Network> Scheduler<N> {
   pub fn new<D: Db>(
     txn: &mut D::Transaction<'_>,
     key: <N::Curve as Ciphersuite>::G,
-    coin: Coin,
+    network: NetworkId,
   ) -> Self {
+    assert!(N::branch_address(key).is_some());
+    assert!(N::change_address(key).is_some());
+    assert!(N::forward_address(key).is_some());
+
+    let coin = {
+      let coins = network.coins();
+      assert_eq!(coins.len(), 1);
+      coins[0]
+    };
+
     let res = Scheduler {
       key,
       coin,
@@ -159,7 +170,17 @@ impl<N: Network> Scheduler<N> {
     res
   }

-  pub fn from_db<D: Db>(db: &D, key: <N::Curve as Ciphersuite>::G, coin: Coin) -> io::Result<Self> {
+  pub fn from_db<D: Db>(
+    db: &D,
+    key: <N::Curve as Ciphersuite>::G,
+    network: NetworkId,
+  ) -> io::Result<Self> {
+    let coin = {
+      let coins = network.coins();
+      assert_eq!(coins.len(), 1);
+      coins[0]
+    };
+
     let scheduler = db.get(scheduler_key::<D, _>(&key)).unwrap_or_else(|| {
       panic!("loading scheduler from DB without scheduler for {}", hex::encode(key.to_bytes()))
     });
@@ -201,7 +222,7 @@ impl<N: Network> Scheduler<N> {
       amount
     };

-    let branch_address = N::branch_address(self.key);
+    let branch_address = N::branch_address(self.key).unwrap();

     // If we have more payments than we can handle in a single TX, create plans for them
     // TODO2: This isn't perfect. For 258 outputs, and a MAX_OUTPUTS of 16, this will create:
@@ -237,7 +258,8 @@ impl<N: Network> Scheduler<N> {
       key: self.key,
       inputs,
       payments,
-      change: Some(N::change_address(key_for_any_change)).filter(|_| change),
+      change: Some(N::change_address(key_for_any_change).unwrap()).filter(|_| change),
+      scheduler_addendum: (),
     }
   }
@@ -305,7 +327,7 @@ impl<N: Network> Scheduler<N> {
     its *own* branch address, since created_output is called on the signer's Scheduler.
     */
     {
-      let branch_address = N::branch_address(self.key);
+      let branch_address = N::branch_address(self.key).unwrap();
       payments =
         payments.drain(..).filter(|payment| payment.address != branch_address).collect::<Vec<_>>();
     }
@@ -357,7 +379,8 @@ impl<N: Network> Scheduler<N> {
       key: self.key,
       inputs: chunk,
       payments: vec![],
-      change: Some(N::change_address(key_for_any_change)),
+      change: Some(N::change_address(key_for_any_change).unwrap()),
+      scheduler_addendum: (),
     })
   }
@@ -403,7 +426,8 @@ impl<N: Network> Scheduler<N> {
       key: self.key,
       inputs: self.utxos.drain(..).collect::<Vec<_>>(),
       payments: vec![],
-      change: Some(N::change_address(key_for_any_change)),
+      change: Some(N::change_address(key_for_any_change).unwrap()),
+      scheduler_addendum: (),
     });
   }
@@ -435,9 +459,6 @@ impl<N: Network> Scheduler<N> {

   // Note a branch output as having been created, with the amount it was actually created with,
   // or not having been created due to being too small
-  // This can be called whenever, so long as it's properly ordered
-  // (it's independent to Serai/the chain we're scheduling over, yet still expects outputs to be
-  // created in the same order Plans are returned in)
   pub fn created_output<D: Db>(
     &mut self,
     txn: &mut D::Transaction<'_>,
@@ -501,3 +522,106 @@ impl<N: Network> Scheduler<N> {
     txn.put(scheduler_key::<D, _>(&self.key), self.serialize());
   }
 }
+
+impl<N: UtxoNetwork<Scheduler = Self>> SchedulerTrait<N> for Scheduler<N> {
+  type Addendum = ();
+
+  /// Check if this Scheduler is empty.
+  fn empty(&self) -> bool {
+    Scheduler::empty(self)
+  }
+
+  /// Create a new Scheduler.
+  fn new<D: Db>(
+    txn: &mut D::Transaction<'_>,
+    key: <N::Curve as Ciphersuite>::G,
+    network: NetworkId,
+  ) -> Self {
+    Scheduler::new::<D>(txn, key, network)
+  }
+
+  /// Load a Scheduler from the DB.
+  fn from_db<D: Db>(
+    db: &D,
+    key: <N::Curve as Ciphersuite>::G,
+    network: NetworkId,
+  ) -> io::Result<Self> {
+    Scheduler::from_db::<D>(db, key, network)
+  }
+
+  /// Check if a branch is usable.
+  fn can_use_branch(&self, balance: Balance) -> bool {
+    Scheduler::can_use_branch(self, balance)
+  }
+
+  /// Schedule a series of outputs/payments.
+  fn schedule<D: Db>(
+    &mut self,
+    txn: &mut D::Transaction<'_>,
+    utxos: Vec<N::Output>,
+    payments: Vec<Payment<N>>,
+    key_for_any_change: <N::Curve as Ciphersuite>::G,
+    force_spend: bool,
+  ) -> Vec<Plan<N>> {
+    Scheduler::schedule::<D>(self, txn, utxos, payments, key_for_any_change, force_spend)
+  }
+
+  /// Consume all payments still pending within this Scheduler, without scheduling them.
+  fn consume_payments<D: Db>(&mut self, txn: &mut D::Transaction<'_>) -> Vec<Payment<N>> {
+    Scheduler::consume_payments::<D>(self, txn)
+  }
+
+  /// Note a branch output as having been created, with the amount it was actually created with,
+  /// or not having been created due to being too small.
+  // TODO: Move this to Balance.
+  fn created_output<D: Db>(
+    &mut self,
+    txn: &mut D::Transaction<'_>,
+    expected: u64,
+    actual: Option<u64>,
+  ) {
+    Scheduler::created_output::<D>(self, txn, expected, actual)
+  }
+
+  fn refund_plan<D: Db>(
+    &mut self,
+    _: &mut D::Transaction<'_>,
+    output: N::Output,
+    refund_to: N::Address,
+  ) -> Plan<N> {
+    Plan {
+      key: output.key(),
+      // Uses a payment as this will still be successfully sent due to fee amortization,
+      // and because change is currently always a Serai key
+      payments: vec![Payment { address: refund_to, data: None, balance: output.balance() }],
+      inputs: vec![output],
+      change: None,
+      scheduler_addendum: (),
+    }
+  }
+
+  fn shim_forward_plan(output: N::Output, to: <N::Curve as Ciphersuite>::G) -> Option<Plan<N>> {
+    Some(Plan {
+      key: output.key(),
+      payments: vec![Payment {
+        address: N::forward_address(to).unwrap(),
+        data: None,
+        balance: output.balance(),
+      }],
+      inputs: vec![output],
+      change: None,
+      scheduler_addendum: (),
+    })
+  }
+
+  fn forward_plan<D: Db>(
+    &mut self,
+    _: &mut D::Transaction<'_>,
+    output: N::Output,
+    to: <N::Curve as Ciphersuite>::G,
+  ) -> Option<Plan<N>> {
+    assert_eq!(self.key, output.key());
+    // Call shim as shim returns the actual
+    Self::shim_forward_plan(output, to)
+  }
+}
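`shim_forward_plan` exists so the scan path can price a forward before committing to it; for a UTXO network the shim is already the exact forwarding Plan, so the stateful `forward_plan` can simply delegate. A toy sketch of that delegation (stand-in `Output`/`PlanSketch` types, not the processor's real ones):

```rust
// Sketch of the shim/forward split: the shim builds a fee-identical plan statically,
// and the stateful forward_plan reuses it when the shim is already exact.
#[derive(Clone, Debug, PartialEq)]
struct Output {
    amount: u64,
}

#[derive(Debug, PartialEq)]
struct PlanSketch {
    input: Output,
    forward_amount: u64,
}

// Statically-buildable plan, used for fee estimation during scanning
fn shim_forward_plan(output: Output) -> Option<PlanSketch> {
    let forward_amount = output.amount;
    Some(PlanSketch { input: output, forward_amount })
}

// The stateful method: for a UTXO chain, the shim is the actual plan
fn forward_plan(output: Output) -> Option<PlanSketch> {
    shim_forward_plan(output)
}

fn main() {
    let out = Output { amount: 1_000 };
    assert_eq!(forward_plan(out.clone()), shim_forward_plan(out));
    println!("forward_plan matches shim_forward_plan");
}
```

The smart-contract scheduler takes the other branch of this design: both methods return `None`, since a single contract key needs no forwarding transactions at all.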
@@ -52,9 +52,10 @@ use crate::{
   networks::{
     NetworkError, Block as BlockTrait, OutputType, Output as OutputTrait,
     Transaction as TransactionTrait, SignableTransaction as SignableTransactionTrait,
-    Eventuality as EventualityTrait, EventualitiesTracker, Network,
+    Eventuality as EventualityTrait, EventualitiesTracker, Network, UtxoNetwork,
   },
   Payment,
+  multisigs::scheduler::utxo::Scheduler,
 };

 #[derive(Clone, PartialEq, Eq, Debug)]
@@ -178,14 +179,6 @@ impl TransactionTrait<Bitcoin> for Transaction {
     hash.reverse();
     hash
   }
-  fn serialize(&self) -> Vec<u8> {
-    let mut buf = vec![];
-    self.consensus_encode(&mut buf).unwrap();
-    buf
-  }
-  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
-    Transaction::consensus_decode(reader).map_err(|e| io::Error::other(format!("{e}")))
-  }

   #[cfg(test)]
   async fn fee(&self, network: &Bitcoin) -> u64 {
@@ -209,7 +202,23 @@ impl TransactionTrait<Bitcoin> for Transaction {
 #[derive(Clone, PartialEq, Eq, Debug)]
 pub struct Eventuality([u8; 32]);

+#[derive(Clone, PartialEq, Eq, Default, Debug)]
+pub struct EmptyClaim;
+impl AsRef<[u8]> for EmptyClaim {
+  fn as_ref(&self) -> &[u8] {
+    &[]
+  }
+}
+impl AsMut<[u8]> for EmptyClaim {
+  fn as_mut(&mut self) -> &mut [u8] {
+    &mut []
+  }
+}
+
 impl EventualityTrait for Eventuality {
+  type Claim = EmptyClaim;
+  type Completion = Transaction;
+
   fn lookup(&self) -> Vec<u8> {
     self.0.to_vec()
   }
@@ -224,6 +233,18 @@ impl EventualityTrait for Eventuality {
   fn serialize(&self) -> Vec<u8> {
     self.0.to_vec()
   }
+
+  fn claim(_: &Transaction) -> EmptyClaim {
+    EmptyClaim
+  }
+  fn serialize_completion(completion: &Transaction) -> Vec<u8> {
+    let mut buf = vec![];
+    completion.consensus_encode(&mut buf).unwrap();
+    buf
+  }
+  fn read_completion<R: io::Read>(reader: &mut R) -> io::Result<Transaction> {
+    Transaction::consensus_decode(reader).map_err(|e| io::Error::other(format!("{e}")))
+  }
 }

 #[derive(Clone, Debug)]
@@ -374,8 +395,12 @@ impl Bitcoin {
     for input in &tx.input {
       let mut input_tx = input.previous_output.txid.to_raw_hash().to_byte_array();
       input_tx.reverse();
-      in_value += self.get_transaction(&input_tx).await?.output
-        [usize::try_from(input.previous_output.vout).unwrap()]
+      in_value += self
+        .rpc
+        .get_transaction(&input_tx)
+        .await
+        .map_err(|_| NetworkError::ConnectionError)?
+        .output[usize::try_from(input.previous_output.vout).unwrap()]
         .value
         .to_sat();
     }
@@ -537,6 +562,25 @@ impl Bitcoin {
     }
   }
+
+// Bitcoin has a max weight of 400,000 (MAX_STANDARD_TX_WEIGHT)
+// A non-SegWit TX will have 4 weight units per byte, leaving a max size of 100,000 bytes
+// While our inputs are entirely SegWit, such fine tuning is not necessary and could create
+// issues in the future (if the size decreases or we misevaluate it)
+// It also offers a minimal amount of benefit when we are able to logarithmically accumulate
+// inputs
+// For 128-byte inputs (36-byte output specification, 64-byte signature, whatever overhead) and
+// 64-byte outputs (40-byte script, 8-byte amount, whatever overhead), they together take up 192
+// bytes
+// 100,000 / 192 = 520
+// 520 * 192 leaves 160 bytes of overhead for the transaction structure itself
+const MAX_INPUTS: usize = 520;
+const MAX_OUTPUTS: usize = 520;
+
+fn address_from_key(key: ProjectivePoint) -> Address {
+  Address::new(BAddress::<NetworkChecked>::new(BNetwork::Bitcoin, address_payload(key).unwrap()))
+    .unwrap()
+}
+
 #[async_trait]
 impl Network for Bitcoin {
   type Curve = Secp256k1;
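The arithmetic behind `MAX_INPUTS`/`MAX_OUTPUTS = 520` in the comment above can be checked directly (plain arithmetic, no Bitcoin libraries):

```rust
// Verifies the size budget behind MAX_INPUTS/MAX_OUTPUTS = 520.
fn main() {
    let max_weight = 400_000u64; // MAX_STANDARD_TX_WEIGHT
    let weight_units_per_byte = 4; // non-SegWit accounting: 4 weight units per byte
    let max_bytes = max_weight / weight_units_per_byte;
    assert_eq!(max_bytes, 100_000);

    let input_bytes = 128u64; // output specification + signature + overhead
    let output_bytes = 64u64; // script + amount + overhead
    let pair = input_bytes + output_bytes; // 192 bytes per input/output pair
    assert_eq!(max_bytes / pair, 520);

    // 520 pairs leave 160 bytes for the transaction structure itself
    assert_eq!(max_bytes - (520 * pair), 160);
    println!("520 pairs fit, {} bytes of structural overhead", max_bytes - (520 * pair));
}
```

Using the conservative non-SegWit byte budget means the limit stays valid even if the SegWit discount assumptions ever change.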
@@ -549,6 +593,8 @@ impl Network for Bitcoin {
   type Eventuality = Eventuality;
   type TransactionMachine = TransactionMachine;

+  type Scheduler = Scheduler<Bitcoin>;
+
   type Address = Address;

   const NETWORK: NetworkId = NetworkId::Bitcoin;
@@ -598,19 +644,7 @@ impl Network for Bitcoin {
   // aggregation TX
   const COST_TO_AGGREGATE: u64 = 800;

-  // Bitcoin has a max weight of 400,000 (MAX_STANDARD_TX_WEIGHT)
-  // A non-SegWit TX will have 4 weight units per byte, leaving a max size of 100,000 bytes
-  // While our inputs are entirely SegWit, such fine tuning is not necessary and could create
-  // issues in the future (if the size decreases or we misevaluate it)
-  // It also offers a minimal amount of benefit when we are able to logarithmically accumulate
-  // inputs
-  // For 128-byte inputs (36-byte output specification, 64-byte signature, whatever overhead) and
-  // 64-byte outputs (40-byte script, 8-byte amount, whatever overhead), they together take up 192
-  // bytes
-  // 100,000 / 192 = 520
-  // 520 * 192 leaves 160 bytes of overhead for the transaction structure itself
-  const MAX_INPUTS: usize = 520;
-  const MAX_OUTPUTS: usize = 520;
+  const MAX_OUTPUTS: usize = MAX_OUTPUTS;

   fn tweak_keys(keys: &mut ThresholdKeys<Self::Curve>) {
     *keys = tweak_keys(keys);
@ -618,24 +652,24 @@ impl Network for Bitcoin {
|
||||||
scanner(keys.group_key());
|
scanner(keys.group_key());
|
||||||
}
|
}
|
||||||
|
|
||||||
fn external_address(key: ProjectivePoint) -> Address {
|
#[cfg(test)]
|
||||||
Address::new(BAddress::<NetworkChecked>::new(BNetwork::Bitcoin, address_payload(key).unwrap()))
|
async fn external_address(&self, key: ProjectivePoint) -> Address {
|
||||||
.unwrap()
|
address_from_key(key)
|
||||||
}
|
}
|
||||||
|
|
||||||
fn branch_address(key: ProjectivePoint) -> Address {
|
fn branch_address(key: ProjectivePoint) -> Option<Address> {
|
||||||
let (_, offsets, _) = scanner(key);
|
let (_, offsets, _) = scanner(key);
|
||||||
Self::external_address(key + (ProjectivePoint::GENERATOR * offsets[&OutputType::Branch]))
|
Some(address_from_key(key + (ProjectivePoint::GENERATOR * offsets[&OutputType::Branch])))
|
||||||
}
|
}
|
||||||
|
|
||||||
fn change_address(key: ProjectivePoint) -> Address {
|
fn change_address(key: ProjectivePoint) -> Option<Address> {
|
||||||
let (_, offsets, _) = scanner(key);
|
let (_, offsets, _) = scanner(key);
|
||||||
Self::external_address(key + (ProjectivePoint::GENERATOR * offsets[&OutputType::Change]))
|
Some(address_from_key(key + (ProjectivePoint::GENERATOR * offsets[&OutputType::Change])))
|
||||||
}
|
}
|
||||||
|
|
||||||
fn forward_address(key: ProjectivePoint) -> Address {
|
fn forward_address(key: ProjectivePoint) -> Option<Address> {
|
||||||
let (_, offsets, _) = scanner(key);
|
let (_, offsets, _) = scanner(key);
|
||||||
Self::external_address(key + (ProjectivePoint::GENERATOR * offsets[&OutputType::Forwarded]))
|
Some(address_from_key(key + (ProjectivePoint::GENERATOR * offsets[&OutputType::Forwarded])))
|
||||||
}
|
}
|
||||||
|
|
||||||
async fn get_latest_block_number(&self) -> Result<usize, NetworkError> {
|
async fn get_latest_block_number(&self) -> Result<usize, NetworkError> {
|
||||||
|
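The 100,000 / 192 arithmetic in the comments above can be sanity-checked in isolation. A minimal sketch, using the byte figures stated in those comments (`max_pairs` is a hypothetical helper for illustration, not part of the processor):

```rust
// Checks the size budget from the comments: a 128-byte input and a 64-byte
// output pair up into 192 bytes, so a 100,000-byte budget fits 520 such
// pairs, with 160 bytes left over for the transaction structure itself.
fn max_pairs(budget: usize, input_size: usize, output_size: usize) -> (usize, usize) {
  let pair = input_size + output_size;
  (budget / pair, budget % pair)
}

fn main() {
  let (pairs, slack) = max_pairs(100_000, 128, 64);
  assert_eq!((pairs, slack), (520, 160));
  println!("{pairs} input/output pairs, {slack} bytes of slack");
}
```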
@@ -682,7 +716,7 @@ impl Network for Bitcoin {
     spent_tx.reverse();
     let mut tx;
     while {
-      tx = self.get_transaction(&spent_tx).await;
+      tx = self.rpc.get_transaction(&spent_tx).await;
       tx.is_err()
     } {
       log::error!("couldn't get transaction from bitcoin node: {tx:?}");
@@ -710,7 +744,7 @@ impl Network for Bitcoin {
     &self,
     eventualities: &mut EventualitiesTracker<Eventuality>,
     block: &Self::Block,
-  ) -> HashMap<[u8; 32], (usize, Transaction)> {
+  ) -> HashMap<[u8; 32], (usize, [u8; 32], Transaction)> {
     let mut res = HashMap::new();
     if eventualities.map.is_empty() {
       return res;
@@ -719,11 +753,11 @@ impl Network for Bitcoin {
     fn check_block(
       eventualities: &mut EventualitiesTracker<Eventuality>,
       block: &Block,
-      res: &mut HashMap<[u8; 32], (usize, Transaction)>,
+      res: &mut HashMap<[u8; 32], (usize, [u8; 32], Transaction)>,
     ) {
       for tx in &block.txdata[1 ..] {
         if let Some((plan, _)) = eventualities.map.remove(tx.id().as_slice()) {
-          res.insert(plan, (eventualities.block_number, tx.clone()));
+          res.insert(plan, (eventualities.block_number, tx.id(), tx.clone()));
         }
       }
     }

@@ -770,7 +804,6 @@ impl Network for Bitcoin {
   async fn needed_fee(
     &self,
     block_number: usize,
-    _: &[u8; 32],
     inputs: &[Output],
     payments: &[Payment<Self>],
     change: &Option<Address>,
@@ -787,9 +820,11 @@ impl Network for Bitcoin {
     &self,
     block_number: usize,
     plan_id: &[u8; 32],
+    _key: ProjectivePoint,
     inputs: &[Output],
     payments: &[Payment<Self>],
     change: &Option<Address>,
+    (): &(),
   ) -> Result<Option<(Self::SignableTransaction, Self::Eventuality)>, NetworkError> {
     Ok(self.make_signable_transaction(block_number, inputs, payments, change, false).await?.map(
       |signable| {
@@ -803,7 +838,7 @@ impl Network for Bitcoin {
     ))
   }

-  async fn attempt_send(
+  async fn attempt_sign(
     &self,
     keys: ThresholdKeys<Self::Curve>,
     transaction: Self::SignableTransaction,
@@ -817,7 +852,7 @@ impl Network for Bitcoin {
     )
   }

-  async fn publish_transaction(&self, tx: &Self::Transaction) -> Result<(), NetworkError> {
+  async fn publish_completion(&self, tx: &Transaction) -> Result<(), NetworkError> {
     match self.rpc.send_raw_transaction(tx).await {
       Ok(_) => (),
       Err(RpcError::ConnectionError) => Err(NetworkError::ConnectionError)?,
@@ -828,12 +863,14 @@ impl Network for Bitcoin {
     Ok(())
   }

-  async fn get_transaction(&self, id: &[u8; 32]) -> Result<Transaction, NetworkError> {
-    self.rpc.get_transaction(id).await.map_err(|_| NetworkError::ConnectionError)
-  }
-
-  fn confirm_completion(&self, eventuality: &Self::Eventuality, tx: &Transaction) -> bool {
-    eventuality.0 == tx.id()
+  async fn confirm_completion(
+    &self,
+    eventuality: &Self::Eventuality,
+    _: &EmptyClaim,
+  ) -> Result<Option<Transaction>, NetworkError> {
+    Ok(Some(
+      self.rpc.get_transaction(&eventuality.0).await.map_err(|_| NetworkError::ConnectionError)?,
+    ))
   }

   #[cfg(test)]
@@ -841,6 +878,20 @@ impl Network for Bitcoin {
     self.rpc.get_block_number(id).await.unwrap()
   }

+  #[cfg(test)]
+  async fn check_eventuality_by_claim(
+    &self,
+    eventuality: &Self::Eventuality,
+    _: &EmptyClaim,
+  ) -> bool {
+    self.rpc.get_transaction(&eventuality.0).await.is_ok()
+  }
+
+  #[cfg(test)]
+  async fn get_transaction_by_eventuality(&self, _: usize, id: &Eventuality) -> Transaction {
+    self.rpc.get_transaction(&id.0).await.unwrap()
+  }
+
   #[cfg(test)]
   async fn mine_block(&self) {
     self
@@ -892,3 +943,7 @@ impl Network for Bitcoin {
     self.get_block(block).await.unwrap()
   }
 }
+
+impl UtxoNetwork for Bitcoin {
+  const MAX_INPUTS: usize = MAX_INPUTS;
+}

processor/src/networks/ethereum.rs (new file, 827 lines)
@@ -0,0 +1,827 @@
use core::{fmt::Debug, time::Duration};
use std::{
  sync::Arc,
  collections::{HashSet, HashMap},
  io,
};

use async_trait::async_trait;

use ciphersuite::{group::GroupEncoding, Ciphersuite, Secp256k1};
use frost::ThresholdKeys;

use ethereum_serai::{
  alloy_core::primitives::U256,
  alloy_rpc_types::{BlockNumberOrTag, Transaction},
  alloy_simple_request_transport::SimpleRequest,
  alloy_rpc_client::ClientBuilder,
  alloy_provider::{Provider, RootProvider},
  crypto::{PublicKey, Signature},
  deployer::Deployer,
  router::{Router, Coin as EthereumCoin, InInstruction as EthereumInInstruction},
  machine::*,
};
#[cfg(test)]
use ethereum_serai::alloy_core::primitives::B256;

use tokio::{
  time::sleep,
  sync::{RwLock, RwLockReadGuard},
};

use serai_client::{
  primitives::{Coin, Amount, Balance, NetworkId},
  validator_sets::primitives::Session,
};

use crate::{
  Db, Payment,
  networks::{
    OutputType, Output, Transaction as TransactionTrait, SignableTransaction, Block,
    Eventuality as EventualityTrait, EventualitiesTracker, NetworkError, Network,
  },
  key_gen::NetworkKeyDb,
  multisigs::scheduler::{
    Scheduler as SchedulerTrait,
    smart_contract::{Addendum, Scheduler},
  },
};

#[cfg(not(test))]
const DAI: [u8; 20] =
  match const_hex::const_decode_to_array(b"0x6B175474E89094C44Da98b954EedeAC495271d0F") {
    Ok(res) => res,
    Err(_) => panic!("invalid non-test DAI hex address"),
  };
#[cfg(test)] // TODO
const DAI: [u8; 20] =
  match const_hex::const_decode_to_array(b"0000000000000000000000000000000000000000") {
    Ok(res) => res,
    Err(_) => panic!("invalid test DAI hex address"),
  };

fn coin_to_serai_coin(coin: &EthereumCoin) -> Option<Coin> {
  match coin {
    EthereumCoin::Ether => Some(Coin::Ether),
    EthereumCoin::Erc20(token) => {
      if *token == DAI {
        return Some(Coin::Dai);
      }
      None
    }
  }
}

fn amount_to_serai_amount(coin: Coin, amount: U256) -> Amount {
  assert_eq!(coin.network(), NetworkId::Ethereum);
  assert_eq!(coin.decimals(), 8);
  // Remove 10 decimals so we go from 18 decimals to 8 decimals
  let divisor = U256::from(10_000_000_000u64);
  // This is valid up to 184b, which is assumed for the coins allowed
  Amount(u64::try_from(amount / divisor).unwrap())
}

fn balance_to_ethereum_amount(balance: Balance) -> U256 {
  assert_eq!(balance.coin.network(), NetworkId::Ethereum);
  assert_eq!(balance.coin.decimals(), 8);
  // Restore 10 decimals so we go from 8 decimals to 18 decimals
  let factor = U256::from(10_000_000_000u64);
  U256::from(balance.amount.0) * factor
}
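The two conversion helpers above are inverses up to truncation: going 18 → 8 decimals divides by 10^10 (dropping sub-10^10 dust), and going back multiplies by 10^10. A minimal sketch of the same arithmetic, assuming plain `u128` in place of alloy's `U256` and hypothetical helper names:

```rust
const DECIMAL_SHIFT: u128 = 10_000_000_000; // 10^10: 18 decimals -> 8 decimals

// Mirrors amount_to_serai_amount: truncate an 18-decimal wei-style amount
// down to 8 decimals (the sub-10^10 remainder is lost as dust).
fn to_serai_amount(eth_amount: u128) -> u64 {
  u64::try_from(eth_amount / DECIMAL_SHIFT).unwrap()
}

// Mirrors balance_to_ethereum_amount: restore the 10 dropped decimals.
fn to_ethereum_amount(serai_amount: u64) -> u128 {
  u128::from(serai_amount) * DECIMAL_SHIFT
}

fn main() {
  // 1.5 ETH in wei becomes 1.5 with 8 decimals
  assert_eq!(to_serai_amount(1_500_000_000_000_000_000), 150_000_000);
  // The round-trip restores the amount exactly when there's no dust
  assert_eq!(to_ethereum_amount(150_000_000), 1_500_000_000_000_000_000);
  // Dust below 10^-8 ETH truncates away
  assert_eq!(to_serai_amount(1_500_000_000_000_000_001), 150_000_000);
}
```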
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct Address(pub [u8; 20]);
impl TryFrom<Vec<u8>> for Address {
  type Error = ();
  fn try_from(bytes: Vec<u8>) -> Result<Address, ()> {
    if bytes.len() != 20 {
      Err(())?;
    }
    let mut res = [0; 20];
    res.copy_from_slice(&bytes);
    Ok(Address(res))
  }
}
impl TryInto<Vec<u8>> for Address {
  type Error = ();
  fn try_into(self) -> Result<Vec<u8>, ()> {
    Ok(self.0.to_vec())
  }
}
impl ToString for Address {
  fn to_string(&self) -> String {
    ethereum_serai::alloy_core::primitives::Address::from(self.0).to_string()
  }
}

impl SignableTransaction for RouterCommand {
  fn fee(&self) -> u64 {
    // Return a fee of 0 as we'll handle amortization on our end
    0
  }
}

#[async_trait]
impl<D: Debug + Db> TransactionTrait<Ethereum<D>> for Transaction {
  type Id = [u8; 32];
  fn id(&self) -> Self::Id {
    self.hash.0
  }

  #[cfg(test)]
  async fn fee(&self, _network: &Ethereum<D>) -> u64 {
    // Return a fee of 0 as we'll handle amortization on our end
    0
  }
}

// We use 32-block Epochs to represent blocks.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct Epoch {
  // The hash of the block which ended the prior Epoch.
  prior_end_hash: [u8; 32],
  // The first block number within this Epoch.
  start: u64,
  // The hash of the last block within this Epoch.
  end_hash: [u8; 32],
  // The monotonic time for this Epoch.
  time: u64,
}

impl Epoch {
  fn end(&self) -> u64 {
    self.start + 31
  }
}

#[async_trait]
impl<D: Debug + Db> Block<Ethereum<D>> for Epoch {
  type Id = [u8; 32];
  fn id(&self) -> [u8; 32] {
    self.end_hash
  }
  fn parent(&self) -> [u8; 32] {
    self.prior_end_hash
  }
  async fn time(&self, _: &Ethereum<D>) -> u64 {
    self.time
  }
}

impl<D: Debug + Db> Output<Ethereum<D>> for EthereumInInstruction {
  type Id = [u8; 32];

  fn kind(&self) -> OutputType {
    OutputType::External
  }

  fn id(&self) -> Self::Id {
    let mut id = [0; 40];
    id[.. 32].copy_from_slice(&self.id.0);
    id[32 ..].copy_from_slice(&self.id.1.to_le_bytes());
    *ethereum_serai::alloy_core::primitives::keccak256(id)
  }
  fn tx_id(&self) -> [u8; 32] {
    self.id.0
  }
  fn key(&self) -> <Secp256k1 as Ciphersuite>::G {
    self.key_at_end_of_block
  }

  fn presumed_origin(&self) -> Option<Address> {
    Some(Address(self.from))
  }

  fn balance(&self) -> Balance {
    let coin = coin_to_serai_coin(&self.coin).unwrap_or_else(|| {
      panic!(
        "requesting coin for an EthereumInInstruction with a coin {}",
        "we don't handle. this never should have been yielded"
      )
    });
    Balance { coin, amount: amount_to_serai_amount(coin, self.amount) }
  }
  fn data(&self) -> &[u8] {
    &self.data
  }

  fn write<W: io::Write>(&self, writer: &mut W) -> io::Result<()> {
    EthereumInInstruction::write(self, writer)
  }
  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
    EthereumInInstruction::read(reader)
  }
}

#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Claim {
  signature: [u8; 64],
}
impl AsRef<[u8]> for Claim {
  fn as_ref(&self) -> &[u8] {
    &self.signature
  }
}
impl AsMut<[u8]> for Claim {
  fn as_mut(&mut self) -> &mut [u8] {
    &mut self.signature
  }
}
impl Default for Claim {
  fn default() -> Self {
    Self { signature: [0; 64] }
  }
}
impl From<&Signature> for Claim {
  fn from(sig: &Signature) -> Self {
    Self { signature: sig.to_bytes() }
  }
}

#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Eventuality(PublicKey, RouterCommand);
impl EventualityTrait for Eventuality {
  type Claim = Claim;
  type Completion = SignedRouterCommand;

  fn lookup(&self) -> Vec<u8> {
    match self.1 {
      RouterCommand::UpdateSeraiKey { nonce, .. } | RouterCommand::Execute { nonce, .. } => {
        nonce.as_le_bytes().to_vec()
      }
    }
  }

  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
    let point = Secp256k1::read_G(reader)?;
    let command = RouterCommand::read(reader)?;
    Ok(Eventuality(
      PublicKey::new(point).ok_or(io::Error::other("unusable key within Eventuality"))?,
      command,
    ))
  }
  fn serialize(&self) -> Vec<u8> {
    let mut res = vec![];
    res.extend(self.0.point().to_bytes().as_slice());
    self.1.write(&mut res).unwrap();
    res
  }

  fn claim(completion: &Self::Completion) -> Self::Claim {
    Claim::from(completion.signature())
  }
  fn serialize_completion(completion: &Self::Completion) -> Vec<u8> {
    let mut res = vec![];
    completion.write(&mut res).unwrap();
    res
  }
  fn read_completion<R: io::Read>(reader: &mut R) -> io::Result<Self::Completion> {
    SignedRouterCommand::read(reader)
  }
}

#[derive(Clone, Debug)]
pub struct Ethereum<D: Debug + Db> {
  // This DB is solely used to access the first key generated, as needed to determine the Router's
  // address. Accordingly, all methods present are consistent to a Serai chain with a finalized
  // first key (regardless of local state), and this is safe.
  db: D,
  provider: Arc<RootProvider<SimpleRequest>>,
  deployer: Deployer,
  router: Arc<RwLock<Option<Router>>>,
}
impl<D: Debug + Db> PartialEq for Ethereum<D> {
  fn eq(&self, _other: &Ethereum<D>) -> bool {
    true
  }
}
impl<D: Debug + Db> Ethereum<D> {
  pub async fn new(db: D, url: String) -> Self {
    let provider = Arc::new(RootProvider::new(
      ClientBuilder::default().transport(SimpleRequest::new(url), true),
    ));

    #[cfg(test)] // TODO: Move to test code
    provider.raw_request::<_, ()>("evm_setAutomine".into(), false).await.unwrap();

    let mut deployer = Deployer::new(provider.clone()).await;
    while !matches!(deployer, Ok(Some(_))) {
      log::error!("Deployer wasn't deployed yet or networking error");
      sleep(Duration::from_secs(5)).await;
      deployer = Deployer::new(provider.clone()).await;
    }
    let deployer = deployer.unwrap().unwrap();

    Ethereum { db, provider, deployer, router: Arc::new(RwLock::new(None)) }
  }

  // Obtain a reference to the Router, sleeping until it's deployed if it hasn't already been.
  // This is guaranteed to return Some.
  pub async fn router(&self) -> RwLockReadGuard<'_, Option<Router>> {
    // If we've already instantiated the Router, return a read reference
    {
      let router = self.router.read().await;
      if router.is_some() {
        return router;
      }
    }

    // Instantiate it
    let mut router = self.router.write().await;
    // If another attempt beat us to it, return
    if router.is_some() {
      drop(router);
      return self.router.read().await;
    }

    // Get the first key from the DB
    let first_key =
      NetworkKeyDb::get(&self.db, Session(0)).expect("getting outputs before confirming a key");
    let key = Secp256k1::read_G(&mut first_key.as_slice()).unwrap();
    let public_key = PublicKey::new(key).unwrap();

    // Find the router
    let mut found = self.deployer.find_router(self.provider.clone(), &public_key).await;
    while !matches!(found, Ok(Some(_))) {
      log::error!("Router wasn't deployed yet or networking error");
      sleep(Duration::from_secs(5)).await;
      found = self.deployer.find_router(self.provider.clone(), &public_key).await;
    }

    // Set it
    *router = Some(found.unwrap().unwrap());

    // Downgrade to a read lock
    // Explicitly doesn't use `downgrade` so that another pending write txn can realize it's no
    // longer necessary
    drop(router);
    self.router.read().await
  }
}
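The `router()` method above is a double-checked initialization: take the read lock and return if the slot is populated, otherwise take the write lock, re-check (another writer may have won the race), initialize, and drop back down to a read lock. The same shape can be sketched with `std::sync::RwLock` in place of tokio's async lock, with a `String` standing in for the `Router` (names here are illustrative only):

```rust
use std::sync::RwLock;

struct Lazy {
  slot: RwLock<Option<String>>,
}

impl Lazy {
  fn get(&self) -> String {
    // Fast path: already initialized, so a read lock suffices
    if let Some(v) = self.slot.read().unwrap().as_ref() {
      return v.clone();
    }
    // Slow path: take the write lock and re-check, since another
    // thread may have initialized the slot while we waited
    let mut slot = self.slot.write().unwrap();
    if slot.is_none() {
      *slot = Some("deployed router".to_string());
    }
    slot.as_ref().unwrap().clone()
  }
}

fn main() {
  let lazy = Lazy { slot: RwLock::new(None) };
  assert_eq!(lazy.get(), "deployed router");
  // The second call hits the fast path without taking the write lock
  assert_eq!(lazy.get(), "deployed router");
}
```

Note the original explicitly drops the write guard and re-acquires a read guard instead of downgrading, so competing writers can observe the filled slot and bail out.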
#[async_trait]
impl<D: Debug + Db> Network for Ethereum<D> {
  type Curve = Secp256k1;

  type Transaction = Transaction;
  type Block = Epoch;

  type Output = EthereumInInstruction;
  type SignableTransaction = RouterCommand;
  type Eventuality = Eventuality;
  type TransactionMachine = RouterCommandMachine;

  type Scheduler = Scheduler<Self>;

  type Address = Address;

  const NETWORK: NetworkId = NetworkId::Ethereum;
  const ID: &'static str = "Ethereum";
  const ESTIMATED_BLOCK_TIME_IN_SECONDS: usize = 32 * 12;
  const CONFIRMATIONS: usize = 1;

  const DUST: u64 = 0; // TODO

  const COST_TO_AGGREGATE: u64 = 0;

  // TODO: usize::max, with a merkle tree in the router
  const MAX_OUTPUTS: usize = 256;

  fn tweak_keys(keys: &mut ThresholdKeys<Self::Curve>) {
    while PublicKey::new(keys.group_key()).is_none() {
      *keys = keys.offset(<Secp256k1 as Ciphersuite>::F::ONE);
    }
  }
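`tweak_keys` above rejection-samples: it keeps offsetting the group key by one until `PublicKey::new` accepts it, i.e. until the key is one the on-chain verifier can work with. The control flow can be sketched over plain integers, with evenness standing in for the validity predicate (both names below are hypothetical, and no real curve arithmetic is performed):

```rust
// Stand-in validity predicate; PublicKey::new plays this role in the
// actual code, rejecting keys unusable by the Router's verifier.
fn is_usable(key: u64) -> bool {
  key % 2 == 0
}

// Mirrors tweak_keys: apply an offset of one until the key is usable.
fn tweak(mut key: u64) -> u64 {
  while !is_usable(key) {
    key += 1;
  }
  key
}

fn main() {
  assert_eq!(tweak(6), 6); // already usable, left untouched
  assert_eq!(tweak(7), 8); // offset once
}
```

Since roughly half of all keys are usable, the loop terminates after a small expected number of offsets.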
  #[cfg(test)]
  async fn external_address(&self, _key: <Secp256k1 as Ciphersuite>::G) -> Address {
    Address(self.router().await.as_ref().unwrap().address())
  }

  fn branch_address(_key: <Secp256k1 as Ciphersuite>::G) -> Option<Address> {
    None
  }

  fn change_address(_key: <Secp256k1 as Ciphersuite>::G) -> Option<Address> {
    None
  }

  fn forward_address(_key: <Secp256k1 as Ciphersuite>::G) -> Option<Address> {
    None
  }

  async fn get_latest_block_number(&self) -> Result<usize, NetworkError> {
    let actual_number = self
      .provider
      .get_block(BlockNumberOrTag::Finalized.into(), false)
      .await
      .map_err(|_| NetworkError::ConnectionError)?
      .expect("no blocks were finalized")
      .header
      .number
      .unwrap();
    // Error if there hasn't been a full epoch yet
    if actual_number < 32 {
      Err(NetworkError::ConnectionError)?
    }
    // If this is 33, the division will return 1, yet 1 is the epoch in progress
    let latest_full_epoch = (actual_number / 32).saturating_sub(1);
    Ok(latest_full_epoch.try_into().unwrap())
  }
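The epoch bookkeeping above reduces to two rules: epoch n covers blocks [32n, 32n + 31] (matching `Epoch::end`'s `start + 31`), and the latest complete epoch at finalized block b is b / 32 - 1 (integer division), erroring below block 32. Sketched as plain arithmetic, with hypothetical helper names:

```rust
// First and last block of epoch n, per Epoch::end's `start + 31`
fn epoch_bounds(n: u64) -> (u64, u64) {
  (n * 32, (n * 32) + 31)
}

// Latest fully-finalized epoch, per get_latest_block_number; None
// mirrors the error when no full epoch exists yet
fn latest_full_epoch(finalized_block: u64) -> Option<u64> {
  if finalized_block < 32 {
    return None;
  }
  Some((finalized_block / 32).saturating_sub(1))
}

fn main() {
  assert_eq!(epoch_bounds(0), (0, 31));
  assert_eq!(epoch_bounds(2), (64, 95));
  assert_eq!(latest_full_epoch(31), None);
  // Block 33 sits inside epoch 1, so epoch 0 is the latest complete one,
  // exactly the "If this is 33" case called out in the comment above
  assert_eq!(latest_full_epoch(33), Some(0));
  assert_eq!(latest_full_epoch(64), Some(1));
}
```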
  async fn get_block(&self, number: usize) -> Result<Self::Block, NetworkError> {
    let latest_finalized = self.get_latest_block_number().await?;
    if number > latest_finalized {
      Err(NetworkError::ConnectionError)?
    }

    let start = number * 32;
    let prior_end_hash = if start == 0 {
      [0; 32]
    } else {
      self
        .provider
        .get_block(u64::try_from(start - 1).unwrap().into(), false)
        .await
        .ok()
        .flatten()
        .ok_or(NetworkError::ConnectionError)?
        .header
        .hash
        .unwrap()
        .into()
    };

    let end_header = self
      .provider
      .get_block(u64::try_from(start + 31).unwrap().into(), false)
      .await
      .ok()
      .flatten()
      .ok_or(NetworkError::ConnectionError)?
      .header;

    let end_hash = end_header.hash.unwrap().into();
    let time = end_header.timestamp;

    Ok(Epoch { prior_end_hash, start: start.try_into().unwrap(), end_hash, time })
  }

  async fn get_outputs(
    &self,
    block: &Self::Block,
    _: <Secp256k1 as Ciphersuite>::G,
  ) -> Vec<Self::Output> {
    let router = self.router().await;
    let router = router.as_ref().unwrap();

    // TODO: Top-level transfers

    let mut all_events = vec![];
    for block in block.start .. (block.start + 32) {
      let mut events = router.in_instructions(block, &HashSet::from([DAI])).await;
      while let Err(e) = events {
        log::error!("couldn't connect to Ethereum node for the Router's events: {e:?}");
        sleep(Duration::from_secs(5)).await;
        events = router.in_instructions(block, &HashSet::from([DAI])).await;
      }
      all_events.extend(events.unwrap());
    }

    for event in &all_events {
      assert!(
        coin_to_serai_coin(&event.coin).is_some(),
        "router yielded events for unrecognized coins"
      );
    }
    all_events
  }

  async fn get_eventuality_completions(
    &self,
    eventualities: &mut EventualitiesTracker<Self::Eventuality>,
    block: &Self::Block,
  ) -> HashMap<
    [u8; 32],
    (
      usize,
      <Self::Transaction as TransactionTrait<Self>>::Id,
      <Self::Eventuality as EventualityTrait>::Completion,
    ),
  > {
    let mut res = HashMap::new();
    if eventualities.map.is_empty() {
      return res;
    }

    let router = self.router().await;
    let router = router.as_ref().unwrap();

    let past_scanned_epoch = loop {
      match self.get_block(eventualities.block_number).await {
        Ok(block) => break block,
        Err(e) => log::error!("couldn't get the last scanned block in the tracker: {}", e),
      }
      sleep(Duration::from_secs(10)).await;
    };
    assert_eq!(
      past_scanned_epoch.start / 32,
      u64::try_from(eventualities.block_number).unwrap(),
      "assumption of tracker block number's relation to epoch start is incorrect"
    );

    // Iterate from after the epoch number in the tracker to the end of this epoch
    for block_num in (past_scanned_epoch.end() + 1) ..= block.end() {
      let executed = loop {
        match router.executed_commands(block_num).await {
          Ok(executed) => break executed,
          Err(e) => log::error!("couldn't get the executed commands in block {block_num}: {e}"),
        }
        sleep(Duration::from_secs(10)).await;
      };

      for executed in executed {
        let lookup = executed.nonce.to_le_bytes().to_vec();
        if let Some((plan_id, eventuality)) = eventualities.map.get(&lookup) {
          if let Some(command) =
            SignedRouterCommand::new(&eventuality.0, eventuality.1.clone(), &executed.signature)
          {
            res.insert(*plan_id, (block_num.try_into().unwrap(), executed.tx_id, command));
            eventualities.map.remove(&lookup);
          }
        }
      }
    }
    eventualities.block_number = (block.start / 32).try_into().unwrap();

    res
  }

  async fn needed_fee(
    &self,
    _block_number: usize,
    inputs: &[Self::Output],
    _payments: &[Payment<Self>],
    _change: &Option<Self::Address>,
  ) -> Result<Option<u64>, NetworkError> {
    assert_eq!(inputs.len(), 0);
    // Claim no fee is needed so we can perform amortization ourselves
    Ok(Some(0))
  }

  async fn signable_transaction(
    &self,
    _block_number: usize,
    _plan_id: &[u8; 32],
    key: <Self::Curve as Ciphersuite>::G,
    inputs: &[Self::Output],
    payments: &[Payment<Self>],
    change: &Option<Self::Address>,
    scheduler_addendum: &<Self::Scheduler as SchedulerTrait<Self>>::Addendum,
  ) -> Result<Option<(Self::SignableTransaction, Self::Eventuality)>, NetworkError> {
    assert_eq!(inputs.len(), 0);
    assert!(change.is_none());
    let chain_id = self.provider.get_chain_id().await.map_err(|_| NetworkError::ConnectionError)?;

    // TODO: Perform fee amortization (in scheduler?
    // TODO: Make this function internal and have needed_fee properly return None as expected?
    // TODO: signable_transaction is written as cannot return None if needed_fee returns Some
    // TODO: Why can this return None at all if it isn't allowed to return None?

    let command = match scheduler_addendum {
      Addendum::Nonce(nonce) => RouterCommand::Execute {
        chain_id: U256::try_from(chain_id).unwrap(),
        nonce: U256::try_from(*nonce).unwrap(),
        outs: payments
          .iter()
          .filter_map(|payment| {
            Some(OutInstruction {
              target: if let Some(data) = payment.data.as_ref() {
                // This introspects the Call serialization format, expecting the first 20 bytes to
                // be the address
                // This avoids wasting the 20-bytes allocated within address
                let full_data = [payment.address.0.as_slice(), data].concat();
                let mut reader = full_data.as_slice();

                let mut calls = vec![];
                while !reader.is_empty() {
                  calls.push(Call::read(&mut reader).ok()?)
                }
                // The above must have executed at least once since reader contains the address
                assert_eq!(calls[0].to, payment.address.0);

                OutInstructionTarget::Calls(calls)
              } else {
                OutInstructionTarget::Direct(payment.address.0)
              },
              value: {
                assert_eq!(payment.balance.coin, Coin::Ether); // TODO
                balance_to_ethereum_amount(payment.balance)
              },
            })
          })
          .collect(),
      },
      Addendum::RotateTo { nonce, new_key } => {
        assert!(payments.is_empty());
        RouterCommand::UpdateSeraiKey {
          chain_id: U256::try_from(chain_id).unwrap(),
          nonce: U256::try_from(*nonce).unwrap(),
          key: PublicKey::new(*new_key).expect("new key wasn't a valid ETH public key"),
        }
      }
    };
    Ok(Some((
      command.clone(),
      Eventuality(PublicKey::new(key).expect("key wasn't a valid ETH public key"), command),
    )))
  }

  async fn attempt_sign(
    &self,
    keys: ThresholdKeys<Self::Curve>,
    transaction: Self::SignableTransaction,
  ) -> Result<Self::TransactionMachine, NetworkError> {
    Ok(
      RouterCommandMachine::new(keys, transaction)
        .expect("keys weren't usable to sign router commands"),
    )
  }

  async fn publish_completion(
    &self,
    completion: &<Self::Eventuality as EventualityTrait>::Completion,
  ) -> Result<(), NetworkError> {
    // Publish this to the dedicated TX server for a solver to actually publish
    #[cfg(not(test))]
    {
      let _ = completion;
      todo!("TODO");
    }

    // Publish this using a dummy account we fund with magic RPC commands
    #[cfg(test)]
    {
      use rand_core::OsRng;
      use ciphersuite::group::ff::Field;

      let key = <Secp256k1 as Ciphersuite>::F::random(&mut OsRng);
      let address = ethereum_serai::crypto::address(&(Secp256k1::generator() * key));

      // Set a 1.1 ETH balance
      self
        .provider
        .raw_request::<_, ()>(
          "anvil_setBalance".into(),
          [Address(address).to_string(), "1100000000000000000".into()],
        )
        .await
        .unwrap();

      let router = self.router().await;
      let router = router.as_ref().unwrap();

      let mut tx = match completion.command() {
        RouterCommand::UpdateSeraiKey { key, .. } => {
          router.update_serai_key(key, completion.signature())
        }
        RouterCommand::Execute { outs, .. } => router.execute(
          &outs.iter().cloned().map(Into::into).collect::<Vec<_>>(),
          completion.signature(),
        ),
      };
      tx.gas_price = 100_000_000_000u128;

      use ethereum_serai::alloy_consensus::SignableTransaction;
      let sig =
        k256::ecdsa::SigningKey::from(k256::elliptic_curve::NonZeroScalar::new(key).unwrap())
          .sign_prehash_recoverable(tx.signature_hash().as_ref())
          .unwrap();

      let mut bytes = vec![];
      tx.encode_with_signature_fields(&sig.into(), &mut bytes);
      let _ = self.provider.send_raw_transaction(&bytes).await.ok().unwrap();

      Ok(())
    }
  }

  async fn confirm_completion(
    &self,
    eventuality: &Self::Eventuality,
    claim: &<Self::Eventuality as EventualityTrait>::Claim,
  ) -> Result<Option<<Self::Eventuality as EventualityTrait>::Completion>, NetworkError> {
    Ok(SignedRouterCommand::new(&eventuality.0, eventuality.1.clone(), &claim.signature))
  }

  #[cfg(test)]
  async fn get_block_number(&self, id: &<Self::Block as Block<Self>>::Id) -> usize {
    self
      .provider
      .get_block(B256::from(*id).into(), false)
      .await
      .unwrap()
      .unwrap()
      .header
      .number
      .unwrap()
      .try_into()
      .unwrap()
  }

  #[cfg(test)]
  async fn check_eventuality_by_claim(
    &self,
    eventuality: &Self::Eventuality,
    claim: &<Self::Eventuality as EventualityTrait>::Claim,
|
||||||
|
) -> bool {
|
||||||
|
SignedRouterCommand::new(&eventuality.0, eventuality.1.clone(), &claim.signature).is_some()
|
||||||
|
}
|
||||||
|
|
||||||
|
#[cfg(test)]
|
||||||
|
async fn get_transaction_by_eventuality(
|
||||||
|
&self,
|
||||||
|
block: usize,
|
||||||
|
eventuality: &Self::Eventuality,
|
||||||
|
) -> Self::Transaction {
|
||||||
|
match eventuality.1 {
|
||||||
|
RouterCommand::UpdateSeraiKey { nonce, .. } | RouterCommand::Execute { nonce, .. } => {
|
||||||
|
let router = self.router().await;
|
||||||
|
let router = router.as_ref().unwrap();
|
||||||
|
|
||||||
|
let block = u64::try_from(block).unwrap();
|
||||||
|
let filter = router
|
||||||
|
.key_updated_filter()
|
||||||
|
.from_block(block * 32)
|
||||||
|
.to_block(((block + 1) * 32) - 1)
|
||||||
|
.topic1(nonce);
|
||||||
|
let logs = self.provider.get_logs(&filter).await.unwrap();
|
||||||
|
if let Some(log) = logs.first() {
|
||||||
|
return self
|
||||||
|
.provider
|
||||||
|
.get_transaction_by_hash(log.clone().transaction_hash.unwrap())
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
};
|
||||||
|
|
||||||
|
let filter = router
|
||||||
|
.executed_filter()
|
||||||
|
.from_block(block * 32)
|
||||||
|
.to_block(((block + 1) * 32) - 1)
|
||||||
|
.topic1(nonce);
|
||||||
|
let logs = self.provider.get_logs(&filter).await.unwrap();
|
||||||
|
self.provider.get_transaction_by_hash(logs[0].transaction_hash.unwrap()).await.unwrap()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#[cfg(test)]
|
||||||
|
async fn mine_block(&self) {
|
||||||
|
self.provider.raw_request::<_, ()>("anvil_mine".into(), [32]).await.unwrap();
|
||||||
|
}
|
||||||
|
|
||||||
|
#[cfg(test)]
|
||||||
|
async fn test_send(&self, send_to: Self::Address) -> Self::Block {
|
||||||
|
use rand_core::OsRng;
|
||||||
|
use ciphersuite::group::ff::Field;
|
||||||
|
|
||||||
|
let key = <Secp256k1 as Ciphersuite>::F::random(&mut OsRng);
|
||||||
|
let address = ethereum_serai::crypto::address(&(Secp256k1::generator() * key));
|
||||||
|
|
||||||
|
// Set a 1.1 ETH balance
|
||||||
|
self
|
||||||
|
.provider
|
||||||
|
.raw_request::<_, ()>(
|
||||||
|
"anvil_setBalance".into(),
|
||||||
|
[Address(address).to_string(), "1100000000000000000".into()],
|
||||||
|
)
|
||||||
|
.await
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let tx = ethereum_serai::alloy_consensus::TxLegacy {
|
||||||
|
chain_id: None,
|
||||||
|
nonce: 0,
|
||||||
|
gas_price: 100_000_000_000u128,
|
||||||
|
gas_limit: 21_0000u128,
|
||||||
|
to: ethereum_serai::alloy_core::primitives::TxKind::Call(send_to.0.into()),
|
||||||
|
// 1 ETH
|
||||||
|
value: U256::from_str_radix("1000000000000000000", 10).unwrap(),
|
||||||
|
input: vec![].into(),
|
||||||
|
};
|
||||||
|
|
||||||
|
use ethereum_serai::alloy_consensus::SignableTransaction;
|
||||||
|
let sig = k256::ecdsa::SigningKey::from(k256::elliptic_curve::NonZeroScalar::new(key).unwrap())
|
||||||
|
.sign_prehash_recoverable(tx.signature_hash().as_ref())
|
||||||
|
.unwrap();
|
||||||
|
|
||||||
|
let mut bytes = vec![];
|
||||||
|
tx.encode_with_signature_fields(&sig.into(), &mut bytes);
|
||||||
|
let pending_tx = self.provider.send_raw_transaction(&bytes).await.ok().unwrap();
|
||||||
|
|
||||||
|
// Mine an epoch containing this TX
|
||||||
|
self.mine_block().await;
|
||||||
|
assert!(pending_tx.get_receipt().await.unwrap().status());
|
||||||
|
// Yield the freshly mined block
|
||||||
|
self.get_block(self.get_latest_block_number().await.unwrap()).await.unwrap()
|
||||||
|
}
|
||||||
|
}
|
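The test helpers above treat 32 consecutive Ethereum blocks as one processor-level epoch: `mine_block` mines 32 blocks at once, and `get_transaction_by_eventuality` filters logs from `block * 32` through `((block + 1) * 32) - 1`. A minimal sketch of that range math (the `EPOCH_LEN` constant and `epoch_block_range` helper are illustrative stand-ins, not part of the codebase):

```rust
// Assumption: an epoch is 32 Ethereum blocks, matching the `* 32` constants
// in the log filters and the `anvil_mine` call above.
const EPOCH_LEN: u64 = 32;

/// Inclusive range of Ethereum block numbers covered by a given epoch.
fn epoch_block_range(epoch: u64) -> (u64, u64) {
  (epoch * EPOCH_LEN, ((epoch + 1) * EPOCH_LEN) - 1)
}

fn main() {
  // Epoch 0 covers blocks 0..=31; epoch 2 covers blocks 64..=95
  assert_eq!(epoch_block_range(0), (0, 31));
  assert_eq!(epoch_block_range(2), (64, 95));
  println!("{:?}", epoch_block_range(2));
}
```

Since epochs are disjoint and contiguous, a log filter bounded by this range sees each event exactly once across successive epochs.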
@@ -21,12 +21,17 @@ pub mod bitcoin;
 #[cfg(feature = "bitcoin")]
 pub use self::bitcoin::Bitcoin;
 
+#[cfg(feature = "ethereum")]
+pub mod ethereum;
+#[cfg(feature = "ethereum")]
+pub use ethereum::Ethereum;
+
 #[cfg(feature = "monero")]
 pub mod monero;
 #[cfg(feature = "monero")]
 pub use monero::Monero;
 
-use crate::{Payment, Plan};
+use crate::{Payment, Plan, multisigs::scheduler::Scheduler};
 
 #[derive(Clone, Copy, Error, Debug)]
 pub enum NetworkError {
@@ -105,7 +110,7 @@ pub trait Output<N: Network>: Send + Sync + Sized + Clone + PartialEq + Eq + Deb
   fn kind(&self) -> OutputType;
 
   fn id(&self) -> Self::Id;
-  fn tx_id(&self) -> <N::Transaction as Transaction<N>>::Id;
+  fn tx_id(&self) -> <N::Transaction as Transaction<N>>::Id; // TODO: Review use of
   fn key(&self) -> <N::Curve as Ciphersuite>::G;
 
   fn presumed_origin(&self) -> Option<N::Address>;
@@ -118,25 +123,33 @@ pub trait Output<N: Network>: Send + Sync + Sized + Clone + PartialEq + Eq + Deb
 }
 
 #[async_trait]
-pub trait Transaction<N: Network>: Send + Sync + Sized + Clone + Debug {
+pub trait Transaction<N: Network>: Send + Sync + Sized + Clone + PartialEq + Debug {
   type Id: 'static + Id;
   fn id(&self) -> Self::Id;
-  fn serialize(&self) -> Vec<u8>;
-  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self>;
+  // TODO: Move to Balance
 
   #[cfg(test)]
   async fn fee(&self, network: &N) -> u64;
 }
 
 pub trait SignableTransaction: Send + Sync + Clone + Debug {
+  // TODO: Move to Balance
   fn fee(&self) -> u64;
 }
 
-pub trait Eventuality: Send + Sync + Clone + Debug {
+pub trait Eventuality: Send + Sync + Clone + PartialEq + Debug {
+  type Claim: Send + Sync + Clone + PartialEq + Default + AsRef<[u8]> + AsMut<[u8]> + Debug;
+  type Completion: Send + Sync + Clone + PartialEq + Debug;
+
   fn lookup(&self) -> Vec<u8>;
 
   fn read<R: io::Read>(reader: &mut R) -> io::Result<Self>;
   fn serialize(&self) -> Vec<u8>;
+
+  fn claim(completion: &Self::Completion) -> Self::Claim;
+
+  // TODO: Make a dedicated Completion trait
+  fn serialize_completion(completion: &Self::Completion) -> Vec<u8>;
+  fn read_completion<R: io::Read>(reader: &mut R) -> io::Result<Self::Completion>;
 }
 
 #[derive(Clone, PartialEq, Eq, Debug)]
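The hunk above splits an Eventuality's resolution into a compact `Claim` (a small, byte-comparable datum) and a full `Completion` (the signed artifact), with `claim` derived from a completion plus (de)serialization helpers for the latter. A toy, std-only sketch of that shape — every type and function here is a hypothetical stand-in, not serai's actual code:

```rust
use std::io::{self, Read};

/// Toy stand-in for the full signed artifact a network publishes.
#[derive(Clone, Debug, PartialEq)]
struct Completion {
  payload: Vec<u8>,
}

/// Toy stand-in for the compact claim (e.g. a TX ID) derived from a completion.
#[derive(Clone, Debug, PartialEq, Default)]
struct Claim(u64);

struct ToyEventuality;

impl ToyEventuality {
  // Mirrors `fn claim(completion: &Self::Completion) -> Self::Claim`:
  // deterministically derive the compact claim from the full completion.
  fn claim(completion: &Completion) -> Claim {
    // Trivial digest for illustration; a real network would use the TX hash
    Claim(completion.payload.iter().map(|b| u64::from(*b)).sum())
  }

  // Mirrors `serialize_completion`/`read_completion`: length-prefixed bytes
  fn serialize_completion(completion: &Completion) -> Vec<u8> {
    let mut res = (completion.payload.len() as u64).to_le_bytes().to_vec();
    res.extend(&completion.payload);
    res
  }

  fn read_completion<R: Read>(reader: &mut R) -> io::Result<Completion> {
    let mut len = [0; 8];
    reader.read_exact(&mut len)?;
    let mut payload = vec![0; usize::try_from(u64::from_le_bytes(len)).unwrap()];
    reader.read_exact(&mut payload)?;
    Ok(Completion { payload })
  }
}

fn main() {
  let completion = Completion { payload: vec![1, 2, 3] };
  let claim = ToyEventuality::claim(&completion);

  // The claim is stable across a serialization round-trip
  let bytes = ToyEventuality::serialize_completion(&completion);
  let read = ToyEventuality::read_completion(&mut bytes.as_slice()).unwrap();
  assert_eq!(read, completion);
  assert_eq!(ToyEventuality::claim(&read), claim);
}
```

The point of the split is that signers only need to gossip the small claim; anyone holding the eventuality can then recover or verify the full completion, as `confirm_completion` does above.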
@@ -211,7 +224,7 @@ fn drop_branches<N: Network>(
 ) -> Vec<PostFeeBranch> {
   let mut branch_outputs = vec![];
   for payment in payments {
-    if payment.address == N::branch_address(key) {
+    if Some(&payment.address) == N::branch_address(key).as_ref() {
       branch_outputs.push(PostFeeBranch { expected: payment.balance.amount.0, actual: None });
     }
   }
@@ -227,12 +240,12 @@ pub struct PreparedSend<N: Network> {
 }
 
 #[async_trait]
-pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
+pub trait Network: 'static + Send + Sync + Clone + PartialEq + Debug {
   /// The elliptic curve used for this network.
   type Curve: Curve;
 
   /// The type representing the transaction for this network.
-  type Transaction: Transaction<Self>;
+  type Transaction: Transaction<Self>; // TODO: Review use of
   /// The type representing the block for this network.
   type Block: Block<Self>;
 
@@ -246,7 +259,12 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
   /// This must be binding to both the outputs expected and the plan ID.
   type Eventuality: Eventuality;
   /// The FROST machine to sign a transaction.
-  type TransactionMachine: PreprocessMachine<Signature = Self::Transaction>;
+  type TransactionMachine: PreprocessMachine<
+    Signature = <Self::Eventuality as Eventuality>::Completion,
+  >;
+
+  /// The scheduler for this network.
+  type Scheduler: Scheduler<Self>;
 
   /// The type representing an address.
   // This should NOT be a String, yet a tailored type representing an efficient binary encoding,
@@ -269,10 +287,6 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
   const ESTIMATED_BLOCK_TIME_IN_SECONDS: usize;
   /// The amount of confirmations required to consider a block 'final'.
   const CONFIRMATIONS: usize;
-  /// The maximum amount of inputs which will fit in a TX.
-  /// This should be equal to MAX_OUTPUTS unless one is specifically limited.
-  /// A TX with MAX_INPUTS and MAX_OUTPUTS must not exceed the max size.
-  const MAX_INPUTS: usize;
   /// The maximum amount of outputs which will fit in a TX.
   /// This should be equal to MAX_INPUTS unless one is specifically limited.
   /// A TX with MAX_INPUTS and MAX_OUTPUTS must not exceed the max size.
@@ -293,13 +307,16 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
   fn tweak_keys(key: &mut ThresholdKeys<Self::Curve>);
 
   /// Address for the given group key to receive external coins to.
-  fn external_address(key: <Self::Curve as Ciphersuite>::G) -> Self::Address;
+  #[cfg(test)]
+  async fn external_address(&self, key: <Self::Curve as Ciphersuite>::G) -> Self::Address;
   /// Address for the given group key to use for scheduled branches.
-  fn branch_address(key: <Self::Curve as Ciphersuite>::G) -> Self::Address;
+  fn branch_address(key: <Self::Curve as Ciphersuite>::G) -> Option<Self::Address>;
   /// Address for the given group key to use for change.
-  fn change_address(key: <Self::Curve as Ciphersuite>::G) -> Self::Address;
+  fn change_address(key: <Self::Curve as Ciphersuite>::G) -> Option<Self::Address>;
   /// Address for forwarded outputs from prior multisigs.
-  fn forward_address(key: <Self::Curve as Ciphersuite>::G) -> Self::Address;
+  ///
+  /// forward_address must only return None if explicit forwarding isn't necessary.
+  fn forward_address(key: <Self::Curve as Ciphersuite>::G) -> Option<Self::Address>;
 
   /// Get the latest block's number.
   async fn get_latest_block_number(&self) -> Result<usize, NetworkError>;
@@ -349,13 +366,24 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
   /// registered eventualities may have been completed in.
   ///
   /// This may panic if not fed a block greater than the tracker's block number.
+  ///
+  /// Plan ID -> (block number, TX ID, completion)
   // TODO: get_eventuality_completions_internal + provided get_eventuality_completions for common
   // code
+  // TODO: Consider having this return the Transaction + the Completion?
+  // Or Transaction with extract_completion?
   async fn get_eventuality_completions(
     &self,
     eventualities: &mut EventualitiesTracker<Self::Eventuality>,
     block: &Self::Block,
-  ) -> HashMap<[u8; 32], (usize, Self::Transaction)>;
+  ) -> HashMap<
+    [u8; 32],
+    (
+      usize,
+      <Self::Transaction as Transaction<Self>>::Id,
+      <Self::Eventuality as Eventuality>::Completion,
+    ),
+  >;
 
   /// Returns the needed fee to fulfill this Plan at this fee rate.
   ///
@@ -363,7 +391,6 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
   async fn needed_fee(
     &self,
     block_number: usize,
-    plan_id: &[u8; 32],
     inputs: &[Self::Output],
     payments: &[Payment<Self>],
     change: &Option<Self::Address>,
@@ -375,16 +402,25 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
   /// 1) Call needed_fee
   /// 2) If the Plan is fulfillable, amortize the fee
   /// 3) Call signable_transaction *which MUST NOT return None if the above was done properly*
+  ///
+  /// This takes a destructured Plan as some of these arguments are malleated from the original
+  /// Plan.
+  // TODO: Explicit AmortizedPlan?
+  #[allow(clippy::too_many_arguments)]
   async fn signable_transaction(
     &self,
     block_number: usize,
     plan_id: &[u8; 32],
+    key: <Self::Curve as Ciphersuite>::G,
     inputs: &[Self::Output],
     payments: &[Payment<Self>],
     change: &Option<Self::Address>,
+    scheduler_addendum: &<Self::Scheduler as Scheduler<Self>>::Addendum,
   ) -> Result<Option<(Self::SignableTransaction, Self::Eventuality)>, NetworkError>;
 
   /// Prepare a SignableTransaction for a transaction.
+  ///
+  /// This must not persist anything as we will prepare Plans we never intend to execute.
   async fn prepare_send(
     &self,
     block_number: usize,
@@ -395,13 +431,12 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
     assert!((!plan.payments.is_empty()) || plan.change.is_some());
 
     let plan_id = plan.id();
-    let Plan { key, inputs, mut payments, change } = plan;
+    let Plan { key, inputs, mut payments, change, scheduler_addendum } = plan;
     let theoretical_change_amount =
       inputs.iter().map(|input| input.balance().amount.0).sum::<u64>() -
         payments.iter().map(|payment| payment.balance.amount.0).sum::<u64>();
 
-    let Some(tx_fee) = self.needed_fee(block_number, &plan_id, &inputs, &payments, &change).await?
-    else {
+    let Some(tx_fee) = self.needed_fee(block_number, &inputs, &payments, &change).await? else {
       // This Plan is not fulfillable
       // TODO: Have Plan explicitly distinguish payments and branches in two separate Vecs?
       return Ok(PreparedSend {
@@ -466,7 +501,7 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
     // Note the branch outputs' new values
     let mut branch_outputs = vec![];
     for (initial_amount, payment) in initial_payment_amounts.into_iter().zip(&payments) {
-      if payment.address == Self::branch_address(key) {
+      if Some(&payment.address) == Self::branch_address(key).as_ref() {
        branch_outputs.push(PostFeeBranch {
          expected: initial_amount,
          actual: if payment.balance.amount.0 == 0 {
@@ -508,11 +543,20 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
       )
     })();
 
-    let Some(tx) =
-      self.signable_transaction(block_number, &plan_id, &inputs, &payments, &change).await?
+    let Some(tx) = self
+      .signable_transaction(
+        block_number,
+        &plan_id,
+        key,
+        &inputs,
+        &payments,
+        &change,
+        &scheduler_addendum,
+      )
+      .await?
     else {
       panic!(
-        "{}. {}: {}, {}: {:?}, {}: {:?}, {}: {:?}, {}: {}",
+        "{}. {}: {}, {}: {:?}, {}: {:?}, {}: {:?}, {}: {}, {}: {:?}",
         "signable_transaction returned None for a TX we prior successfully calculated the fee for",
         "id",
         hex::encode(plan_id),
@@ -524,6 +568,8 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
         change,
         "successfully amoritized fee",
         tx_fee,
+        "scheduler's addendum",
+        scheduler_addendum,
       )
     };
 
@@ -546,31 +592,49 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
   }
 
   /// Attempt to sign a SignableTransaction.
-  async fn attempt_send(
+  async fn attempt_sign(
     &self,
     keys: ThresholdKeys<Self::Curve>,
     transaction: Self::SignableTransaction,
   ) -> Result<Self::TransactionMachine, NetworkError>;
 
-  /// Publish a transaction.
-  async fn publish_transaction(&self, tx: &Self::Transaction) -> Result<(), NetworkError>;
-
-  /// Get a transaction by its ID.
-  async fn get_transaction(
+  /// Publish a completion.
+  async fn publish_completion(
     &self,
-    id: &<Self::Transaction as Transaction<Self>>::Id,
-  ) -> Result<Self::Transaction, NetworkError>;
+    completion: &<Self::Eventuality as Eventuality>::Completion,
+  ) -> Result<(), NetworkError>;
 
-  /// Confirm a plan was completed by the specified transaction.
-  // This is allowed to take shortcuts.
-  // This may assume an honest multisig, solely checking the inputs specified were spent.
-  // This may solely check the outputs are equivalent *so long as it's locked to the plan ID*.
-  fn confirm_completion(&self, eventuality: &Self::Eventuality, tx: &Self::Transaction) -> bool;
+  /// Confirm a plan was completed by the specified transaction, per our bounds.
+  ///
+  /// Returns Err if there was an error with the confirmation methodology.
+  /// Returns Ok(None) if this is not a valid completion.
+  /// Returns Ok(Some(_)) with the completion if it's valid.
+  async fn confirm_completion(
+    &self,
+    eventuality: &Self::Eventuality,
+    claim: &<Self::Eventuality as Eventuality>::Claim,
+  ) -> Result<Option<<Self::Eventuality as Eventuality>::Completion>, NetworkError>;
 
   /// Get a block's number by its ID.
   #[cfg(test)]
   async fn get_block_number(&self, id: &<Self::Block as Block<Self>>::Id) -> usize;
 
+  /// Check an Eventuality is fulfilled by a claim.
+  #[cfg(test)]
+  async fn check_eventuality_by_claim(
+    &self,
+    eventuality: &Self::Eventuality,
+    claim: &<Self::Eventuality as Eventuality>::Claim,
+  ) -> bool;
+
+  /// Get a transaction by the Eventuality it completes.
+  #[cfg(test)]
+  async fn get_transaction_by_eventuality(
+    &self,
+    block: usize,
+    eventuality: &Self::Eventuality,
+  ) -> Self::Transaction;
+
   #[cfg(test)]
   async fn mine_block(&self);
 
@@ -579,3 +643,10 @@ pub trait Network: 'static + Send + Sync + Clone + PartialEq + Eq + Debug {
   #[cfg(test)]
   async fn test_send(&self, key: Self::Address) -> Self::Block;
 }
+
+pub trait UtxoNetwork: Network {
+  /// The maximum amount of inputs which will fit in a TX.
+  /// This should be equal to MAX_OUTPUTS unless one is specifically limited.
+  /// A TX with MAX_INPUTS and MAX_OUTPUTS must not exceed the max size.
+  const MAX_INPUTS: usize;
+}
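The `prepare_send` flow documented in the hunks above is: call `needed_fee`, amortize that fee across the plan's payments if the plan is fulfillable, then call `signable_transaction` (which must not return None at that point). A std-only sketch of one possible even amortization — this `amortize_fee` helper is hypothetical and is not serai's actual amortization code:

```rust
// Split a TX fee evenly across a plan's payment amounts, assigning the
// remainder one unit at a time to the leading payments. A payment whose value
// can't cover its share saturates to 0 (the caller would then drop it as
// unfulfillable, as drop_branches does for branch outputs).
fn amortize_fee(payment_amounts: &mut [u64], tx_fee: u64) {
  if payment_amounts.is_empty() {
    return;
  }
  let per_payment = tx_fee / payment_amounts.len() as u64;
  let mut remainder = tx_fee % payment_amounts.len() as u64;
  for amount in payment_amounts.iter_mut() {
    let mut this_fee = per_payment;
    if remainder != 0 {
      this_fee += 1;
      remainder -= 1;
    }
    *amount = amount.saturating_sub(this_fee);
  }
}

fn main() {
  let mut amounts = [1_000, 500, 7];
  amortize_fee(&mut amounts, 10);
  // 10 / 3 = 3 each; the remainder of 1 goes to the first payment
  assert_eq!(amounts, [996, 497, 4]);
  println!("{:?}", amounts);
}
```

Amortizing before `signable_transaction` is what lets that function promise Some(_): the fee was already proven payable by `needed_fee` and already deducted from the outputs.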
@@ -39,8 +39,9 @@ use crate::{
   networks::{
     NetworkError, Block as BlockTrait, OutputType, Output as OutputTrait,
     Transaction as TransactionTrait, SignableTransaction as SignableTransactionTrait,
-    Eventuality as EventualityTrait, EventualitiesTracker, Network,
+    Eventuality as EventualityTrait, EventualitiesTracker, Network, UtxoNetwork,
   },
+  multisigs::scheduler::utxo::Scheduler,
 };
 
 #[derive(Clone, PartialEq, Eq, Debug)]
@@ -117,12 +118,6 @@ impl TransactionTrait<Monero> for Transaction {
   fn id(&self) -> Self::Id {
     self.hash()
   }
-  fn serialize(&self) -> Vec<u8> {
-    self.serialize()
-  }
-  fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {
-    Transaction::read(reader)
-  }
 
   #[cfg(test)]
   async fn fee(&self, _: &Monero) -> u64 {
@@ -131,6 +126,9 @@ impl TransactionTrait<Monero> for Transaction {
 }
 
 impl EventualityTrait for Eventuality {
+  type Claim = [u8; 32];
+  type Completion = Transaction;
+
   // Use the TX extra to look up potential matches
   // While anyone can forge this, a transaction with distinct outputs won't actually match
   // Extra includess the one time keys which are derived from the plan ID, so a collision here is a
@@ -145,6 +143,16 @@ impl EventualityTrait for Eventuality {
   fn serialize(&self) -> Vec<u8> {
     self.serialize()
   }
+
+  fn claim(tx: &Transaction) -> [u8; 32] {
+    tx.id()
+  }
+  fn serialize_completion(completion: &Transaction) -> Vec<u8> {
+    completion.serialize()
+  }
+  fn read_completion<R: io::Read>(reader: &mut R) -> io::Result<Transaction> {
+    Transaction::read(reader)
+  }
 }
 
 #[derive(Clone, Debug)]
@@ -274,7 +282,8 @@ impl Monero {
   async fn median_fee(&self, block: &Block) -> Result<Fee, NetworkError> {
     let mut fees = vec![];
     for tx_hash in &block.txs {
-      let tx = self.get_transaction(tx_hash).await?;
+      let tx =
+        self.rpc.get_transaction(*tx_hash).await.map_err(|_| NetworkError::ConnectionError)?;
       // Only consider fees from RCT transactions, else the fee property read wouldn't be accurate
       if tx.rct_signatures.rct_type() != RctType::Null {
         continue;
@@ -454,6 +463,8 @@ impl Network for Monero {
   type Eventuality = Eventuality;
   type TransactionMachine = TransactionMachine;
 
+  type Scheduler = Scheduler<Monero>;
+
   type Address = Address;
 
   const NETWORK: NetworkId = NetworkId::Monero;
@@ -461,11 +472,6 @@ impl Network for Monero {
   const ESTIMATED_BLOCK_TIME_IN_SECONDS: usize = 120;
   const CONFIRMATIONS: usize = 10;
 
-  // wallet2 will not create a transaction larger than 100kb, and Monero won't relay a transaction
-  // larger than 150kb. This fits within the 100kb mark
-  // Technically, it can be ~124, yet a small bit of buffer is appreciated
-  // TODO: Test creating a TX this big
-  const MAX_INPUTS: usize = 120;
   const MAX_OUTPUTS: usize = 16;
 
   // 0.01 XMR
@@ -478,20 +484,21 @@ impl Network for Monero {
   // Monero doesn't require/benefit from tweaking
   fn tweak_keys(_: &mut ThresholdKeys<Self::Curve>) {}
 
-  fn external_address(key: EdwardsPoint) -> Address {
+  #[cfg(test)]
+  async fn external_address(&self, key: EdwardsPoint) -> Address {
     Self::address_internal(key, EXTERNAL_SUBADDRESS)
   }
 
-  fn branch_address(key: EdwardsPoint) -> Address {
-    Self::address_internal(key, BRANCH_SUBADDRESS)
+  fn branch_address(key: EdwardsPoint) -> Option<Address> {
+    Some(Self::address_internal(key, BRANCH_SUBADDRESS))
   }
 
-  fn change_address(key: EdwardsPoint) -> Address {
-    Self::address_internal(key, CHANGE_SUBADDRESS)
+  fn change_address(key: EdwardsPoint) -> Option<Address> {
+    Some(Self::address_internal(key, CHANGE_SUBADDRESS))
   }
 
-  fn forward_address(key: EdwardsPoint) -> Address {
-    Self::address_internal(key, FORWARD_SUBADDRESS)
+  fn forward_address(key: EdwardsPoint) -> Option<Address> {
+    Some(Self::address_internal(key, FORWARD_SUBADDRESS))
   }
 
   async fn get_latest_block_number(&self) -> Result<usize, NetworkError> {
@@ -558,7 +565,7 @@ impl Network for Monero {
     &self,
     eventualities: &mut EventualitiesTracker<Eventuality>,
     block: &Block,
-  ) -> HashMap<[u8; 32], (usize, Transaction)> {
+  ) -> HashMap<[u8; 32], (usize, [u8; 32], Transaction)> {
     let mut res = HashMap::new();
     if eventualities.map.is_empty() {
       return res;
@@ -568,13 +575,13 @@ impl Network for Monero {
       network: &Monero,
       eventualities: &mut EventualitiesTracker<Eventuality>,
       block: &Block,
-      res: &mut HashMap<[u8; 32], (usize, Transaction)>,
+      res: &mut HashMap<[u8; 32], (usize, [u8; 32], Transaction)>,
     ) {
       for hash in &block.txs {
         let tx = {
           let mut tx;
           while {
-            tx = network.get_transaction(hash).await;
+            tx = network.rpc.get_transaction(*hash).await;
             tx.is_err()
           } {
             log::error!("couldn't get transaction {}: {}", hex::encode(hash), tx.err().unwrap());
@@ -587,7 +594,7 @@ impl Network for Monero {
           if eventuality.matches(&tx) {
             res.insert(
               eventualities.map.remove(&tx.prefix.extra).unwrap().0,
-              (usize::try_from(block.number().unwrap()).unwrap(), tx),
+              (usize::try_from(block.number().unwrap()).unwrap(), tx.id(), tx),
             );
           }
         }
@@ -625,14 +632,13 @@ impl Network for Monero {
   async fn needed_fee(
     &self,
     block_number: usize,
|
block_number: usize,
|
||||||
plan_id: &[u8; 32],
|
|
||||||
inputs: &[Output],
|
inputs: &[Output],
|
||||||
payments: &[Payment<Self>],
|
payments: &[Payment<Self>],
|
||||||
change: &Option<Address>,
|
change: &Option<Address>,
|
||||||
) -> Result<Option<u64>, NetworkError> {
|
) -> Result<Option<u64>, NetworkError> {
|
||||||
Ok(
|
Ok(
|
||||||
self
|
self
|
||||||
.make_signable_transaction(block_number, plan_id, inputs, payments, change, true)
|
.make_signable_transaction(block_number, &[0; 32], inputs, payments, change, true)
|
||||||
.await?
|
.await?
|
||||||
.map(|(_, signable)| signable.fee()),
|
.map(|(_, signable)| signable.fee()),
|
||||||
)
|
)
|
||||||
|
@ -642,9 +648,11 @@ impl Network for Monero {
|
||||||
&self,
|
&self,
|
||||||
block_number: usize,
|
block_number: usize,
|
||||||
plan_id: &[u8; 32],
|
plan_id: &[u8; 32],
|
||||||
|
_key: EdwardsPoint,
|
||||||
inputs: &[Output],
|
inputs: &[Output],
|
||||||
payments: &[Payment<Self>],
|
payments: &[Payment<Self>],
|
||||||
change: &Option<Address>,
|
change: &Option<Address>,
|
||||||
|
(): &(),
|
||||||
) -> Result<Option<(Self::SignableTransaction, Self::Eventuality)>, NetworkError> {
|
) -> Result<Option<(Self::SignableTransaction, Self::Eventuality)>, NetworkError> {
|
||||||
Ok(
|
Ok(
|
||||||
self
|
self
|
||||||
|
@ -658,7 +666,7 @@ impl Network for Monero {
|
||||||
)
|
)
|
||||||
}
|
}
|
||||||
|
|
||||||
async fn attempt_send(
|
async fn attempt_sign(
|
||||||
&self,
|
&self,
|
||||||
keys: ThresholdKeys<Self::Curve>,
|
keys: ThresholdKeys<Self::Curve>,
|
||||||
transaction: SignableTransaction,
|
transaction: SignableTransaction,
|
||||||
|
@ -669,7 +677,7 @@ impl Network for Monero {
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
async fn publish_transaction(&self, tx: &Self::Transaction) -> Result<(), NetworkError> {
|
async fn publish_completion(&self, tx: &Transaction) -> Result<(), NetworkError> {
|
||||||
match self.rpc.publish_transaction(tx).await {
|
match self.rpc.publish_transaction(tx).await {
|
||||||
Ok(()) => Ok(()),
|
Ok(()) => Ok(()),
|
||||||
Err(RpcError::ConnectionError(e)) => {
|
Err(RpcError::ConnectionError(e)) => {
|
||||||
|
@ -682,12 +690,17 @@ impl Network for Monero {
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
async fn get_transaction(&self, id: &[u8; 32]) -> Result<Transaction, NetworkError> {
|
async fn confirm_completion(
|
||||||
self.rpc.get_transaction(*id).await.map_err(map_rpc_err)
|
&self,
|
||||||
}
|
eventuality: &Eventuality,
|
||||||
|
id: &[u8; 32],
|
||||||
fn confirm_completion(&self, eventuality: &Eventuality, tx: &Transaction) -> bool {
|
) -> Result<Option<Transaction>, NetworkError> {
|
||||||
eventuality.matches(tx)
|
let tx = self.rpc.get_transaction(*id).await.map_err(map_rpc_err)?;
|
||||||
|
if eventuality.matches(&tx) {
|
||||||
|
Ok(Some(tx))
|
||||||
|
} else {
|
||||||
|
Ok(None)
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
#[cfg(test)]
|
#[cfg(test)]
|
||||||
|
@ -695,6 +708,31 @@ impl Network for Monero {
|
||||||
self.rpc.get_block(*id).await.unwrap().number().unwrap().try_into().unwrap()
|
self.rpc.get_block(*id).await.unwrap().number().unwrap().try_into().unwrap()
|
||||||
}
|
}
|
||||||
|
|
||||||
|
#[cfg(test)]
|
||||||
|
async fn check_eventuality_by_claim(
|
||||||
|
&self,
|
||||||
|
eventuality: &Self::Eventuality,
|
||||||
|
claim: &[u8; 32],
|
||||||
|
) -> bool {
|
||||||
|
return eventuality.matches(&self.rpc.get_transaction(*claim).await.unwrap());
|
||||||
|
}
|
||||||
|
|
||||||
|
#[cfg(test)]
|
||||||
|
async fn get_transaction_by_eventuality(
|
||||||
|
&self,
|
||||||
|
block: usize,
|
||||||
|
eventuality: &Eventuality,
|
||||||
|
) -> Transaction {
|
||||||
|
let block = self.rpc.get_block_by_number(block).await.unwrap();
|
||||||
|
for tx in &block.txs {
|
||||||
|
let tx = self.rpc.get_transaction(*tx).await.unwrap();
|
||||||
|
if eventuality.matches(&tx) {
|
||||||
|
return tx;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
panic!("block didn't have a transaction for this eventuality")
|
||||||
|
}
|
||||||
|
|
||||||
#[cfg(test)]
|
#[cfg(test)]
|
||||||
async fn mine_block(&self) {
|
async fn mine_block(&self) {
|
||||||
// https://github.com/serai-dex/serai/issues/198
|
// https://github.com/serai-dex/serai/issues/198
|
||||||
|
@ -775,3 +813,11 @@ impl Network for Monero {
|
||||||
self.get_block(block).await.unwrap()
|
self.get_block(block).await.unwrap()
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
impl UtxoNetwork for Monero {
|
||||||
|
// wallet2 will not create a transaction larger than 100kb, and Monero won't relay a transaction
|
||||||
|
// larger than 150kb. This fits within the 100kb mark
|
||||||
|
// Technically, it can be ~124, yet a small bit of buffer is appreciated
|
||||||
|
// TODO: Test creating a TX this big
|
||||||
|
const MAX_INPUTS: usize = 120;
|
||||||
|
}
|
||||||
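The reshaped `confirm_completion` above replaces the old boolean check over a caller-fetched transaction with a fetch-then-match that yields the transaction only when it satisfies the eventuality. A minimal, self-contained sketch of that three-way outcome (resolved, mismatched claim, fetch error), using a toy `Eventuality` and a mock mempool in place of the real Monero RPC types (all names here are illustrative, not the crate's):

```rust
// Toy eventuality: pins only an expected amount (the real one checks far more)
#[derive(Clone)]
struct Eventuality {
    expected_amount: u64,
}

#[derive(Clone, Debug, PartialEq)]
struct Transaction {
    amount: u64,
}

impl Eventuality {
    fn matches(&self, tx: &Transaction) -> bool {
        tx.amount == self.expected_amount
    }
}

// Fetch by claim (here, an index into a mock mempool), then match: the caller
// receives the transaction only if it actually resolves the eventuality.
fn confirm_completion(
    mempool: &[Transaction],
    eventuality: &Eventuality,
    claim: usize,
) -> Result<Option<Transaction>, &'static str> {
    let tx = mempool.get(claim).ok_or("couldn't fetch transaction")?.clone();
    if eventuality.matches(&tx) {
        Ok(Some(tx))
    } else {
        Ok(None)
    }
}

fn main() {
    let mempool = vec![Transaction { amount: 5 }, Transaction { amount: 7 }];
    let eventuality = Eventuality { expected_amount: 7 };
    // Claim 1 resolves the eventuality, claim 0 does not, claim 9 can't be fetched
    assert_eq!(
        confirm_completion(&mempool, &eventuality, 1).unwrap(),
        Some(Transaction { amount: 7 })
    );
    assert_eq!(confirm_completion(&mempool, &eventuality, 0).unwrap(), None);
    assert!(confirm_completion(&mempool, &eventuality, 9).is_err());
}
```

Folding the fetch into the network trait is what lets the signer distinguish "claim is wrong" from "we couldn't check", which the signer's match arms rely on.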
@@ -8,7 +8,10 @@ use frost::curve::Ciphersuite;

 use serai_client::primitives::Balance;

-use crate::networks::{Output, Network};
+use crate::{
+  networks::{Output, Network},
+  multisigs::scheduler::{SchedulerAddendum, Scheduler},
+};

 #[derive(Clone, PartialEq, Eq, Debug)]
 pub struct Payment<N: Network> {

@@ -73,7 +76,7 @@ impl<N: Network> Payment<N> {
   }
 }

-#[derive(Clone, PartialEq, Eq)]
+#[derive(Clone, PartialEq)]
 pub struct Plan<N: Network> {
   pub key: <N::Curve as Ciphersuite>::G,
   pub inputs: Vec<N::Output>,

@@ -90,7 +93,11 @@ pub struct Plan<N: Network> {
   /// This MUST contain a Serai address. Operating costs may be deducted from the payments in this
   /// Plan on the premise that the change address is Serai's, and accordingly, Serai will recoup
   /// the operating costs.
+  //
+  // TODO: Consider moving to ::G?
   pub change: Option<N::Address>,
+  /// The scheduler's additional data.
+  pub scheduler_addendum: <N::Scheduler as Scheduler<N>>::Addendum,
 }
 impl<N: Network> core::fmt::Debug for Plan<N> {
   fn fmt(&self, fmt: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> {

@@ -100,6 +107,7 @@ impl<N: Network> core::fmt::Debug for Plan<N> {
       .field("inputs", &self.inputs)
       .field("payments", &self.payments)
       .field("change", &self.change.as_ref().map(ToString::to_string))
+      .field("scheduler_addendum", &self.scheduler_addendum)
       .finish()
   }
 }

@@ -125,6 +133,10 @@ impl<N: Network> Plan<N> {
       transcript.append_message(b"change", change.to_string());
     }

+    let mut addendum_bytes = vec![];
+    self.scheduler_addendum.write(&mut addendum_bytes).unwrap();
+    transcript.append_message(b"scheduler_addendum", addendum_bytes);
+
     transcript
   }

@@ -161,7 +173,8 @@ impl<N: Network> Plan<N> {
     };
     assert!(serai_client::primitives::MAX_ADDRESS_LEN <= u8::MAX.into());
     writer.write_all(&[u8::try_from(change.len()).unwrap()])?;
-    writer.write_all(&change)
+    writer.write_all(&change)?;
+    self.scheduler_addendum.write(writer)
   }

   pub fn read<R: io::Read>(reader: &mut R) -> io::Result<Self> {

@@ -193,6 +206,7 @@ impl<N: Network> Plan<N> {
       })?)
     };

-    Ok(Plan { key, inputs, payments, change })
+    let scheduler_addendum = <N::Scheduler as Scheduler<N>>::Addendum::read(reader)?;
+    Ok(Plan { key, inputs, payments, change, scheduler_addendum })
   }
 }
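The `Plan` changes above append the scheduler addendum to the wire format, immediately after the length-prefixed change address, and read it back in the same position. A self-contained sketch of that round-trip, with a hypothetical `UnitAddendum` standing in for a network's real `Addendum` type and a pared-down `Plan` (names are illustrative only):

```rust
use std::io::{self, Read, Write};

// Stand-in for a scheduler's Addendum: a single u32, little-endian on the wire
#[derive(Clone, Debug, PartialEq, Default)]
struct UnitAddendum(u32);

impl UnitAddendum {
    fn write<W: Write>(&self, writer: &mut W) -> io::Result<()> {
        writer.write_all(&self.0.to_le_bytes())
    }
    fn read<R: Read>(reader: &mut R) -> io::Result<Self> {
        let mut buf = [0; 4];
        reader.read_exact(&mut buf)?;
        Ok(UnitAddendum(u32::from_le_bytes(buf)))
    }
}

#[derive(Clone, Debug, PartialEq)]
struct Plan {
    change: Option<Vec<u8>>,
    scheduler_addendum: UnitAddendum,
}

impl Plan {
    fn write<W: Write>(&self, writer: &mut W) -> io::Result<()> {
        let change = self.change.clone().unwrap_or_default();
        writer.write_all(&[u8::try_from(change.len()).unwrap()])?;
        writer.write_all(&change)?;
        // The addendum rides at the end, as in the patched Plan::write
        self.scheduler_addendum.write(writer)
    }
    fn read<R: Read>(reader: &mut R) -> io::Result<Self> {
        let mut len = [0; 1];
        reader.read_exact(&mut len)?;
        let mut change = vec![0; usize::from(len[0])];
        reader.read_exact(&mut change)?;
        // Read the addendum from the same trailing position it was written to
        let scheduler_addendum = UnitAddendum::read(reader)?;
        let change = if change.is_empty() { None } else { Some(change) };
        Ok(Plan { change, scheduler_addendum })
    }
}

fn main() {
    let plan = Plan { change: Some(vec![1, 2, 3]), scheduler_addendum: UnitAddendum(42) };
    let mut buf = vec![];
    plan.write(&mut buf).unwrap();
    assert_eq!(Plan::read(&mut buf.as_slice()).unwrap(), plan);
}
```

Appending at the end keeps older fields at stable offsets, which is why the transcript also commits to the addendum bytes as one labeled message.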
@@ -2,7 +2,6 @@ use core::{marker::PhantomData, fmt};
 use std::collections::HashMap;

 use rand_core::OsRng;
-use ciphersuite::group::GroupEncoding;
 use frost::{
   ThresholdKeys, FrostError,
   sign::{Writable, PreprocessMachine, SignMachine, SignatureMachine},

@@ -17,7 +16,7 @@ pub use serai_db::*;

 use crate::{
   Get, DbTxn, Db,
-  networks::{Transaction, Eventuality, Network},
+  networks::{Eventuality, Network},
 };

 create_db!(

@@ -25,7 +24,7 @@ create_db!(
     CompletionsDb: (id: [u8; 32]) -> Vec<u8>,
     EventualityDb: (id: [u8; 32]) -> Vec<u8>,
     AttemptDb: (id: &SignId) -> (),
-    TransactionDb: (id: &[u8]) -> Vec<u8>,
+    CompletionDb: (claim: &[u8]) -> Vec<u8>,
     ActiveSignsDb: () -> Vec<[u8; 32]>,
     CompletedOnChainDb: (id: &[u8; 32]) -> (),
   }

@@ -59,12 +58,20 @@ impl CompletionsDb {
   fn completions<N: Network>(
     getter: &impl Get,
     id: [u8; 32],
-  ) -> Vec<<N::Transaction as Transaction<N>>::Id> {
-    let completions = Self::get(getter, id).unwrap_or_default();
+  ) -> Vec<<N::Eventuality as Eventuality>::Claim> {
+    let Some(completions) = Self::get(getter, id) else { return vec![] };
+
+    // If this was set yet is empty, it's because it's the encoding of a claim with a length of 0
+    if completions.is_empty() {
+      let default = <N::Eventuality as Eventuality>::Claim::default();
+      assert_eq!(default.as_ref().len(), 0);
+      return vec![default];
+    }
+
     let mut completions_ref = completions.as_slice();
     let mut res = vec![];
     while !completions_ref.is_empty() {
-      let mut id = <N::Transaction as Transaction<N>>::Id::default();
+      let mut id = <N::Eventuality as Eventuality>::Claim::default();
       let id_len = id.as_ref().len();
       id.as_mut().copy_from_slice(&completions_ref[.. id_len]);
       completions_ref = &completions_ref[id_len ..];

@@ -73,25 +80,37 @@ impl CompletionsDb {
     res
   }

-  fn complete<N: Network>(txn: &mut impl DbTxn, id: [u8; 32], tx: &N::Transaction) {
-    let tx_id = tx.id();
-    // Transactions can be completed by multiple signatures
+  fn complete<N: Network>(
+    txn: &mut impl DbTxn,
+    id: [u8; 32],
+    completion: &<N::Eventuality as Eventuality>::Completion,
+  ) {
+    // Completions can be completed by multiple signatures
     // Save every solution in order to be robust
-    TransactionDb::save_transaction::<N>(txn, tx);
-    let mut existing = Self::get(txn, id).unwrap_or_default();
-    // Don't add this TX if it's already present
-    let tx_len = tx_id.as_ref().len();
-    assert_eq!(existing.len() % tx_len, 0);
+    CompletionDb::save_completion::<N>(txn, completion);

-    let mut i = 0;
-    while i < existing.len() {
-      if &existing[i .. (i + tx_len)] == tx_id.as_ref() {
-        return;
-      }
-      i += tx_len;
+    let claim = N::Eventuality::claim(completion);
+    let claim: &[u8] = claim.as_ref();
+
+    // If claim has a 0-byte encoding, the set key, even if empty, is the claim
+    if claim.is_empty() {
+      Self::set(txn, id, &vec![]);
+      return;
     }

-    existing.extend(tx_id.as_ref());
+    let mut existing = Self::get(txn, id).unwrap_or_default();
+    assert_eq!(existing.len() % claim.len(), 0);
+
+    // Don't add this completion if it's already present
+    let mut i = 0;
+    while i < existing.len() {
+      if &existing[i .. (i + claim.len())] == claim {
+        return;
+      }
+      i += claim.len();
+    }
+
+    existing.extend(claim);
     Self::set(txn, id, &existing);
   }
 }

@@ -110,25 +129,33 @@ impl EventualityDb {
   }
 }

-impl TransactionDb {
-  fn save_transaction<N: Network>(txn: &mut impl DbTxn, tx: &N::Transaction) {
-    Self::set(txn, tx.id().as_ref(), &tx.serialize());
+impl CompletionDb {
+  fn save_completion<N: Network>(
+    txn: &mut impl DbTxn,
+    completion: &<N::Eventuality as Eventuality>::Completion,
+  ) {
+    let claim = N::Eventuality::claim(completion);
+    let claim: &[u8] = claim.as_ref();
+    Self::set(txn, claim, &N::Eventuality::serialize_completion(completion));
   }

-  fn transaction<N: Network>(
+  fn completion<N: Network>(
     getter: &impl Get,
-    id: &<N::Transaction as Transaction<N>>::Id,
-  ) -> Option<N::Transaction> {
-    Self::get(getter, id.as_ref()).map(|tx| N::Transaction::read(&mut tx.as_slice()).unwrap())
+    claim: &<N::Eventuality as Eventuality>::Claim,
+  ) -> Option<<N::Eventuality as Eventuality>::Completion> {
+    Self::get(getter, claim.as_ref())
+      .map(|completion| N::Eventuality::read_completion::<&[u8]>(&mut completion.as_ref()).unwrap())
   }
 }
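The `CompletionsDb` rewrite above encodes completions as fixed-width claims concatenated under one key, deduplicates appends, and special-cases zero-length claims ("key set, value empty" still means one completion exists). A self-contained model of that encoding, using a `HashMap` in place of the real DB (function and type names here are illustrative):

```rust
use std::collections::HashMap;

// Record a claim under `id`, mirroring the dedup and 0-byte-claim handling
fn complete(db: &mut HashMap<[u8; 32], Vec<u8>>, id: [u8; 32], claim: &[u8]) {
    if claim.is_empty() {
        // The set key itself is the record for a 0-byte claim
        db.insert(id, vec![]);
        return;
    }
    let existing = db.entry(id).or_default();
    assert_eq!(existing.len() % claim.len(), 0);
    // Don't add this claim if it's already present
    if existing.chunks(claim.len()).any(|chunk| chunk == claim) {
        return;
    }
    existing.extend(claim);
}

// Decode the concatenation back into individual fixed-width claims
fn completions(db: &HashMap<[u8; 32], Vec<u8>>, id: [u8; 32], claim_len: usize) -> Vec<Vec<u8>> {
    let Some(encoded) = db.get(&id) else { return vec![] };
    if encoded.is_empty() {
        // The encoding of a single zero-length claim
        return vec![vec![]];
    }
    encoded.chunks(claim_len).map(<[u8]>::to_vec).collect()
}

fn main() {
    let mut db = HashMap::new();
    let id = [0; 32];
    complete(&mut db, id, &[1, 1, 1, 1]);
    complete(&mut db, id, &[2, 2, 2, 2]);
    complete(&mut db, id, &[1, 1, 1, 1]); // duplicate, ignored
    assert_eq!(completions(&db, id, 4), vec![vec![1, 1, 1, 1], vec![2, 2, 2, 2]]);
}
```

The fixed width is what makes the flat concatenation decodable without per-entry length prefixes; the zero-length case needs the "present but empty" sentinel precisely because length-0 chunks would otherwise decode to nothing.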

 type PreprocessFor<N> = <<N as Network>::TransactionMachine as PreprocessMachine>::Preprocess;
 type SignMachineFor<N> = <<N as Network>::TransactionMachine as PreprocessMachine>::SignMachine;
-type SignatureShareFor<N> =
-  <SignMachineFor<N> as SignMachine<<N as Network>::Transaction>>::SignatureShare;
-type SignatureMachineFor<N> =
-  <SignMachineFor<N> as SignMachine<<N as Network>::Transaction>>::SignatureMachine;
+type SignatureShareFor<N> = <SignMachineFor<N> as SignMachine<
+  <<N as Network>::Eventuality as Eventuality>::Completion,
+>>::SignatureShare;
+type SignatureMachineFor<N> = <SignMachineFor<N> as SignMachine<
+  <<N as Network>::Eventuality as Eventuality>::Completion,
+>>::SignatureMachine;

 pub struct Signer<N: Network, D: Db> {
   db: PhantomData<D>,

@@ -164,12 +191,11 @@ impl<N: Network, D: Db> Signer<N, D> {
     log::info!("rebroadcasting transactions for plans whose completions yet to be confirmed...");
     loop {
       for active in ActiveSignsDb::get(&db).unwrap_or_default() {
-        for completion in CompletionsDb::completions::<N>(&db, active) {
-          log::info!("rebroadcasting {}", hex::encode(&completion));
+        for claim in CompletionsDb::completions::<N>(&db, active) {
+          log::info!("rebroadcasting completion with claim {}", hex::encode(claim.as_ref()));
           // TODO: Don't drop the error entirely. Check for invariants
-          let _ = network
-            .publish_transaction(&TransactionDb::transaction::<N>(&db, &completion).unwrap())
-            .await;
+          let _ =
+            network.publish_completion(&CompletionDb::completion::<N>(&db, &claim).unwrap()).await;
         }
       }
       // Only run every five minutes so we aren't frequently loading tens to hundreds of KB from

@@ -242,7 +268,7 @@ impl<N: Network, D: Db> Signer<N, D> {
   fn complete(
     &mut self,
     id: [u8; 32],
-    tx_id: &<N::Transaction as Transaction<N>>::Id,
+    claim: &<N::Eventuality as Eventuality>::Claim,
   ) -> ProcessorMessage {
     // Assert we're actively signing for this TX
     assert!(self.signable.remove(&id).is_some(), "completed a TX we weren't signing for");

@@ -256,7 +282,7 @@ impl<N: Network, D: Db> Signer<N, D> {
     self.signing.remove(&id);

     // Emit the event for it
-    ProcessorMessage::Completed { session: self.session, id, tx: tx_id.as_ref().to_vec() }
+    ProcessorMessage::Completed { session: self.session, id, tx: claim.as_ref().to_vec() }
   }

   #[must_use]

@@ -264,16 +290,16 @@ impl<N: Network, D: Db> Signer<N, D> {
     &mut self,
     txn: &mut D::Transaction<'_>,
     id: [u8; 32],
-    tx: &N::Transaction,
+    completion: &<N::Eventuality as Eventuality>::Completion,
   ) -> Option<ProcessorMessage> {
     let first_completion = !Self::already_completed(txn, id);

     // Save this completion to the DB
     CompletedOnChainDb::complete_on_chain(txn, &id);
-    CompletionsDb::complete::<N>(txn, id, tx);
+    CompletionsDb::complete::<N>(txn, id, completion);

     if first_completion {
-      Some(self.complete(id, &tx.id()))
+      Some(self.complete(id, &N::Eventuality::claim(completion)))
     } else {
       None
     }

@@ -286,49 +312,50 @@ impl<N: Network, D: Db> Signer<N, D> {
     &mut self,
     txn: &mut D::Transaction<'_>,
     id: [u8; 32],
-    tx_id: &<N::Transaction as Transaction<N>>::Id,
+    claim: &<N::Eventuality as Eventuality>::Claim,
   ) -> Option<ProcessorMessage> {
     if let Some(eventuality) = EventualityDb::eventuality::<N>(txn, id) {
-      // Transaction hasn't hit our mempool/was dropped for a different signature
-      // The latter can happen given certain latency conditions/a single malicious signer
-      // In the case of a single malicious signer, they can drag multiple honest validators down
-      // with them, so we unfortunately can't slash on this case
-      let Ok(tx) = self.network.get_transaction(tx_id).await else {
-        warn!(
-          "a validator claimed {} completed {} yet we didn't have that TX in our mempool {}",
-          hex::encode(tx_id),
-          hex::encode(id),
-          "(or had another connectivity issue)",
-        );
-        return None;
-      };
+      match self.network.confirm_completion(&eventuality, claim).await {
+        Ok(Some(completion)) => {
+          info!(
+            "signer eventuality for {} resolved in {}",
+            hex::encode(id),
+            hex::encode(claim.as_ref())
+          );

-      if self.network.confirm_completion(&eventuality, &tx) {
-        info!("signer eventuality for {} resolved in TX {}", hex::encode(id), hex::encode(tx_id));
+          let first_completion = !Self::already_completed(txn, id);

-        let first_completion = !Self::already_completed(txn, id);
+          // Save this completion to the DB
+          CompletionsDb::complete::<N>(txn, id, &completion);

-        // Save this completion to the DB
-        CompletionsDb::complete::<N>(txn, id, &tx);
-        if first_completion {
-          return Some(self.complete(id, &tx.id()));
+          if first_completion {
+            return Some(self.complete(id, claim));
+          }
+        }
+        Ok(None) => {
+          warn!(
+            "a validator claimed {} completed {} when it did not",
+            hex::encode(claim.as_ref()),
+            hex::encode(id),
+          );
+        }
+        Err(_) => {
+          // Transaction hasn't hit our mempool/was dropped for a different signature
+          // The latter can happen given certain latency conditions/a single malicious signer
+          // In the case of a single malicious signer, they can drag multiple honest validators down
+          // with them, so we unfortunately can't slash on this case
+          warn!(
+            "a validator claimed {} completed {} yet we couldn't check that claim",
+            hex::encode(claim.as_ref()),
+            hex::encode(id),
+          );
         }
-      } else {
-        warn!(
-          "a validator claimed {} completed {} when it did not",
-          hex::encode(tx_id),
-          hex::encode(id)
-        );
       }
     } else {
-      // If we don't have this in RAM, it should be because we already finished signing it
-      assert!(!CompletionsDb::completions::<N>(txn, id).is_empty());
-      info!(
-        "signer {} informed of the eventuality completion for plan {}, {}",
-        hex::encode(self.keys[0].group_key().to_bytes()),
+      warn!(
+        "informed of completion {} for eventuality {}, when we didn't have that eventuality",
+        hex::encode(claim.as_ref()),
         hex::encode(id),
-        "which we already marked as completed",
       );
     }
     None

@@ -405,7 +432,7 @@ impl<N: Network, D: Db> Signer<N, D> {
     let mut preprocesses = vec![];
     let mut serialized_preprocesses = vec![];
     for keys in &self.keys {
-      let machine = match self.network.attempt_send(keys.clone(), tx.clone()).await {
+      let machine = match self.network.attempt_sign(keys.clone(), tx.clone()).await {
         Err(e) => {
           error!("failed to attempt {}, #{}: {:?}", hex::encode(id.id), id.attempt, e);
           return None;

@@ -572,7 +599,7 @@ impl<N: Network, D: Db> Signer<N, D> {
           assert!(shares.insert(self.keys[i].params().i(), our_share).is_none());
         }

-        let tx = match machine.complete(shares) {
+        let completion = match machine.complete(shares) {
           Ok(res) => res,
           Err(e) => match e {
             FrostError::InternalError(_) |

@@ -588,40 +615,39 @@ impl<N: Network, D: Db> Signer<N, D> {
           },
         };

-        // Save the transaction in case it's needed for recovery
-        CompletionsDb::complete::<N>(txn, id.id, &tx);
+        // Save the completion in case it's needed for recovery
+        CompletionsDb::complete::<N>(txn, id.id, &completion);

         // Publish it
-        let tx_id = tx.id();
-        if let Err(e) = self.network.publish_transaction(&tx).await {
-          error!("couldn't publish {:?}: {:?}", tx, e);
+        if let Err(e) = self.network.publish_completion(&completion).await {
+          error!("couldn't publish completion for plan {}: {:?}", hex::encode(id.id), e);
         } else {
-          info!("published {} for plan {}", hex::encode(&tx_id), hex::encode(id.id));
+          info!("published completion for plan {}", hex::encode(id.id));
         }

         // Stop trying to sign for this TX
-        Some(self.complete(id.id, &tx_id))
+        Some(self.complete(id.id, &N::Eventuality::claim(&completion)))
       }

       CoordinatorMessage::Reattempt { id } => self.attempt(txn, id.id, id.attempt).await,

-      CoordinatorMessage::Completed { session: _, id, tx: mut tx_vec } => {
-        let mut tx = <N::Transaction as Transaction<N>>::Id::default();
-        if tx.as_ref().len() != tx_vec.len() {
-          let true_len = tx_vec.len();
-          tx_vec.truncate(2 * tx.as_ref().len());
+      CoordinatorMessage::Completed { session: _, id, tx: mut claim_vec } => {
+        let mut claim = <N::Eventuality as Eventuality>::Claim::default();
+        if claim.as_ref().len() != claim_vec.len() {
+          let true_len = claim_vec.len();
+          claim_vec.truncate(2 * claim.as_ref().len());
           warn!(
             "a validator claimed {}... (actual length {}) completed {} yet {}",
-            hex::encode(&tx_vec),
+            hex::encode(&claim_vec),
             true_len,
             hex::encode(id),
-            "that's not a valid TX ID",
+            "that's not a valid Claim",
           );
           return None;
         }
-        tx.as_mut().copy_from_slice(&tx_vec);
+        claim.as_mut().copy_from_slice(&claim_vec);

-        self.claimed_eventuality_completion(txn, id, &tx).await
+        self.claimed_eventuality_completion(txn, id, &claim).await
       }
     }
   }
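The `CoordinatorMessage::Completed` handler above validates the length of an untrusted byte vector before copying it into a fixed-size `Claim`, truncating what it echoes into the log. A sketch of that guard, with the claim modeled as `[u8; 32]` and a local hex helper (both hypothetical stand-ins for the real `Claim` type and `hex` crate):

```rust
// Reject a wrong-length vector before copy_from_slice can panic; bound what
// we report back, as the handler bounds what it logs.
fn parse_claim(mut claim_vec: Vec<u8>) -> Result<[u8; 32], String> {
    let mut claim = [0u8; 32];
    if claim.len() != claim_vec.len() {
        let true_len = claim_vec.len();
        claim_vec.truncate(2 * claim.len());
        return Err(format!(
            "claim {}... (actual length {}) is not a valid Claim",
            hex_encode(&claim_vec),
            true_len,
        ));
    }
    claim.copy_from_slice(&claim_vec);
    Ok(claim)
}

// Minimal hex helper so the sketch doesn't depend on the `hex` crate
fn hex_encode(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}

fn main() {
    assert!(parse_claim(vec![0xab; 32]).is_ok());
    assert!(parse_claim(vec![0xab; 31]).is_err());
    // Oversized input is truncated before being echoed in the error
    assert!(parse_claim(vec![0xab; 1000]).is_err());
}
```

Since the message arrives from other validators, the length check is the only thing standing between attacker-controlled bytes and a panic in `copy_from_slice`, which is why it precedes the copy rather than relying on it.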
@@ -13,18 +13,23 @@ use serai_db::{DbTxn, MemDb};

 use crate::{
   Plan, Db,
-  networks::{OutputType, Output, Block, Network},
-  multisigs::scanner::{ScannerEvent, Scanner, ScannerHandle},
+  networks::{OutputType, Output, Block, UtxoNetwork},
+  multisigs::{
+    scheduler::Scheduler,
+    scanner::{ScannerEvent, Scanner, ScannerHandle},
+  },
   tests::sign,
 };

-async fn spend<N: Network, D: Db>(
+async fn spend<N: UtxoNetwork, D: Db>(
   db: &mut D,
   network: &N,
   keys: &HashMap<Participant, ThresholdKeys<N::Curve>>,
   scanner: &mut ScannerHandle<N, D>,
   outputs: Vec<N::Output>,
-) {
+) where
+  <N::Scheduler as Scheduler<N>>::Addendum: From<()>,
+{
   let key = keys[&Participant::new(1).unwrap()].group_key();

   let mut keys_txs = HashMap::new();

@@ -41,7 +46,8 @@ async fn spend<N: Network, D: Db>(
           key,
           inputs: outputs.clone(),
           payments: vec![],
-          change: Some(N::change_address(key)),
+          change: Some(N::change_address(key).unwrap()),
+          scheduler_addendum: ().into(),
         },
         0,
       )

@@ -70,13 +76,16 @@ async fn spend<N: Network, D: Db>(
       scanner.release_lock().await;
       txn.commit();
     }
-    ScannerEvent::Completed(_, _, _, _) => {
+    ScannerEvent::Completed(_, _, _, _, _) => {
       panic!("unexpectedly got eventuality completion");
     }
   }
 }

-pub async fn test_addresses<N: Network>(network: N) {
+pub async fn test_addresses<N: UtxoNetwork>(network: N)
+where
+  <N::Scheduler as Scheduler<N>>::Addendum: From<()>,
+{
   let mut keys = frost::tests::key_gen::<_, N::Curve>(&mut OsRng);
   for keys in keys.values_mut() {
     N::tweak_keys(keys);

@@ -101,10 +110,10 @@ pub async fn test_addresses<N: Network>(network: N) {
   // Receive funds to the various addresses and make sure they're properly identified
   let mut received_outputs = vec![];
   for (kind, address) in [
-    (OutputType::External, N::external_address(key)),
-    (OutputType::Branch, N::branch_address(key)),
-    (OutputType::Change, N::change_address(key)),
-    (OutputType::Forwarded, N::forward_address(key)),
+    (OutputType::External, N::external_address(&network, key).await),
+    (OutputType::Branch, N::branch_address(key).unwrap()),
+    (OutputType::Change, N::change_address(key).unwrap()),
+    (OutputType::Forwarded, N::forward_address(key).unwrap()),
   ] {
     let block_id = network.test_send(address).await.id();

@@ -123,7 +132,7 @@ pub async fn test_addresses<N: Network>(network: N) {
       txn.commit();
       received_outputs.extend(outputs);
     }
-    ScannerEvent::Completed(_, _, _, _) => {
+    ScannerEvent::Completed(_, _, _, _, _) => {
       panic!("unexpectedly got eventuality completion");
     }
   };
|
|
|
@@ -65,7 +65,7 @@ mod bitcoin {
       .unwrap();
     <Bitcoin as Network>::tweak_keys(&mut keys);
     let group_key = keys.group_key();
-    let serai_btc_address = <Bitcoin as Network>::external_address(group_key);
+    let serai_btc_address = <Bitcoin as Network>::external_address(&btc, group_key).await;

     // btc key pair to send from
     let private_key = PrivateKey::new(SecretKey::new(&mut rand_core::OsRng), BNetwork::Regtest);
@@ -11,11 +11,11 @@ use tokio::{sync::Mutex, time::timeout};
 use serai_db::{DbTxn, Db, MemDb};

 use crate::{
-  networks::{OutputType, Output, Block, Network},
+  networks::{OutputType, Output, Block, UtxoNetwork},
   multisigs::scanner::{ScannerEvent, Scanner, ScannerHandle},
 };

-pub async fn new_scanner<N: Network, D: Db>(
+pub async fn new_scanner<N: UtxoNetwork, D: Db>(
   network: &N,
   db: &D,
   group_key: <N::Curve as Ciphersuite>::G,

@@ -40,7 +40,7 @@ pub async fn new_scanner<N: Network, D: Db>(
   scanner
 }

-pub async fn test_scanner<N: Network>(network: N) {
+pub async fn test_scanner<N: UtxoNetwork>(network: N) {
   let mut keys =
     frost::tests::key_gen::<_, N::Curve>(&mut OsRng).remove(&Participant::new(1).unwrap()).unwrap();
   N::tweak_keys(&mut keys);

@@ -56,7 +56,7 @@ pub async fn test_scanner<N: Network>(network: N) {
   let scanner = new_scanner(&network, &db, group_key, &first).await;

   // Receive funds
-  let block = network.test_send(N::external_address(keys.group_key())).await;
+  let block = network.test_send(N::external_address(&network, keys.group_key()).await).await;
   let block_id = block.id();

   // Verify the Scanner picked them up

@@ -71,7 +71,7 @@ pub async fn test_scanner<N: Network>(network: N) {
       assert_eq!(outputs[0].kind(), OutputType::External);
       outputs
     }
-    ScannerEvent::Completed(_, _, _, _) => {
+    ScannerEvent::Completed(_, _, _, _, _) => {
       panic!("unexpectedly got eventuality completion");
     }
   };

@@ -101,7 +101,7 @@ pub async fn test_scanner<N: Network>(network: N) {
     .is_err());
 }

-pub async fn test_no_deadlock_in_multisig_completed<N: Network>(network: N) {
+pub async fn test_no_deadlock_in_multisig_completed<N: UtxoNetwork>(network: N) {
   // Mine blocks so there's a confirmed block
   for _ in 0 .. N::CONFIRMATIONS {
     network.mine_block().await;

@@ -142,14 +142,14 @@ pub async fn test_no_deadlock_in_multisig_completed<N: Network>(network: N) {
       assert!(!is_retirement_block);
       block
     }
-    ScannerEvent::Completed(_, _, _, _) => {
+    ScannerEvent::Completed(_, _, _, _, _) => {
       panic!("unexpectedly got eventuality completion");
     }
   };

   match timeout(Duration::from_secs(30), scanner.events.recv()).await.unwrap().unwrap() {
     ScannerEvent::Block { .. } => {}
-    ScannerEvent::Completed(_, _, _, _) => {
+    ScannerEvent::Completed(_, _, _, _, _) => {
       panic!("unexpectedly got eventuality completion");
     }
   };
@@ -17,19 +17,20 @@ use serai_client::{
 use messages::sign::*;
 use crate::{
   Payment, Plan,
-  networks::{Output, Transaction, Network},
+  networks::{Output, Transaction, Eventuality, UtxoNetwork},
+  multisigs::scheduler::Scheduler,
   signer::Signer,
 };

 #[allow(clippy::type_complexity)]
-pub async fn sign<N: Network>(
+pub async fn sign<N: UtxoNetwork>(
   network: N,
   session: Session,
   mut keys_txs: HashMap<
     Participant,
     (ThresholdKeys<N::Curve>, (N::SignableTransaction, N::Eventuality)),
   >,
-) -> <N::Transaction as Transaction<N>>::Id {
+) -> <N::Eventuality as Eventuality>::Claim {
   let actual_id = SignId { session, id: [0xaa; 32], attempt: 0 };

   let mut keys = HashMap::new();

@@ -65,14 +66,15 @@ pub async fn sign<N: Network>(

   let mut preprocesses = HashMap::new();

+  let mut eventuality = None;
   for i in 1 ..= signers.len() {
     let i = Participant::new(u16::try_from(i).unwrap()).unwrap();
-    let (tx, eventuality) = txs.remove(&i).unwrap();
+    let (tx, this_eventuality) = txs.remove(&i).unwrap();
     let mut txn = dbs.get_mut(&i).unwrap().txn();
     match signers
       .get_mut(&i)
       .unwrap()
-      .sign_transaction(&mut txn, actual_id.id, tx, &eventuality)
+      .sign_transaction(&mut txn, actual_id.id, tx, &this_eventuality)
       .await
     {
       // All participants should emit a preprocess

@@ -86,6 +88,11 @@ pub async fn sign<N: Network>(
       _ => panic!("didn't get preprocess back"),
     }
     txn.commit();

+    if eventuality.is_none() {
+      eventuality = Some(this_eventuality.clone());
+    }
+    assert_eq!(eventuality, Some(this_eventuality));
   }

   let mut shares = HashMap::new();

@@ -140,19 +147,25 @@ pub async fn sign<N: Network>(
     txn.commit();
   }

-  let mut typed_tx_id = <N::Transaction as Transaction<N>>::Id::default();
-  typed_tx_id.as_mut().copy_from_slice(tx_id.unwrap().as_ref());
-  typed_tx_id
+  let mut typed_claim = <N::Eventuality as Eventuality>::Claim::default();
+  typed_claim.as_mut().copy_from_slice(tx_id.unwrap().as_ref());
+  assert!(network.check_eventuality_by_claim(&eventuality.unwrap(), &typed_claim).await);
+  typed_claim
 }

-pub async fn test_signer<N: Network>(network: N) {
+pub async fn test_signer<N: UtxoNetwork>(network: N)
+where
+  <N::Scheduler as Scheduler<N>>::Addendum: From<()>,
+{
   let mut keys = key_gen(&mut OsRng);
   for keys in keys.values_mut() {
     N::tweak_keys(keys);
   }
   let key = keys[&Participant::new(1).unwrap()].group_key();

-  let outputs = network.get_outputs(&network.test_send(N::external_address(key)).await, key).await;
+  let outputs = network
+    .get_outputs(&network.test_send(N::external_address(&network, key).await).await, key)
+    .await;
   let sync_block = network.get_latest_block_number().await.unwrap() - N::CONFIRMATIONS;

   let amount = 2 * N::DUST;

@@ -166,7 +179,7 @@ pub async fn test_signer<N: Network>(network: N) {
       key,
       inputs: outputs.clone(),
       payments: vec![Payment {
-        address: N::external_address(key),
+        address: N::external_address(&network, key).await,
         data: None,
         balance: Balance {
           coin: match N::NETWORK {

@@ -178,7 +191,8 @@ pub async fn test_signer<N: Network>(network: N) {
           amount: Amount(amount),
         },
       }],
-      change: Some(N::change_address(key)),
+      change: Some(N::change_address(key).unwrap()),
+      scheduler_addendum: ().into(),
     },
     0,
   )

@@ -191,13 +205,12 @@ pub async fn test_signer<N: Network>(network: N) {
     keys_txs.insert(i, (keys, (signable, eventuality)));
   }

-  // The signer may not publish the TX if it has a connection error
-  // It doesn't fail in this case
-  let txid = sign(network.clone(), Session(0), keys_txs).await;
-  let tx = network.get_transaction(&txid).await.unwrap();
-  assert_eq!(tx.id(), txid);
+  let claim = sign(network.clone(), Session(0), keys_txs).await;
   // Mine a block, and scan it, to ensure that the TX actually made it on chain
   network.mine_block().await;
+  let block_number = network.get_latest_block_number().await.unwrap();
+  let tx = network.get_transaction_by_eventuality(block_number, &eventualities[0]).await;
   let outputs = network
     .get_outputs(
       &network.get_block(network.get_latest_block_number().await.unwrap()).await.unwrap(),

@@ -212,6 +225,7 @@ pub async fn test_signer<N: Network>(network: N) {

   // Check the eventualities pass
   for eventuality in eventualities {
-    assert!(network.confirm_completion(&eventuality, &tx));
+    let completion = network.confirm_completion(&eventuality, &claim).await.unwrap().unwrap();
+    assert_eq!(N::Eventuality::claim(&completion), claim);
   }
 }
@@ -15,7 +15,7 @@ use serai_client::{

 use crate::{
   Payment, Plan,
-  networks::{Output, Transaction, Block, Network},
+  networks::{Output, Transaction, Eventuality, Block, UtxoNetwork},
   multisigs::{
     scanner::{ScannerEvent, Scanner},
     scheduler::Scheduler,

@@ -24,7 +24,7 @@ use crate::{
 };

 // Tests the Scanner, Scheduler, and Signer together
-pub async fn test_wallet<N: Network>(network: N) {
+pub async fn test_wallet<N: UtxoNetwork>(network: N) {
   // Mine blocks so there's a confirmed block
   for _ in 0 .. N::CONFIRMATIONS {
     network.mine_block().await;

@@ -47,7 +47,7 @@ pub async fn test_wallet<N: Network>(network: N) {
     network.mine_block().await;
   }

-  let block = network.test_send(N::external_address(key)).await;
+  let block = network.test_send(N::external_address(&network, key).await).await;
   let block_id = block.id();

   match timeout(Duration::from_secs(30), scanner.events.recv()).await.unwrap().unwrap() {

@@ -58,7 +58,7 @@ pub async fn test_wallet<N: Network>(network: N) {
       assert_eq!(outputs.len(), 1);
       (block_id, outputs)
     }
-    ScannerEvent::Completed(_, _, _, _) => {
+    ScannerEvent::Completed(_, _, _, _, _) => {
       panic!("unexpectedly got eventuality completion");
     }
   }

@@ -69,22 +69,13 @@ pub async fn test_wallet<N: Network>(network: N) {
   txn.commit();

   let mut txn = db.txn();
-  let mut scheduler = Scheduler::new::<MemDb>(
-    &mut txn,
-    key,
-    match N::NETWORK {
-      NetworkId::Serai => panic!("test_wallet called with Serai"),
-      NetworkId::Bitcoin => Coin::Bitcoin,
-      NetworkId::Ethereum => Coin::Ether,
-      NetworkId::Monero => Coin::Monero,
-    },
-  );
+  let mut scheduler = N::Scheduler::new::<MemDb>(&mut txn, key, N::NETWORK);
   let amount = 2 * N::DUST;
   let plans = scheduler.schedule::<MemDb>(
     &mut txn,
     outputs.clone(),
     vec![Payment {
-      address: N::external_address(key),
+      address: N::external_address(&network, key).await,
       data: None,
       balance: Balance {
         coin: match N::NETWORK {

@@ -100,27 +91,26 @@ pub async fn test_wallet<N: Network>(network: N) {
     false,
   );
   txn.commit();
+  assert_eq!(plans.len(), 1);
+  assert_eq!(plans[0].key, key);
+  assert_eq!(plans[0].inputs, outputs);
   assert_eq!(
-    plans,
-    vec![Plan {
-      key,
-      inputs: outputs.clone(),
-      payments: vec![Payment {
-        address: N::external_address(key),
-        data: None,
-        balance: Balance {
-          coin: match N::NETWORK {
-            NetworkId::Serai => panic!("test_wallet called with Serai"),
-            NetworkId::Bitcoin => Coin::Bitcoin,
-            NetworkId::Ethereum => Coin::Ether,
-            NetworkId::Monero => Coin::Monero,
-          },
-          amount: Amount(amount),
-        }
-      }],
-      change: Some(N::change_address(key)),
+    plans[0].payments,
+    vec![Payment {
+      address: N::external_address(&network, key).await,
+      data: None,
+      balance: Balance {
+        coin: match N::NETWORK {
+          NetworkId::Serai => panic!("test_wallet called with Serai"),
+          NetworkId::Bitcoin => Coin::Bitcoin,
+          NetworkId::Ethereum => Coin::Ether,
+          NetworkId::Monero => Coin::Monero,
+        },
+        amount: Amount(amount),
+      }
     }]
   );
+  assert_eq!(plans[0].change, Some(N::change_address(key).unwrap()));

   {
     let mut buf = vec![];

@@ -143,10 +133,10 @@ pub async fn test_wallet<N: Network>(network: N) {
     keys_txs.insert(i, (keys, (signable, eventuality)));
   }

-  let txid = sign(network.clone(), Session(0), keys_txs).await;
-  let tx = network.get_transaction(&txid).await.unwrap();
+  let claim = sign(network.clone(), Session(0), keys_txs).await;
   network.mine_block().await;
   let block_number = network.get_latest_block_number().await.unwrap();
+  let tx = network.get_transaction_by_eventuality(block_number, &eventualities[0]).await;
   let block = network.get_block(block_number).await.unwrap();
   let outputs = network.get_outputs(&block, key).await;
   assert_eq!(outputs.len(), 2);

@@ -154,7 +144,8 @@ pub async fn test_wallet<N: Network>(network: N) {
   assert!((outputs[0].balance().amount.0 == amount) || (outputs[1].balance().amount.0 == amount));

   for eventuality in eventualities {
-    assert!(network.confirm_completion(&eventuality, &tx));
+    let completion = network.confirm_completion(&eventuality, &claim).await.unwrap().unwrap();
+    assert_eq!(N::Eventuality::claim(&completion), claim);
   }

   for _ in 1 .. N::CONFIRMATIONS {

@@ -168,7 +159,7 @@ pub async fn test_wallet<N: Network>(network: N) {
     assert_eq!(block_id, block.id());
     assert_eq!(these_outputs, outputs);
   }
-  ScannerEvent::Completed(_, _, _, _) => {
+  ScannerEvent::Completed(_, _, _, _, _) => {
    panic!("unexpectedly got eventuality completion");
  }
}
@@ -15,24 +15,11 @@ is the caller.

 `data` is limited to 512 bytes.

-If `data` is provided, the Ethereum Router will call a contract-calling child
-contract in order to sandbox it. The first byte of `data` designates which
-child contract to call. After this byte is read, `data` is solely considered as
-`data`, post its first byte. The child contract is sent the funds before this
-call is performed.
-
-##### Child Contract 0
-
-This contract is intended to enable connecting with other protocols, and should
-be used to convert withdrawn assets to other assets on Ethereum.
-
-1) Transfers the asset to `destination`.
-2) Calls `destination` with `data`.
-
-##### Child Contract 1
-
-This contract is intended to enable authenticated calls from Serai.
-
-1) Transfers the asset to `destination`.
-2) Calls `destination` with `data[.. 4], serai_address, data[4 ..]`, where
-`serai_address` is the address which triggered this Out Instruction.
+If `data` isn't provided or is malformed, ETH transfers will execute with 5,000
+gas and token transfers with 100,000 gas.
+
+If `data` is provided and well-formed, `destination` is ignored and the Ethereum
+Router will construct and call a new contract to proxy the contained calls. The
+transfer executes to the constructed contract as above, before the constructed
+contract is called with the calls inside `data`. The sandboxed execution has a
+gas limit of 350,000.
@@ -416,7 +416,11 @@ impl Coordinator {
     }
   }

-  pub async fn get_transaction(&self, ops: &DockerOperations, tx: &[u8]) -> Option<Vec<u8>> {
+  pub async fn get_published_transaction(
+    &self,
+    ops: &DockerOperations,
+    tx: &[u8],
+  ) -> Option<Vec<u8>> {
     let rpc_url = network_rpc(self.network, ops, &self.network_handle);
     match self.network {
       NetworkId::Bitcoin => {

@@ -424,8 +428,15 @@ impl Coordinator {

         let rpc =
           Rpc::new(rpc_url).await.expect("couldn't connect to the coordinator's Bitcoin RPC");

+        // Bitcoin publishes a 0-byte TX ID to reduce variables
+        // Accordingly, read the mempool to find the (presumed relevant) TX
+        let entries: Vec<String> =
+          rpc.rpc_call("getrawmempool", serde_json::json!([false])).await.unwrap();
+        assert_eq!(entries.len(), 1, "more than one entry in the mempool, so unclear which to get");
+
         let mut hash = [0; 32];
-        hash.copy_from_slice(tx);
+        hash.copy_from_slice(&hex::decode(&entries[0]).unwrap());
         if let Ok(tx) = rpc.get_transaction(&hash).await {
           let mut buf = vec![];
           tx.consensus_encode(&mut buf).unwrap();
@@ -261,12 +261,12 @@ fn send_test() {
     let participating =
       participating.iter().map(|p| usize::from(u16::from(*p) - 1)).collect::<HashSet<_>>();
     for participant in &participating {
-      assert!(coordinators[*participant].get_transaction(&ops, &tx_id).await.is_some());
+      assert!(coordinators[*participant].get_published_transaction(&ops, &tx_id).await.is_some());
     }

     // Publish this transaction to the left out nodes
     let tx = coordinators[*participating.iter().next().unwrap()]
-      .get_transaction(&ops, &tx_id)
+      .get_published_transaction(&ops, &tx_id)
       .await
       .unwrap();
     for (i, coordinator) in coordinators.iter_mut().enumerate() {