Mirror of https://github.com/serai-dex/serai.git
Synced 2024-12-27 14:09:48 +00:00, commit e4e4245ee3
* Upstream GBP, divisor, circuit abstraction, and EC gadgets from FCMP++
* Initial eVRF implementation. Not quite done yet: it still needs to communicate the resulting points and proofs, extract them from the Pedersen Commitments in order to return them, and be tested.
* Add the openings of the PCs to the eVRF as necessary
* Add implementation of secq256k1
* Make DKG Encryption a bit more flexible. No longer requires the use of an EncryptionKeyMessage, and allows pre-defined keys for encryption.
* Make NUM_BITS an argument for the field macro
* Have the eVRF take a Zeroizing private key
* Initial eVRF-based DKG
* Add embedwards25519 curve
* Inline the eVRF into the DKG library. Due to how we're handling share encryption, we'd either need two circuits or to dedicate this circuit to the DKG. The latter makes sense at this time.
* Add documentation to the eVRF-based DKG
* Add paragraph claiming robustness
* Update to the new eVRF proof
* Finish routing the eVRF functionality. Still needs errors and serialization, along with a few other TODOs.
* Add initial eVRF DKG test
* Improve eVRF DKG. Updates how we calculate verification shares, improves performance when extracting multiple sets of keys, and adds more to the test for it.
* Start using a proper error for the eVRF DKG
* Resolve various TODOs. Supports recovering multiple key shares from the eVRF DKG. Inlines two loops to save 2**16 iterations. Adds support for creating a constant-time representation of scalars < NUM_BITS.
* Ban zero ECDH keys, document non-zero requirements
* Implement eVRF traits, all the way up to the DKG, for secp256k1/ed25519
* Add Ristretto eVRF trait impls
* Support participating multiple times in the eVRF DKG
* Only participate once per key, not once per key share
* Rewrite processor key-gen around the eVRF DKG. Still a WIP.
* Finish routing the new key gen in the processor. Doesn't touch the tests, coordinator, nor Substrate yet. `cargo +nightly fmt && cargo +nightly-2024-07-01 clippy --all-features -p serai-processor` does pass.
* Deduplicate and better document in processor key_gen
* Update serai-processor tests to the new key gen
* Correct the amount of yx coefficients, get processor key gen test to pass
* Add embedded elliptic curve keys to Substrate
* Update processor key gen tests to the eVRF DKG
* Have set_keys take signature_participants, not removed_participants. Now no one is removed from the DKG; only `t` people publish the key, however. Uses a BitVec for an efficient encoding of the participants.
* Update the coordinator binary for the new DKG. This does not yet update any tests.
* Add sensible Debug to key_gen::[Processor, Coordinator]Message
* Have the DKG explicitly declare how to interpolate its shares. Removes the hack for MuSig where we multiply keys by the inverse of their Lagrange interpolation factor.
* Replace Interpolation::None with Interpolation::Constant. Allows the MuSig DKG to keep the secret share as the original private key, enabling deriving FROST nonces consistently regardless of the MuSig context.
* Get coordinator tests to pass
* Update spec to the new DKG
* Get clippy to pass across the repo
* cargo machete
* Add an extra sleep to ensure expected ordering of `Participation`s
* Update orchestration
* Remove bad panic in coordinator. It expected ConfirmationShare to be n-of-n, not t-of-n.
* Improve documentation on functions
* Update TX size limit. We no longer have to support the ridiculous case of having 49 DKG participations within a 101-of-150 DKG. It does remain quite high due to needing to _sign_ so many times. It may be optimal for parties with multiple key shares to independently send their preprocesses/shares (despite the overhead that'll cause with signatures and the transaction structure).
* Correct error in the Processor spec document
* Update a few comments in the validator-sets pallet
* Send/Recv Participation one at a time. Sending all, then attempting to receive all in an expected order, wasn't working even with notable delays between sending messages. This points to the mempool not working as expected...
* Correct ThresholdKeys serialization in modular-frost test
* Update the existing TX size limit test for the new DKG parameters
* Increase time allowed for the DKG on the GH CI
* Correct construction of signature_participants in serai-client tests. Fault identified by akil.
* Further contextualize DkgConfirmer by ValidatorSet. Caught by a safety check that we don't reuse preprocesses across messages. That raises the question of whether we were previously reusing preprocesses (reusing keys)? Except that'd have caused a variety of signing failures (suggesting we had some staggered timing avoiding it in practice, but yes, this was possible in theory).
* Add necessary calls to set_embedded_elliptic_curve_key in coordinator set rotation tests
* Correct shimmed setting of a secq256k1 key
* cargo fmt
* Don't use `[0; 32]` for the embedded keys in the coordinator rotation test. The key_gen function expects the random values to already be decided.
* Big-endian secq256k1 scalars. Also restores the prior, safer, Encryption::register function.
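One bullet above notes that `set_keys` now takes `signature_participants`, encoded as a BitVec with one bit per participant. As a rough, hypothetical illustration of that encoding only — a hand-rolled bitmap, not Serai's actual `BitVec` type or API:

```rust
// Hypothetical sketch of a one-bit-per-participant encoding, in the
// spirit of the `signature_participants` BitVec described above.
// Bit i (little-endian within each byte) marks whether participant i
// published a signature.
fn encode_participants(n: usize, signers: &[usize]) -> Vec<u8> {
  let mut bits = vec![0u8; n.div_ceil(8)];
  for &i in signers {
    assert!(i < n, "signer index out of range");
    bits[i / 8] |= 1 << (i % 8);
  }
  bits
}

fn decode_participants(n: usize, bits: &[u8]) -> Vec<usize> {
  (0 .. n).filter(|&i| (bits[i / 8] >> (i % 8)) & 1 == 1).collect()
}
```

Ten participants cost two bytes this way, rather than a list of indices — the "efficient encoding" the changelog refers to.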
674 lines | 19 KiB | Rust
// TODO: Generate randomized RPC credentials for all services
// TODO: Generate keys for a validator and the infra

use core::ops::Deref;
use std::{
  collections::{HashSet, HashMap},
  env,
  path::PathBuf,
  io::Write,
  fs,
  process::{Stdio, Command},
};

use zeroize::Zeroizing;

use rand_core::{RngCore, SeedableRng, OsRng};
use rand_chacha::ChaCha20Rng;

use transcript::{Transcript, RecommendedTranscript};

use ciphersuite::{
  group::{
    ff::{Field, PrimeField},
    GroupEncoding,
  },
  Ciphersuite, Ristretto,
};
use embedwards25519::Embedwards25519;
use secq256k1::Secq256k1;

mod mimalloc;
use mimalloc::mimalloc;

mod networks;
use networks::*;

mod ethereum_relayer;
use ethereum_relayer::ethereum_relayer;

mod message_queue;
use message_queue::message_queue;

mod processor;
use processor::processor;

mod coordinator;
use coordinator::coordinator;

mod serai;
use serai::serai;

mod docker;

#[global_allocator]
static ALLOCATOR: zalloc::ZeroizingAlloc<std::alloc::System> =
  zalloc::ZeroizingAlloc(std::alloc::System);

#[derive(Clone, Copy, PartialEq, Eq, Debug, PartialOrd, Ord, Hash)]
pub enum Network {
  Dev,
  Testnet,
}

impl Network {
  pub fn db(&self) -> &'static str {
    match self {
      Network::Dev => "parity-db",
      Network::Testnet => "rocksdb",
    }
  }

  pub fn release(&self) -> bool {
    match self {
      Network::Dev => false,
      Network::Testnet => true,
    }
  }

  pub fn label(&self) -> &'static str {
    match self {
      Network::Dev => "dev",
      Network::Testnet => "testnet",
    }
  }
}

#[derive(Clone, Copy, PartialEq, Eq, Debug, PartialOrd, Ord, Hash)]
enum Os {
  Alpine,
  Debian,
}

fn os(os: Os, additional_root: &str, user: &str) -> String {
  match os {
    Os::Alpine => format!(
      r#"
FROM alpine:latest as image

COPY --from=mimalloc-alpine libmimalloc.so /usr/lib
ENV LD_PRELOAD=libmimalloc.so

RUN apk update && apk upgrade

RUN adduser --system --shell /sbin/nologin --disabled-password {user}
RUN addgroup {user}
RUN addgroup {user} {user}

# Make the /volume directory and transfer it to the user
RUN mkdir /volume && chown {user}:{user} /volume

{additional_root}

# Switch to a non-root user
USER {user}

WORKDIR /home/{user}
"#
    ),

    Os::Debian => format!(
      r#"
FROM debian:bookworm-slim as image

COPY --from=mimalloc-debian libmimalloc.so /usr/lib
RUN echo "/usr/lib/libmimalloc.so" >> /etc/ld.so.preload

RUN apt update && apt upgrade -y && apt autoremove -y && apt clean

RUN useradd --system --user-group --create-home --shell /sbin/nologin {user}

# Make the /volume directory and transfer it to the user
RUN mkdir /volume && chown {user}:{user} /volume

{additional_root}

# Switch to a non-root user
USER {user}

WORKDIR /home/{user}
"#
    ),
  }
}

fn build_serai_service(prelude: &str, release: bool, features: &str, package: &str) -> String {
  let profile = if release { "release" } else { "debug" };
  let profile_flag = if release { "--release" } else { "" };

  format!(
    r#"
FROM rust:1.80-slim-bookworm as builder

COPY --from=mimalloc-debian libmimalloc.so /usr/lib
RUN echo "/usr/lib/libmimalloc.so" >> /etc/ld.so.preload

RUN apt update && apt upgrade -y && apt autoremove -y && apt clean

# Add dev dependencies
RUN apt install -y pkg-config clang

# Dependencies for the Serai node
RUN apt install -y make protobuf-compiler

# Add the wasm toolchain
RUN rustup target add wasm32-unknown-unknown

{prelude}

# Add files for build
ADD patches /serai/patches
ADD common /serai/common
ADD crypto /serai/crypto
ADD networks /serai/networks
ADD message-queue /serai/message-queue
ADD processor /serai/processor
ADD coordinator /serai/coordinator
ADD substrate /serai/substrate
ADD orchestration/Cargo.toml /serai/orchestration/Cargo.toml
ADD orchestration/src /serai/orchestration/src
ADD mini /serai/mini
ADD tests /serai/tests
ADD Cargo.toml /serai
ADD Cargo.lock /serai
ADD AGPL-3.0 /serai

WORKDIR /serai

# Mount the caches and build
RUN --mount=type=cache,target=/root/.cargo \
  --mount=type=cache,target=/usr/local/cargo/registry \
  --mount=type=cache,target=/usr/local/cargo/git \
  --mount=type=cache,target=/serai/target \
  mkdir /serai/bin && \
  cargo build {profile_flag} --features "{features}" -p {package} && \
  mv /serai/target/{profile}/{package} /serai/bin
"#
  )
}

pub fn write_dockerfile(path: PathBuf, dockerfile: &str) {
  if let Ok(existing) = fs::read_to_string(&path).as_ref() {
    if existing == dockerfile {
      return;
    }
  }
  fs::File::create(path).unwrap().write_all(dockerfile.as_bytes()).unwrap();
}

fn orchestration_path(network: Network) -> PathBuf {
  let mut repo_path = env::current_exe().unwrap();
  repo_path.pop();
  assert!(repo_path.as_path().ends_with("debug"));
  repo_path.pop();
  assert!(repo_path.as_path().ends_with("target"));
  repo_path.pop();

  let mut orchestration_path = repo_path.clone();
  orchestration_path.push("orchestration");
  orchestration_path.push(network.label());
  orchestration_path
}

type InfrastructureKeys =
  HashMap<&'static str, (Zeroizing<<Ristretto as Ciphersuite>::F>, <Ristretto as Ciphersuite>::G)>;
fn infrastructure_keys(network: Network) -> InfrastructureKeys {
  // Generate entropy for the infrastructure keys

  let entropy = if network == Network::Dev {
    // Don't use actual entropy if this is a dev environment
    Zeroizing::new([0; 32])
  } else {
    let path = home::home_dir()
      .unwrap()
      .join(".serai")
      .join(network.label())
      .join("infrastructure_keys_entropy");
    // Check if there's existing entropy
    if let Ok(entropy) = fs::read(&path).map(Zeroizing::new) {
      assert_eq!(entropy.len(), 32, "entropy saved to disk wasn't 32 bytes");
      let mut res = Zeroizing::new([0; 32]);
      res.copy_from_slice(entropy.as_ref());
      res
    } else {
      // If there isn't, generate fresh entropy
      let mut res = Zeroizing::new([0; 32]);
      OsRng.fill_bytes(res.as_mut());
      fs::write(&path, &res).unwrap();
      res
    }
  };

  let mut transcript =
    RecommendedTranscript::new(b"Serai Orchestrator Infrastructure Keys Transcript");
  transcript.append_message(b"network", network.label().as_bytes());
  transcript.append_message(b"entropy", entropy);
  let mut rng = ChaCha20Rng::from_seed(transcript.rng_seed(b"infrastructure_keys"));

  let mut key_pair = || {
    let key = Zeroizing::new(<Ristretto as Ciphersuite>::F::random(&mut rng));
    let public = Ristretto::generator() * key.deref();
    (key, public)
  };

  HashMap::from([
    ("coordinator", key_pair()),
    ("bitcoin", key_pair()),
    ("ethereum", key_pair()),
    ("monero", key_pair()),
  ])
}

struct EmbeddedCurveKeys {
  embedwards25519: (Zeroizing<Vec<u8>>, Vec<u8>),
  secq256k1: (Zeroizing<Vec<u8>>, Vec<u8>),
}

fn embedded_curve_keys(network: Network) -> EmbeddedCurveKeys {
  // Generate entropy for the embedded curve keys

  let entropy = {
    let path = home::home_dir()
      .unwrap()
      .join(".serai")
      .join(network.label())
      .join("embedded_curve_keys_entropy");
    // Check if there's existing entropy
    if let Ok(entropy) = fs::read(&path).map(Zeroizing::new) {
      assert_eq!(entropy.len(), 32, "entropy saved to disk wasn't 32 bytes");
      let mut res = Zeroizing::new([0; 32]);
      res.copy_from_slice(entropy.as_ref());
      res
    } else {
      // If there isn't, generate fresh entropy
      let mut res = Zeroizing::new([0; 32]);
      OsRng.fill_bytes(res.as_mut());
      fs::write(&path, &res).unwrap();
      res
    }
  };

  let mut transcript =
    RecommendedTranscript::new(b"Serai Orchestrator Embedded Curve Keys Transcript");
  transcript.append_message(b"network", network.label().as_bytes());
  transcript.append_message(b"entropy", entropy);
  let mut rng = ChaCha20Rng::from_seed(transcript.rng_seed(b"embedded_curve_keys"));

  EmbeddedCurveKeys {
    embedwards25519: {
      let key = Zeroizing::new(<Embedwards25519 as Ciphersuite>::F::random(&mut rng));
      let pub_key = Embedwards25519::generator() * key.deref();
      (Zeroizing::new(key.to_repr().as_slice().to_vec()), pub_key.to_bytes().to_vec())
    },
    secq256k1: {
      let key = Zeroizing::new(<Secq256k1 as Ciphersuite>::F::random(&mut rng));
      let pub_key = Secq256k1::generator() * key.deref();
      (Zeroizing::new(key.to_repr().as_slice().to_vec()), pub_key.to_bytes().to_vec())
    },
  }
}

fn dockerfiles(network: Network) {
  let orchestration_path = orchestration_path(network);

  bitcoin(&orchestration_path, network);
  ethereum(&orchestration_path, network);
  monero(&orchestration_path, network);
  if network == Network::Dev {
    monero_wallet_rpc(&orchestration_path);
  }

  let mut infrastructure_keys = infrastructure_keys(network);
  let coordinator_key = infrastructure_keys.remove("coordinator").unwrap();
  let bitcoin_key = infrastructure_keys.remove("bitcoin").unwrap();
  let ethereum_key = infrastructure_keys.remove("ethereum").unwrap();
  let monero_key = infrastructure_keys.remove("monero").unwrap();

  ethereum_relayer(&orchestration_path, network);

  message_queue(
    &orchestration_path,
    network,
    coordinator_key.1,
    bitcoin_key.1,
    ethereum_key.1,
    monero_key.1,
  );

  let embedded_curve_keys = embedded_curve_keys(network);
  processor(
    &orchestration_path,
    network,
    "bitcoin",
    coordinator_key.1,
    bitcoin_key.0,
    embedded_curve_keys.embedwards25519.0.clone(),
    embedded_curve_keys.secq256k1.0.clone(),
  );
  processor(
    &orchestration_path,
    network,
    "ethereum",
    coordinator_key.1,
    ethereum_key.0,
    embedded_curve_keys.embedwards25519.0.clone(),
    embedded_curve_keys.secq256k1.0.clone(),
  );
  processor(
    &orchestration_path,
    network,
    "monero",
    coordinator_key.1,
    monero_key.0,
    embedded_curve_keys.embedwards25519.0.clone(),
    embedded_curve_keys.embedwards25519.0.clone(),
  );

  let serai_key = {
    let serai_key = Zeroizing::new(
      fs::read(home::home_dir().unwrap().join(".serai").join(network.label()).join("key"))
        .expect("couldn't read key for this network"),
    );
    let mut serai_key_repr =
      Zeroizing::new(<<Ristretto as Ciphersuite>::F as PrimeField>::Repr::default());
    serai_key_repr.as_mut().copy_from_slice(serai_key.as_ref());
    Zeroizing::new(<Ristretto as Ciphersuite>::F::from_repr(*serai_key_repr).unwrap())
  };

  coordinator(&orchestration_path, network, coordinator_key.0, &serai_key);

  serai(&orchestration_path, network, &serai_key);
}

fn key_gen(network: Network) {
  let serai_dir = home::home_dir().unwrap().join(".serai").join(network.label());
  let key_file = serai_dir.join("key");
  if fs::File::open(&key_file).is_ok() {
    println!("already created key");
    return;
  }

  let key = <Ristretto as Ciphersuite>::F::random(&mut OsRng);

  let _ = fs::create_dir_all(&serai_dir);
  fs::write(key_file, key.to_repr()).expect("couldn't write key");

  // TODO: Move embedded curve key gen here, and print them
  println!(
    "Public Key: {}",
    hex::encode((<Ristretto as Ciphersuite>::generator() * key).to_bytes())
  );
}

fn start(network: Network, services: HashSet<String>) {
  // Create the serai network
  Command::new("docker")
    .arg("network")
    .arg("create")
    .arg("--driver")
    .arg("bridge")
    .arg("serai")
    .output()
    .unwrap();

  for service in services {
    println!("Starting {service}");
    let name = match service.as_ref() {
      "serai" => "serai",
      "coordinator" => "coordinator",
      "ethereum-relayer" => "ethereum-relayer",
      "message-queue" => "message-queue",
      "bitcoin-daemon" => "bitcoin",
      "bitcoin-processor" => "bitcoin-processor",
      "monero-daemon" => "monero",
      "monero-processor" => "monero-processor",
      "monero-wallet-rpc" => "monero-wallet-rpc",
      _ => panic!("starting unrecognized service"),
    };

    // If we're building the Serai service, first build the runtime
    let serai_runtime_volume = format!("serai-{}-runtime-volume", network.label());
    if name == "serai" {
      // Check if it's built by checking if the volume has the expected runtime file
      let wasm_build_container_name = format!("serai-{}-runtime", network.label());
      let built = || {
        if let Ok(state_and_status) = Command::new("docker")
          .arg("inspect")
          .arg("-f")
          .arg("{{.State.Status}}:{{.State.ExitCode}}")
          .arg(&wasm_build_container_name)
          .output()
        {
          if let Ok(state_and_status) = String::from_utf8(state_and_status.stdout) {
            return state_and_status.trim() == "exited:0";
          }
        }
        false
      };

      if !built() {
        let mut repo_path = env::current_exe().unwrap();
        repo_path.pop();
        if repo_path.as_path().ends_with("deps") {
          repo_path.pop();
        }
        assert!(repo_path.as_path().ends_with("debug") || repo_path.as_path().ends_with("release"));
        repo_path.pop();
        assert!(repo_path.as_path().ends_with("target"));
        repo_path.pop();

        // Build the image to build the runtime
        if !Command::new("docker")
          .current_dir(&repo_path)
          .arg("build")
          .arg("-f")
          .arg("orchestration/runtime/Dockerfile")
          .arg(".")
          .arg("-t")
          .arg(format!("serai-{}-runtime-img", network.label()))
          .spawn()
          .unwrap()
          .wait()
          .unwrap()
          .success()
        {
          panic!("failed to build runtime image");
        }

        // Run the image, building the runtime
        println!("Building the Serai runtime");
        let container_name = format!("serai-{}-runtime", network.label());
        let _ =
          Command::new("docker").arg("rm").arg("-f").arg(&container_name).spawn().unwrap().wait();
        let _ = Command::new("docker")
          .arg("run")
          .arg("--name")
          .arg(container_name)
          .arg("--volume")
          .arg(format!("{serai_runtime_volume}:/volume"))
          .arg(format!("serai-{}-runtime-img", network.label()))
          .spawn();

        // Wait until it's built
        let mut ticks = 0;
        while !built() {
          std::thread::sleep(core::time::Duration::from_secs(60));
          ticks += 1;
          if ticks > 6 * 60 {
            panic!("couldn't build the runtime after 6 hours")
          }
        }
      }
    }

    // Build it
    println!("Building {service}");
    docker::build(&orchestration_path(network), network, name);

    let docker_name = format!("serai-{}-{name}", network.label());
    let docker_image = format!("{docker_name}-img");
    if !Command::new("docker")
      .arg("container")
      .arg("inspect")
      .arg(&docker_name)
      // Use null for all IO to silence 'container does not exist'
      .stdin(Stdio::null())
      .stdout(Stdio::null())
      .stderr(Stdio::null())
      .status()
      .unwrap()
      .success()
    {
      // Create the docker container
      println!("Creating new container for {service}");
      let volume = format!("serai-{}-{name}-volume:/volume", network.label());
      let mut command = Command::new("docker");
      let command = command.arg("create").arg("--name").arg(&docker_name);
      let command = command.arg("--network").arg("serai");
      let command = command.arg("--restart").arg("always");
      let command = command.arg("--log-opt").arg("max-size=100m");
      let command = command.arg("--log-opt").arg("max-file=3");
      let command = if network == Network::Dev {
        command
      } else {
        // Assign a persistent volume if this isn't for Dev
        command.arg("--volume").arg(volume)
      };
      let command = match name {
        "bitcoin" => {
          // Expose the RPC for tests
          if network == Network::Dev {
            command.arg("-p").arg("8332:8332")
          } else {
            command
          }
        }
        "ethereum-relayer" => {
          // Expose the router command fetch server
          command.arg("-p").arg("20831:20831")
        }
        "monero" => {
          // Expose the RPC for tests
          if network == Network::Dev {
            command.arg("-p").arg("18081:18081")
          } else {
            command
          }
        }
        "monero-wallet-rpc" => {
          assert_eq!(network, Network::Dev, "monero-wallet-rpc is only for dev");
          // Expose the RPC for tests
          command.arg("-p").arg("18082:18082")
        }
        "coordinator" => {
          if network == Network::Dev {
            command
          } else {
            // Publish the port
            command.arg("-p").arg("30563:30563")
          }
        }
        "serai" => {
          let command = command.arg("--volume").arg(format!("{serai_runtime_volume}:/runtime"));
          if network == Network::Dev {
            command
          } else {
            // Publish the port
            command.arg("-p").arg("30333:30333")
          }
        }
        _ => command,
      };
      assert!(
        command.arg(docker_image).status().unwrap().success(),
        "couldn't create the container"
      );
    }

    // Start it
    // TODO: Check it successfully started
    println!("Starting existing container for {service}");
    let _ = Command::new("docker").arg("start").arg(docker_name).output();
  }
}

fn main() {
  let help = || -> ! {
    println!(
      r#"
Serai Orchestrator v0.0.1

Commands:
  key_gen *network*
    Generate a key for the validator.

  setup *network*
    Generate the Dockerfiles for every Serai service.

  start *network* [service1, service2...]
    Start the specified services for the specified network ("dev" or "testnet").

    - `serai`
    - `coordinator`
    - `message-queue`
    - `bitcoin-daemon`
    - `bitcoin-processor`
    - `ethereum-daemon`
    - `ethereum-processor`
    - `ethereum-relayer`
    - `monero-daemon`
    - `monero-processor`
    - `monero-wallet-rpc` (if "dev")

    are valid services.

    `*network*-processor` will automatically start `*network*-daemon`.
"#
    );
    std::process::exit(1);
  };

  let mut args = env::args();
  args.next();
  let command = args.next();
  let network = match args.next().as_ref().map(AsRef::as_ref) {
    Some("dev") => Network::Dev,
    Some("testnet") => Network::Testnet,
    Some(_) => panic!(r#"unrecognized network. only "dev" and "testnet" are recognized"#),
    None => help(),
  };

  match command.as_ref().map(AsRef::as_ref) {
    Some("key_gen") => {
      key_gen(network);
    }
    Some("setup") => {
      dockerfiles(network);
    }
    Some("start") => {
      let mut services = HashSet::new();
      for arg in args {
        if arg == "ethereum-processor" {
          services.insert("ethereum-relayer".to_string());
        }
        if let Some(ext_network) = arg.strip_suffix("-processor") {
          services.insert(ext_network.to_string() + "-daemon");
        }
        services.insert(arg);
      }

      start(network, services);
    }
    _ => help(),
  }
}