From 8f4d6f79f3f833a5a1b1c9da32addcad694824f3 Mon Sep 17 00:00:00 2001
From: Luke Parker
Date: Sat, 3 Dec 2022 18:38:02 -0500
Subject: [PATCH] Initial Tendermint implementation (#145)

* Machine without timeouts
* Time code
* Move substrate/consensus/tendermint to substrate/tendermint
* Delete the old paper doc
* Refactor out external parts to generics Also creates a dedicated file for the message log.
* Refactor to type V, type B
* Successfully compiling
* Calculate timeouts
* Fix test
* Finish timeouts
* Misc cleanup
* Define a signature scheme trait
* Implement serialization via parity's scale codec Ideally, this would be generic. Unfortunately, serde, the generic API, doesn't natively support borsh nor SCALE, and while there is a serde SCALE crate, it's old. While it may be complete, it's not worth working with. While we could still grab bincode, or a variety of other formats, it wasn't worth it to go custom, and for Serai we'll be using SCALE almost everywhere anyway.
* Implement usage of the signature scheme
* Make the infinite test non-infinite
* Provide a dedicated signature in Precommit of just the block hash Greatly simplifies verifying when syncing.
* Dedicated Commit object Restores sig aggregation API.
* Tidy README
* Document tendermint
* Sign the ID directly instead of its SCALE encoding For a hash, which is fixed-size, these should be the same, yet this helps move past the dependency on SCALE. It also smooths integration for any type where the two values are different.
* Litany of bug fixes Also attempts to make the code more readable while updating/correcting documentation.
* Remove async recursion Greatly increases safety as well by ensuring only one message is processed at once.
* Correct timing issues 1) Commit didn't include the round, leaving the clock in question. 2) Machines started with a local time, instead of a proper start time. 3) Machines immediately started the next block instead of waiting for the block time.
* Replace MultiSignature with sr25519::Signature
* Minor SignatureScheme API changes
* Map TM SignatureScheme to Substrate's sr25519
* Initial work on an import queue
* Properly use check_block
* Rename import to import_queue
* Implement tendermint_machine::Block for Substrate Blocks Unfortunately, this immediately makes the Tendermint machine incapable of deployment as a crate since it uses a git reference. In the future, a Cargo.toml patch section for serai/substrate should be investigated. This is being done regardless as it's the quickest way forward and this is for Serai.
* Dummy Weights
* Move documentation to the top of the file
* Move logic into TendermintImport itself Multiple traits exist to verify/handle blocks. I'm unsure exactly when each will be called in the pipeline, so the easiest solution is to have every step run every check. That would be extremely computationally expensive if we ran EVERY check, yet we rely on Substrate for execution (and the corresponding checks), which is limited to just the actual import function. Since we're calling this code from many places, it makes sense for it to be consolidated under TendermintImport.
* BlockImport, JustificationImport, Verifier, and import_queue function
* Update consensus/lib.rs from PoW to Tendermint Not possible to be used as the previous consensus could. It will not produce blocks nor does it currently even instantiate a machine. This is just the next step.
* Update Cargo.tomls for substrate packages
* Tendermint SelectChain This is incompatible with Substrate's expectations, yet should be valid for ours
* Move the node over to the new SelectChain
* Minor tweaks
* Update SelectChain documentation
* Remove substrate/node lib.rs This shouldn't be used as a library AFAIK. While runtime should be, and arguably should even be published, I have yet to see node in the same way. Helps tighten API boundaries.
* Remove unused macro_use
* Replace panicking todos with stubs and // TODO Enables progress.
* Reduce chain_spec and use more accurate naming
* Implement block proposal logic
* Modularize to get_proposal
* Trigger block importing Doesn't wait for the response yet, which it needs to.
* Get the result of block importing
* Split import_queue into a series of files
* Provide a way to create the machine The BasicQueue returned obscures the TendermintImport struct. Accordingly, a Future scoped with access is returned upwards, which when awaited will create the machine. This makes creating the machine optional while maintaining scope boundaries. Is sufficient to create a 1-node net which produces and finalizes blocks.
* Don't import justifications multiple times Also don't broadcast blocks which were solely proposed.
* Correct justification import pipeline Removes JustificationImport as it should never be used.
* Announce blocks By claiming File, they're not sent over the P2P network before they have a justification, as desired. Unfortunately, they never were. This works around that.
* Add an assert to verify proposed children aren't best
* Consolidate C and I generics into a TendermintClient trait alias
* Expand sanity checks Substrate doesn't expect nor officially support children with less work than their parents. It's a trick used here. Accordingly, ensure the trick's validity.
* When resetting, use the end time of the round which was committed to The machine reset to the end time of the current round. For a delayed network connection, a machine may move ahead in rounds and only later realize a prior round succeeded. Despite acknowledging that round's success, it would maintain its delay when moving to the next block, bricking it. Done by tracking the end time for each round as they occur.
* Move Commit from including the round to including the round's end_time The round was usable to build the current clock in an accumulated fashion, relative to the previous round. The end time is the absolute metric of it, which can be used to calculate the round number (with all previous end times). Substrate now builds off the best block, not genesis, using the end time included in the justification to start its machine in a synchronized state. Knowing the end time of a round, or the round in which a block was committed to, is necessary for nodes to sync up with Tendermint. Encoding it in the commit ensures it's long lasting and makes it readily available, without the load of an entire transaction.
* Add a TODO on Tendermint
* Misc bug fixes
* More misc bug fixes
* Clean up lock acquisition
* Merge weights and signing scheme into validators, documenting needed changes
* Add pallet sessions to runtime, create pallet-tendermint
* Update node to use pallet sessions
* Update support URL
* Partial work on correcting pallet calls
* Redo Tendermint folder structure
* TendermintApi, compilation fixes
* Fix the stub round robin At some point, the modulus was removed causing it to exceed the validators list and stop proposing.
* Use the validators list from the session pallet
* Basic Gossip Validator
* Correct Substrate Tendermint start block The Tendermint machine uses the passed-in number as the number of the block being worked on. Substrate passed in the already finalized block's number. Also updates misc comments.
* Clean generics in Tendermint with a monolith with associated types
* Remove the Future triggering the machine for an async fn Enables passing data in, such as the network.
* Move TendermintMachine from start_num, time to last_num, time Provides an explicitly clearer API to program around. Also adds additional time code to handle an edge case.
* Connect the Tendermint machine to a GossipEngine
* Connect broadcast
* Remove machine from TendermintImport It's not used there at all.
* Merge Verifier into block_import.rs These two files were largely the same, just hooking into sync structs with almost identical imports. As this project shapes up, removing dead weight is appreciated.
* Create a dedicated file for being a Tendermint authority
* Deleted comment code related to PoW
* Move serai_runtime specific code from tendermint/client to node Renames serai-consensus to sc_tendermint
* Consolidate file structure in sc_tendermint
* Replace best_* with finalized_* We test their equivalency yet it's still better to use finalized_* in general.
* Consolidate references to sr25519 in sc_tendermint
* Add documentation to public structs/functions in sc_tendermint
* Add another missing comment
* Make sign asynchronous Some relation to https://github.com/serai-dex/serai/issues/95.
* Move sc_tendermint to async sign
* Implement proper checking of inherents
* Take in a Keystore and validator ID
* Remove unnecessary PhantomDatas
* Update node to latest sc_tendermint
* Configure node for a multi-node testnet
* Fix handling of the GossipEngine
* Use a rounded genesis to obtain sufficient synchrony within the Docker env
* Correct Serai d-f names in Docker
* Remove an attempt at caching which I don't believe would ever hit
* Add an already-in-chain check to block import While the inner should do this for us, we call verify_order on our end *before* inner to ensure sequential import. Accordingly, we need to provide our own check. Removes errors of "non-sequential import" when trying to re-import an existing block.
* Update the consensus documentation It was incredibly out of date.
* Add a _ to the validator arg in slash
* Make the dev profile a local testnet profile Restores a dev profile which only has one validator, locally running.
* Reduce Arcs in TendermintMachine, split Signer from SignatureScheme
* Update sc_tendermint per previous commit
* Restore cache
* Remove error case which shouldn't be an error
* Stop returning errors on already existing blocks entirely
* Correct Dave, Eve, and Ferdie to not run as validators
* Rename dev to devnet --dev still works thanks to the |. Achieves a personal preference of mine with some historical meaning.
* Add message expiry to the Tendermint gossip
* Localize the LibP2P protocol to the blockchain Follows convention by doing so. Theoretically enables running multiple blockchains over a single LibP2P connection.
* Add a version to sp-runtime in tendermint-machine
* Add missing trait
* Bump Substrate dependency Fixes #147.
* Implement Schnorr half-aggregation from https://eprint.iacr.org/2021/350.pdf Relevant to https://github.com/serai-dex/serai/issues/99.
* cargo update (tendermint)
* Move from polling loops to a pure IO model for sc_tendermint's gossip
* Correct protocol name handling
* Use futures mpsc instead of tokio
* Timeout futures
* Move from a yielding loop to select in tendermint-machine
* Update Substrate to the new TendermintHandle
* Use futures pin instead of tokio
* Only recheck blocks with non-fatal inherent transaction errors
* Update to the latest substrate
* Separate the block processing time from the latency
* Add notes to the runtime
* Don't spam slash Also adds a slash condition of failing to propose.
* Support running TendermintMachine when not a validator This supports validators who leave the current set, without crashing their nodes, along with nodes trying to become validators (who will now seamlessly transition in).
* Properly define and pass around the block size
* Correct the Duration timing The proposer will build it, send it, then process it (on the first round). Accordingly, it's / 3, not / 2, as / 2 only accounted for the latter events.
* Correct time-adjustment code on round skip
* Have the machine respond to advances made by an external sync loop
* Clean up time code in tendermint-machine
* BlockData and RoundData structs
* Rename Round to RoundNumber
* Move BlockData to a new file
* Move Round to an Option due to the pseudo-uninitialized state we create Before the addition of RoundData, we always created the round, and on .round(0), simply created it again. With RoundData, and the changes to the time code, we used round 0, time 0, the latter being incorrect yet not an issue due to lack of misuse. Now, if we do misuse it, it'll panic.
* Clear the Queue instead of draining and filtering There shouldn't ever be a message which passes the filter under the current design.
* BlockData::new
* Move more code into block.rs Introduces type-aliases to obtain Data/Message/SignedMessage solely from a Network object. Fixes a bug regarding stepping when you're not an active validator.
* Have verify_precommit_signature return if it verified the signature Also fixes a bug where invalid precommit signatures were left standing and therefore contributing to commits.
* Remove the precommit signature hash It cached signatures per-block. Precommit signatures are bound to each round. This would lead to forming invalid commits when a commit should be formed. Under debug, the machine would catch that and panic. On release, it'd have everyone who wasn't a validator fail to continue syncing.
* Slight doc changes Also flattens the message handling function by replacing an if containing all following code in the function with an early return for the else case.
* Always produce notifications for finalized blocks via origin overrides
* Correct weird formatting
* Update to the latest tendermint-machine
* Manually step the Tendermint machine when we synced a block over the network
* Ignore finality notifications for old blocks
* Remove a TODO resolved in 8c51bc011d03c8d54ded05011e7f4d1a01e9f873
* Add a TODO comment to slash Enables searching for the case-sensitive phrase and finding it.
* cargo fmt
* Use a tmp DB for Serai in Docker
* Remove panic on slash As we move towards protonet, this can happen (if a node goes offline), yet it happening brings down the entire net right now.
* Add log::error on slash
* Created shared volume between containers
* Complete the sh scripts
* Pass in the genesis time to Substrate
* Correct block announcements They were announced, yet not marked best.
* Correct populate_end_time It was used as inclusive yet didn't work inclusively.
* Correct gossip channel jumping when a block is synced via Substrate
* Use a looser check in import_future This triggered so it needs to be accordingly relaxed.
* Correct race conditions between add_block and step Also corrects a <= to <.
* Update cargo deny
* Rename genesis-service to genesis
* Update Cargo.lock
* Correct runtime Cargo.toml whitespace
* Correct typo
* Document recheck
* Misc lints
* Fix prev commit
* Resolve low-hanging review comments
* Mark genesis/entry-dev.sh as executable
* Prevent a commit from including the same signature multiple times Yanks tendermint-machine 0.1.0 accordingly.
* Update to latest nightly clippy
* Improve documentation
* Use clearer variable names
* Add log statements
* Pair more log statements
* Clean TendermintAuthority::authority as much as possible Merges it into new. It has way too many arguments, yet there's no clear path to consolidation there, unfortunately. Additionally provides better scoping within itself.
* Fix #158 Doesn't use lock_import_and_run for reasons commented (lack of async).
* Rename guard to lock
* Have the devnet use the current time as the genesis Possible since it's only a single node, not requiring synchronization.
* Fix gossiping I really don't know what side effect this avoids and I can't say I care at this point.
* Misc lints

Co-authored-by: vrx00
Co-authored-by: TheArchitect108
---
 Cargo.lock                                     | 199 +++---
 Cargo.toml                                     |   6 +-
 deny.toml                                      |   5 +-
 deploy/docker-compose.yml                      |  42 +-
 deploy/genesis/Dockerfile                      |   5 +
 deploy/genesis/entry-dev.sh                    |   7 +
 deploy/serai/Dockerfile                        |   1 -
 deploy/serai/scripts/entry-dev.sh              |   6 +-
 docs/Serai.md                                  |   6 +-
 docs/protocol/Consensus.md                     |  43 +-
 substrate/consensus/Cargo.toml                 |  38 --
 substrate/consensus/src/algorithm.rs           |  27 -
 substrate/consensus/src/lib.rs                 | 124 ----
 substrate/node/Cargo.toml                      |  55 +-
 substrate/node/src/chain_spec.rs               | 106 ++-
 substrate/node/src/command.rs                  |  26 +-
 substrate/node/src/lib.rs                      |   3 -
 substrate/node/src/main.rs                     |   1 -
 substrate/node/src/service.rs                  | 205 ++++--
 substrate/runtime/Cargo.toml                   |  49 +-
 substrate/runtime/src/lib.rs                   |  70 +-
 substrate/tendermint/client/Cargo.toml         |  48 ++
 .../{consensus => tendermint/client}/LICENSE   |   0
 substrate/tendermint/client/src/authority/gossip.rs | 67 ++
 .../client/src/authority/import_future.rs      |  72 ++
 .../tendermint/client/src/authority/mod.rs     | 494 ++++++++++++++
 .../tendermint/client/src/block_import.rs      | 182 +++++
 substrate/tendermint/client/src/lib.rs         | 163 +++++
 substrate/tendermint/client/src/tendermint.rs  | 247 +++++++
 substrate/tendermint/client/src/validators.rs  | 190 ++++++
 substrate/tendermint/machine/Cargo.toml        |  24 +
 substrate/tendermint/machine/LICENSE           |  21 +
 substrate/tendermint/machine/README.md         |  62 ++
 substrate/tendermint/machine/src/block.rs      | 143 ++++
 substrate/tendermint/machine/src/ext.rs        | 274 ++++++++
 substrate/tendermint/machine/src/lib.rs        | 638 ++++++++++++++++++
 .../tendermint/machine/src/message_log.rs      | 108 +++
 substrate/tendermint/machine/src/round.rs      |  83 +++
 substrate/tendermint/machine/src/time.rs       |  44 ++
 substrate/tendermint/machine/tests/ext.rs      | 176 +++++
 substrate/tendermint/pallet/Cargo.toml         |  38 ++
 substrate/tendermint/pallet/LICENSE            |  15 +
 substrate/tendermint/pallet/src/lib.rs         |  75 ++
 substrate/tendermint/primitives/Cargo.toml     |  21 +
 substrate/tendermint/primitives/LICENSE        |  15 +
 substrate/tendermint/primitives/src/lib.rs     |  16 +
 46 files changed, 3792 insertions(+), 448 deletions(-)
 create mode 100644 deploy/genesis/Dockerfile
create mode 100755 deploy/genesis/entry-dev.sh delete mode 100644 substrate/consensus/Cargo.toml delete mode 100644 substrate/consensus/src/algorithm.rs delete mode 100644 substrate/consensus/src/lib.rs delete mode 100644 substrate/node/src/lib.rs create mode 100644 substrate/tendermint/client/Cargo.toml rename substrate/{consensus => tendermint/client}/LICENSE (100%) create mode 100644 substrate/tendermint/client/src/authority/gossip.rs create mode 100644 substrate/tendermint/client/src/authority/import_future.rs create mode 100644 substrate/tendermint/client/src/authority/mod.rs create mode 100644 substrate/tendermint/client/src/block_import.rs create mode 100644 substrate/tendermint/client/src/lib.rs create mode 100644 substrate/tendermint/client/src/tendermint.rs create mode 100644 substrate/tendermint/client/src/validators.rs create mode 100644 substrate/tendermint/machine/Cargo.toml create mode 100644 substrate/tendermint/machine/LICENSE create mode 100644 substrate/tendermint/machine/README.md create mode 100644 substrate/tendermint/machine/src/block.rs create mode 100644 substrate/tendermint/machine/src/ext.rs create mode 100644 substrate/tendermint/machine/src/lib.rs create mode 100644 substrate/tendermint/machine/src/message_log.rs create mode 100644 substrate/tendermint/machine/src/round.rs create mode 100644 substrate/tendermint/machine/src/time.rs create mode 100644 substrate/tendermint/machine/tests/ext.rs create mode 100644 substrate/tendermint/pallet/Cargo.toml create mode 100644 substrate/tendermint/pallet/LICENSE create mode 100644 substrate/tendermint/pallet/src/lib.rs create mode 100644 substrate/tendermint/primitives/Cargo.toml create mode 100644 substrate/tendermint/primitives/LICENSE create mode 100644 substrate/tendermint/primitives/src/lib.rs diff --git a/Cargo.lock b/Cargo.lock index b12b74e3..76ffe271 100644 --- a/Cargo.lock +++ b/Cargo.lock @@ -2663,21 +2663,6 @@ dependencies = [ "sp-weights", ] -[[package]] -name = "frame-system-benchmarking" -version = "4.0.0-dev" -source = "git+https://github.com/serai-dex/substrate#881cfbc59c8b65bcccc9fa6187e5096ac3594e3a" -dependencies = [ - "frame-benchmarking", - "frame-support", - "frame-system", - "parity-scale-codec", - "scale-info", - "sp-core", - "sp-runtime", - "sp-std", -] - [[package]] name = "frame-system-rpc-runtime-api" version = "4.0.0-dev" @@ -5206,6 +5191,40 @@ dependencies = [ "sp-std", ] +[[package]] +name = "pallet-session" +version = "4.0.0-dev" +source = "git+https://github.com/serai-dex/substrate#881cfbc59c8b65bcccc9fa6187e5096ac3594e3a" +dependencies = [ + "frame-support", + "frame-system", + "impl-trait-for-tuples", + "log", + "pallet-timestamp", + "parity-scale-codec", + "scale-info", + "sp-core", + "sp-io", + "sp-runtime", + "sp-session", + "sp-staking", + "sp-std", + "sp-trie", +] + +[[package]] +name = "pallet-tendermint" +version = "0.1.0" +dependencies = [ + "frame-support", + "frame-system", + "parity-scale-codec", + "scale-info", + "sp-application-crypto", + "sp-core", + "sp-std", +] + [[package]] name = "pallet-timestamp" version = "4.0.0-dev" @@ -6701,31 +6720,6 @@ dependencies = [ "thiserror", ] -[[package]] -name = "sc-consensus-pow" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#881cfbc59c8b65bcccc9fa6187e5096ac3594e3a" -dependencies = [ - "async-trait", - "futures", - "futures-timer", - "log", - "parity-scale-codec", - "parking_lot 0.12.1", - "sc-client-api", - "sc-consensus", - "sp-api", - "sp-block-builder", - "sp-blockchain", - "sp-consensus", - 
"sp-consensus-pow", - "sp-core", - "sp-inherents", - "sp-runtime", - "substrate-prometheus-endpoint", - "thiserror", -] - [[package]] name = "sc-executor" version = "0.10.0-dev" @@ -6928,6 +6922,24 @@ dependencies = [ "thiserror", ] +[[package]] +name = "sc-network-gossip" +version = "0.10.0-dev" +source = "git+https://github.com/serai-dex/substrate#881cfbc59c8b65bcccc9fa6187e5096ac3594e3a" +dependencies = [ + "ahash", + "futures", + "futures-timer", + "libp2p", + "log", + "lru", + "sc-network-common", + "sc-peerset", + "sp-runtime", + "substrate-prometheus-endpoint", + "tracing", +] + [[package]] name = "sc-network-light" version = "0.10.0-dev" @@ -7257,6 +7269,38 @@ dependencies = [ "wasm-timer", ] +[[package]] +name = "sc-tendermint" +version = "0.1.0" +dependencies = [ + "async-trait", + "futures", + "hex", + "log", + "sc-block-builder", + "sc-client-api", + "sc-consensus", + "sc-executor", + "sc-network", + "sc-network-common", + "sc-network-gossip", + "sc-service", + "sc-transaction-pool", + "sp-api", + "sp-application-crypto", + "sp-blockchain", + "sp-consensus", + "sp-core", + "sp-inherents", + "sp-keystore", + "sp-runtime", + "sp-staking", + "sp-tendermint", + "substrate-prometheus-endpoint", + "tendermint-machine", + "tokio", +] + [[package]] name = "sc-tracing" version = "4.0.0-dev" @@ -7559,30 +7603,6 @@ version = "0.5.0" source = "registry+https://github.com/rust-lang/crates.io-index" checksum = "930c0acf610d3fdb5e2ab6213019aaa04e227ebe9547b0649ba599b16d788bd7" -[[package]] -name = "serai-consensus" -version = "0.1.0" -dependencies = [ - "sc-basic-authorship", - "sc-client-api", - "sc-consensus", - "sc-consensus-pow", - "sc-executor", - "sc-network", - "sc-service", - "sc-transaction-pool", - "serai-runtime", - "sp-api", - "sp-consensus", - "sp-consensus-pow", - "sp-core", - "sp-runtime", - "sp-timestamp", - "sp-trie", - "substrate-prometheus-endpoint", - "tokio", -] - [[package]] name = "serai-extension" version = "0.1.0" @@ -7611,33 +7631,43 @@ dependencies = [ name = "serai-node" version = "0.1.0" dependencies = [ + "async-trait", "clap 4.0.26", "frame-benchmarking", "frame-benchmarking-cli", "frame-system", "jsonrpsee", + "log", + "pallet-tendermint", "pallet-transaction-payment", "pallet-transaction-payment-rpc", + "sc-basic-authorship", "sc-cli", "sc-client-api", + "sc-client-db", "sc-consensus", "sc-executor", "sc-keystore", + "sc-network", "sc-rpc", "sc-rpc-api", "sc-service", "sc-telemetry", + "sc-tendermint", "sc-transaction-pool", "sc-transaction-pool-api", - "serai-consensus", "serai-runtime", "sp-api", + "sp-application-crypto", "sp-block-builder", "sp-blockchain", + "sp-consensus", "sp-core", "sp-inherents", "sp-keyring", + "sp-keystore", "sp-runtime", + "sp-tendermint", "sp-timestamp", "substrate-build-script-utils", "substrate-frame-rpc-system", @@ -7672,19 +7702,21 @@ dependencies = [ "frame-executive", "frame-support", "frame-system", - "frame-system-benchmarking", "frame-system-rpc-runtime-api", "hex-literal", "pallet-balances", "pallet-contracts", "pallet-contracts-primitives", "pallet-randomness-collective-flip", + "pallet-session", + "pallet-tendermint", "pallet-timestamp", "pallet-transaction-payment", "pallet-transaction-payment-rpc-runtime-api", "parity-scale-codec", "scale-info", "sp-api", + "sp-application-crypto", "sp-block-builder", "sp-core", "sp-inherents", @@ -7692,6 +7724,7 @@ dependencies = [ "sp-runtime", "sp-session", "sp-std", + "sp-tendermint", "sp-transaction-pool", "sp-version", "substrate-wasm-builder", @@ -8081,18 +8114,6 @@ 
dependencies = [ "thiserror", ] -[[package]] -name = "sp-consensus-pow" -version = "0.10.0-dev" -source = "git+https://github.com/serai-dex/substrate#881cfbc59c8b65bcccc9fa6187e5096ac3594e3a" -dependencies = [ - "parity-scale-codec", - "sp-api", - "sp-core", - "sp-runtime", - "sp-std", -] - [[package]] name = "sp-core" version = "6.0.0" @@ -8450,6 +8471,15 @@ dependencies = [ "sp-std", ] +[[package]] +name = "sp-tendermint" +version = "0.1.0" +dependencies = [ + "sp-api", + "sp-core", + "sp-std", +] + [[package]] name = "sp-timestamp" version = "4.0.0-dev" @@ -8887,6 +8917,19 @@ dependencies = [ "winapi", ] +[[package]] +name = "tendermint-machine" +version = "0.1.0" +dependencies = [ + "async-trait", + "futures", + "log", + "parity-scale-codec", + "sp-runtime", + "thiserror", + "tokio", +] + [[package]] name = "term" version = "0.7.0" diff --git a/Cargo.toml b/Cargo.toml index b3fdb0c6..4fce7c27 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -21,8 +21,12 @@ members = [ "processor", + "substrate/tendermint/machine", + "substrate/tendermint/primitives", + "substrate/tendermint/client", + "substrate/tendermint/pallet", + "substrate/runtime", - "substrate/consensus", "substrate/node", "contracts/extension", diff --git a/deny.toml b/deny.toml index cea6b938..0909db18 100644 --- a/deny.toml +++ b/deny.toml @@ -45,8 +45,11 @@ exceptions = [ { allow = ["AGPL-3.0"], name = "serai-extension" }, { allow = ["AGPL-3.0"], name = "serai-multisig" }, + { allow = ["AGPL-3.0"], name = "sp-tendermint" }, + { allow = ["AGPL-3.0"], name = "pallet-tendermint" }, + { allow = ["AGPL-3.0"], name = "sc-tendermint" }, + { allow = ["AGPL-3.0"], name = "serai-runtime" }, - { allow = ["AGPL-3.0"], name = "serai-consensus" }, { allow = ["AGPL-3.0"], name = "serai-node" }, ] diff --git a/deploy/docker-compose.yml b/deploy/docker-compose.yml index 6f1e4855..23cae952 100644 --- a/deploy/docker-compose.yml +++ b/deploy/docker-compose.yml @@ -22,9 +22,17 @@ volumes: serai-dave: serai-eve: serai-ferdie: - + genesis-volume: services: + genesis: + image: genesis:dev + build: + context: ./genesis/ + dockerfile: Dockerfile + entrypoint: /entry-dev.sh + volumes: + - genesis-volume:/temp _serai: &serai_defaults @@ -40,6 +48,9 @@ services: entrypoint: /scripts/entry-dev.sh volumes: - "./serai/scripts:/scripts" + - genesis-volume:/temp + depends_on: + - genesis serai-base: <<: *serai_defaults @@ -47,7 +58,7 @@ services: profiles: - base environment: - CHAIN: dev + CHAIN: local NAME: base serai-alice: @@ -60,8 +71,8 @@ services: - cluster-lg - cluster-coins-lg environment: - CHAIN: dev - NAME: Alice + CHAIN: local + NAME: alice VALIDATOR: true serai-bob: @@ -74,8 +85,9 @@ services: - cluster-lg - cluster-coins-lg environment: - CHAIN: dev - NAME: Bob + CHAIN: local + NAME: bob + VALIDATOR: true serai-charlie: <<: *serai_defaults @@ -87,8 +99,9 @@ services: - cluster-lg - cluster-coins-lg environment: - CHAIN: dev - NAME: Charlie + CHAIN: local + NAME: charlie + VALIDATOR: true serai-dave: <<: *serai_defaults @@ -98,8 +111,8 @@ services: - cluster-lg - cluster-coins-lg environment: - CHAIN: dev - NAME: Dave + CHAIN: local + NAME: dave serai-eve: <<: *serai_defaults @@ -109,8 +122,8 @@ services: - cluster-lg - cluster-coins-lg environment: - CHAIN: dev - NAME: Eve + CHAIN: local + NAME: eve serai-ferdie: <<: *serai_defaults @@ -120,8 +133,9 @@ services: - cluster-lg - cluster-coins-lg environment: - CHAIN: dev - NAME: Ferdie + CHAIN: local + NAME: ferdie + # Processor Services # Coin Services diff --git a/deploy/genesis/Dockerfile 
b/deploy/genesis/Dockerfile new file mode 100644 index 00000000..014642d9 --- /dev/null +++ b/deploy/genesis/Dockerfile @@ -0,0 +1,5 @@ +FROM alpine + +COPY entry-dev.sh / + +ENTRYPOINT ["entry-dev.sh"] diff --git a/deploy/genesis/entry-dev.sh b/deploy/genesis/entry-dev.sh new file mode 100755 index 00000000..46ff6d60 --- /dev/null +++ b/deploy/genesis/entry-dev.sh @@ -0,0 +1,7 @@ +#!/bin/sh + +date +%s > /temp/genesis +GENESIS=$(cat /temp/genesis) +echo "Genesis: $GENESIS" + +tail -f /dev/null diff --git a/deploy/serai/Dockerfile b/deploy/serai/Dockerfile index 05815234..57cebf79 100644 --- a/deploy/serai/Dockerfile +++ b/deploy/serai/Dockerfile @@ -33,7 +33,6 @@ RUN --mount=type=cache,target=/usr/local/cargo/git \ --mount=type=cache,target=/serai/target/release/lib* \ cargo build --release - # Prepare Image FROM ubuntu:latest as image LABEL description="STAGE 2: Copy and Run" diff --git a/deploy/serai/scripts/entry-dev.sh b/deploy/serai/scripts/entry-dev.sh index 5e8353b9..2947351c 100755 --- a/deploy/serai/scripts/entry-dev.sh +++ b/deploy/serai/scripts/entry-dev.sh @@ -1,6 +1,8 @@ #!/bin/bash + +export GENESIS=$(cat /temp/genesis) if [[ -z $VALIDATOR ]]; then - serai-node --chain $CHAIN --name $NAME + serai-node --tmp --chain $CHAIN --name $NAME else - serai-node --chain $CHAIN --name $NAME --validator + serai-node --tmp --chain $CHAIN --$NAME fi diff --git a/docs/Serai.md b/docs/Serai.md index 23b2269b..683bc897 100644 --- a/docs/Serai.md +++ b/docs/Serai.md @@ -1,8 +1,8 @@ # Serai -Serai is a decentralization execution layer whose validators form multisig -wallets for various connected networks, offering secure decentralized custody of -foreign assets to applications built on it. +Serai is a decentralized execution layer whose validators form multisig wallets +for various connected networks, offering secure decentralized custody of foreign +assets to applications built on it. Serai is exemplified by Serai DEX, an automated-market-maker (AMM) decentralized exchange, allowing swapping BTC, ETH, USDC, DAI, and XMR. It is the premier diff --git a/docs/protocol/Consensus.md b/docs/protocol/Consensus.md index ddb42e96..46616ec3 100644 --- a/docs/protocol/Consensus.md +++ b/docs/protocol/Consensus.md @@ -1,25 +1,32 @@ -# Oraclization (message) - -`Oraclization` messages are published by the current block producer and communicate an external event being communicated to the native chain. This is presumably some other cryptocurrency, such as BTC, being sent to the multisig wallet, triggering a privileged call enabling relevant actions. - -# Report (message) - -`Report` reports a validator for malicious or invalid behavior. This may be publishing a false `Oraclization` or failing to participate as expected. These apply a penalty to the validator's assigned rewards, which is distinct from the bond which must be kept as a multiple of 1m. If the amount deducted exceeds their assigned rewards, they are scheduled for removal with an appropriately reduced bond. - # Consensus -Consensus is a modified Aura implementation with the following notes: +### Inherent Transactions -- Stateful nodes exist in two forms. Contextless and contextual. -- Context is inserted by external programs which are run under the same umbrella as the node and trusted. -- Contextless nodes do not perform verification beyond technical validity on `Oraclization` and `Report`. -- Contextual nodes do perform verification on `Oraclization` and `Report` and will reject transactions which do not represent the actual context. 
-- If a block is finalized under Aura, contextual checks are always stated to be passing, even if the local context conflicts with it. +Inherent transactions are a feature of Substrate enabling block producers to +include transactions without overhead. This enables forming a leader protocol +for including various forms of information on chain, such as In Instruction. By +having a single node include the data, we prevent having pointless replicas on +chain. -Since validators will not accept a block which breaks context, it will never be finalized, bypassing the contextual checks. If validators do finalize a block which seemingly breaks context, the majority of validators are saying it doesn't, signifying a locally invalid context state (perhaps simply one which is behind). By disabling contextual checks accordingly, nodes can still keep up to date with the chain and validate/participate in other contextual areas (assuming only one local contextual area is invalid). +In order to ensure the validity of the inherent transactions, the consensus +process validates them. Under Substrate, a block with inherents is checked by +all nodes, and independently accepted or rejected. Under Serai, a block with +inherents is checked by the validators, and if a BFT majority of validators +agree it's legitimate, it is, regardless of the node's perception. -By moving context based checks into consensus itself, we allow transforming the `Oraclization` and `Report` messages into a leader protocol. Instead of every validator publishing their own message and waiting for the chain's implementation to note 66% of the weight agrees on the duplicated messages, the validators agreeing on the block, which already happens under BFT consensus, ensures message accuracy. +### Consensus -Aura may be further optimizable by moving to either BLS or FROST signatures. BLS is easy to work with yet has a significance performance overhead. Considering we already have FROST, it may be ideal to use, yet it is a 2-round protocol which exponentially scales for key generation. While GRANDPA, an alternative consensus protocol, is 2-round and therefore could be seamlessly extended with FROST, it's not used here as it finalizes multiple blocks at a time. Given the contextual validity checks, it's simplest to finalize each block on their own to prevent malicious/improper chains from growing too large. +Serai uses Tendermint to obtain consensus on its blockchain. Tendermint details +both block production and finalization, finalizing each block as it's produced. -If the complexity challenge can be overcame, BABE's VRF selecting a block producer should be used to limit DoS attacks. The main issue is that BABE is traditionally partnered with GRANDPA and represents a more complex system than Aura. Further research is needed here. +Validators operate contextually. They are expected to know how to create +inherent transactions and actually do so, additionally verifying inherent +transactions proposed by other nodes. Verification comes from ensuring perfect +consistency with what the validator would've proposed themselves. + +While Substrate prefers block production and finalization to be distinct, such +a model would allow unchecked inherent transactions to proliferate on Serai. +Since inherent transactions detail the flow of external funds in relation to +Serai, any operations on such blocks would be unsafe to a potentially fatal +degree. 
Accordingly, re-bundling the two to ensure the only data in the system +is that which has been fully checked was decided as the best move forward. diff --git a/substrate/consensus/Cargo.toml b/substrate/consensus/Cargo.toml deleted file mode 100644 index 892b648e..00000000 --- a/substrate/consensus/Cargo.toml +++ /dev/null @@ -1,38 +0,0 @@ -[package] -name = "serai-consensus" -version = "0.1.0" -description = "Serai consensus module" -license = "AGPL-3.0-only" -repository = "https://github.com/serai-dex/serai/tree/develop/substrate/consensus" -authors = ["Luke Parker "] -edition = "2021" -publish = false - -[package.metadata.docs.rs] -all-features = true -rustdoc-args = ["--cfg", "docsrs"] - -[dependencies] -sp-core = { git = "https://github.com/serai-dex/substrate" } -sp-trie = { git = "https://github.com/serai-dex/substrate" } -sp-timestamp = { git = "https://github.com/serai-dex/substrate" } -sc-consensus = { git = "https://github.com/serai-dex/substrate" } -sp-consensus = { git = "https://github.com/serai-dex/substrate" } -sc-transaction-pool = { git = "https://github.com/serai-dex/substrate" } -sc-basic-authorship = { git = "https://github.com/serai-dex/substrate" } -sc-consensus-pow = { git = "https://github.com/serai-dex/substrate" } -sp-consensus-pow = { git = "https://github.com/serai-dex/substrate" } - -sc-network = { git = "https://github.com/serai-dex/substrate" } -sc-service = { git = "https://github.com/serai-dex/substrate", features = ["wasmtime"] } -sc-executor = { git = "https://github.com/serai-dex/substrate", features = ["wasmtime"] } -sp-runtime = { git = "https://github.com/serai-dex/substrate" } - -substrate-prometheus-endpoint = { git = "https://github.com/serai-dex/substrate" } - -sc-client-api = { git = "https://github.com/serai-dex/substrate" } -sp-api = { git = "https://github.com/serai-dex/substrate" } - -serai-runtime = { path = "../runtime" } - -tokio = "1" diff --git a/substrate/consensus/src/algorithm.rs b/substrate/consensus/src/algorithm.rs deleted file mode 100644 index 9d51e4e1..00000000 --- a/substrate/consensus/src/algorithm.rs +++ /dev/null @@ -1,27 +0,0 @@ -use sp_core::U256; - -use sc_consensus_pow::{Error, PowAlgorithm}; -use sp_consensus_pow::Seal; - -use sp_runtime::{generic::BlockId, traits::Block as BlockT}; - -#[derive(Clone)] -pub struct AcceptAny; -impl PowAlgorithm for AcceptAny { - type Difficulty = U256; - - fn difficulty(&self, _: B::Hash) -> Result> { - Ok(U256::one()) - } - - fn verify( - &self, - _: &BlockId, - _: &B::Hash, - _: Option<&[u8]>, - _: &Seal, - _: Self::Difficulty, - ) -> Result> { - Ok(true) - } -} diff --git a/substrate/consensus/src/lib.rs b/substrate/consensus/src/lib.rs deleted file mode 100644 index 763e6ff4..00000000 --- a/substrate/consensus/src/lib.rs +++ /dev/null @@ -1,124 +0,0 @@ -use std::{marker::Sync, sync::Arc, time::Duration}; - -use substrate_prometheus_endpoint::Registry; - -use sc_consensus_pow as sc_pow; -use sc_executor::NativeElseWasmExecutor; -use sc_service::TaskManager; - -use serai_runtime::{self, opaque::Block, RuntimeApi}; - -mod algorithm; - -pub struct ExecutorDispatch; -impl sc_executor::NativeExecutionDispatch for ExecutorDispatch { - #[cfg(feature = "runtime-benchmarks")] - type ExtendHostFunctions = frame_benchmarking::benchmarking::HostFunctions; - #[cfg(not(feature = "runtime-benchmarks"))] - type ExtendHostFunctions = (); - - fn dispatch(method: &str, data: &[u8]) -> Option> { - serai_runtime::api::dispatch(method, data) - } - - fn native_version() -> sc_executor::NativeVersion { - 
serai_runtime::native_version() - } -} - -pub type FullClient = - sc_service::TFullClient>; - -type Db = sp_trie::PrefixedMemoryDB; - -pub fn import_queue + 'static>( - task_manager: &TaskManager, - client: Arc, - select_chain: S, - registry: Option<&Registry>, -) -> Result, sp_consensus::Error> { - let pow_block_import = Box::new(sc_pow::PowBlockImport::new( - client.clone(), - client, - algorithm::AcceptAny, - 0, - select_chain, - |_, _| async { Ok(sp_timestamp::InherentDataProvider::from_system_time()) }, - )); - - sc_pow::import_queue( - pow_block_import, - None, - algorithm::AcceptAny, - &task_manager.spawn_essential_handle(), - registry, - ) -} - -// Produce a block every 5 seconds -async fn produce< - Block: sp_api::BlockT, - Algorithm: sc_pow::PowAlgorithm + Send + Sync + 'static, - C: sp_api::ProvideRuntimeApi + 'static, - Link: sc_consensus::JustificationSyncLink + 'static, - P: Send + 'static, ->( - worker: sc_pow::MiningHandle, -) where - sp_api::TransactionFor: Send + 'static, -{ - loop { - let worker_clone = worker.clone(); - std::thread::spawn(move || { - tokio::runtime::Runtime::new().unwrap().handle().block_on(async { - worker_clone.submit(vec![]).await; - }); - }); - tokio::time::sleep(Duration::from_secs(6)).await; - } -} - -// If we're an authority, produce blocks -pub fn authority + 'static>( - task_manager: &TaskManager, - client: Arc, - network: Arc::Hash>>, - pool: Arc>, - select_chain: S, - registry: Option<&Registry>, -) { - let proposer = sc_basic_authorship::ProposerFactory::new( - task_manager.spawn_handle(), - client.clone(), - pool, - registry, - None, - ); - - let pow_block_import = Box::new(sc_pow::PowBlockImport::new( - client.clone(), - client.clone(), - algorithm::AcceptAny, - 0, // Block to start checking inherents at - select_chain.clone(), - move |_, _| async { Ok(sp_timestamp::InherentDataProvider::from_system_time()) }, - )); - - let (worker, worker_task) = sc_pow::start_mining_worker( - pow_block_import, - client, - select_chain, - algorithm::AcceptAny, - proposer, - network.clone(), - network, - None, - move |_, _| async { Ok(sp_timestamp::InherentDataProvider::from_system_time()) }, - Duration::from_secs(6), - Duration::from_secs(2), - ); - - task_manager.spawn_essential_handle().spawn_blocking("pow", None, worker_task); - - task_manager.spawn_essential_handle().spawn("producer", None, produce(worker)); -} diff --git a/substrate/node/Cargo.toml b/substrate/node/Cargo.toml index 58c50785..22dcbbcf 100644 --- a/substrate/node/Cargo.toml +++ b/substrate/node/Cargo.toml @@ -12,42 +12,54 @@ publish = false name = "serai-node" [dependencies] -clap = { version = "4", features = ["derive"] } +async-trait = "0.1" + +log = "0.4" + +clap = { version = "4", features = ["derive"] } +jsonrpsee = { version = "0.15", features = ["server"] } -sc-cli = { git = "https://github.com/serai-dex/substrate", features = ["wasmtime"] } sp-core = { git = "https://github.com/serai-dex/substrate" } -sc-executor = { git = "https://github.com/serai-dex/substrate", features = ["wasmtime"] } -sc-service = { git = "https://github.com/serai-dex/substrate", features = ["wasmtime"] } -sc-telemetry = { git = "https://github.com/serai-dex/substrate" } +sp-application-crypto = { git = "https://github.com/serai-dex/substrate" } +sp-keystore = { git = "https://github.com/serai-dex/substrate" } +sp-keyring = { git = "https://github.com/serai-dex/substrate" } +sp-inherents = { git = "https://github.com/serai-dex/substrate" } +sp-timestamp = { git = "https://github.com/serai-dex/substrate" } 
+sp-runtime = { git = "https://github.com/serai-dex/substrate" } +sp-blockchain = { git = "https://github.com/serai-dex/substrate" } +sp-api = { git = "https://github.com/serai-dex/substrate" } +sp-block-builder = { git = "https://github.com/serai-dex/substrate" } +sp-consensus = { git = "https://github.com/serai-dex/substrate" } + sc-keystore = { git = "https://github.com/serai-dex/substrate" } sc-transaction-pool = { git = "https://github.com/serai-dex/substrate" } sc-transaction-pool-api = { git = "https://github.com/serai-dex/substrate" } -sc-consensus = { git = "https://github.com/serai-dex/substrate" } +sc-basic-authorship = { git = "https://github.com/serai-dex/substrate" } +sc-executor = { git = "https://github.com/serai-dex/substrate" } +sc-service = { git = "https://github.com/serai-dex/substrate" } +sc-client-db = { git = "https://github.com/serai-dex/substrate" } sc-client-api = { git = "https://github.com/serai-dex/substrate" } -sp-runtime = { git = "https://github.com/serai-dex/substrate" } -sp-timestamp = { git = "https://github.com/serai-dex/substrate" } -sp-inherents = { git = "https://github.com/serai-dex/substrate" } -sp-keyring = { git = "https://github.com/serai-dex/substrate" } +sc-network = { git = "https://github.com/serai-dex/substrate" } +sc-consensus = { git = "https://github.com/serai-dex/substrate" } + +sc-telemetry = { git = "https://github.com/serai-dex/substrate" } +sc-cli = { git = "https://github.com/serai-dex/substrate" } + frame-system = { git = "https://github.com/serai-dex/substrate" } +frame-benchmarking = { git = "https://github.com/serai-dex/substrate" } +frame-benchmarking-cli = { git = "https://github.com/serai-dex/substrate" } pallet-transaction-payment = { git = "https://github.com/serai-dex/substrate", default-features = false } -# These dependencies are used for the node template's RPCs -jsonrpsee = { version = "0.15", features = ["server"] } sc-rpc = { git = "https://github.com/serai-dex/substrate" } -sp-api = { git = "https://github.com/serai-dex/substrate" } sc-rpc-api = { git = "https://github.com/serai-dex/substrate" } -sp-blockchain = { git = "https://github.com/serai-dex/substrate" } -sp-block-builder = { git = "https://github.com/serai-dex/substrate" } + substrate-frame-rpc-system = { git = "https://github.com/serai-dex/substrate" } pallet-transaction-payment-rpc = { git = "https://github.com/serai-dex/substrate" } -# These dependencies are used for runtime benchmarking -frame-benchmarking = { git = "https://github.com/serai-dex/substrate" } -frame-benchmarking-cli = { git = "https://github.com/serai-dex/substrate" } - -# Local dependencies -serai-consensus = { path = "../consensus" } +sp-tendermint = { path = "../tendermint/primitives" } +pallet-tendermint = { path = "../tendermint/pallet", default-features = false } serai-runtime = { path = "../runtime" } +sc-tendermint = { path = "../tendermint/client" } [build-dependencies] substrate-build-script-utils = { git = "https://github.com/serai-dex/substrate.git" } @@ -57,5 +69,6 @@ default = [] runtime-benchmarks = [ "frame-benchmarking/runtime-benchmarks", "frame-benchmarking-cli/runtime-benchmarks", + "serai-runtime/runtime-benchmarks" ] diff --git a/substrate/node/src/chain_spec.rs b/substrate/node/src/chain_spec.rs index 001061b9..71517c0b 100644 --- a/substrate/node/src/chain_spec.rs +++ b/substrate/node/src/chain_spec.rs @@ -1,33 +1,40 @@ use sc_service::ChainType; -use sp_runtime::traits::Verify; -use sp_core::{sr25519, Pair, Public}; +use sp_core::{Pair as PairTrait, 
sr25519::Pair}; +use pallet_tendermint::crypto::Public; -use sp_runtime::traits::IdentifyAccount; - -use serai_runtime::{WASM_BINARY, AccountId, Signature, GenesisConfig, SystemConfig, BalancesConfig}; +use serai_runtime::{ + WASM_BINARY, AccountId, opaque::SessionKeys, GenesisConfig, SystemConfig, BalancesConfig, + SessionConfig, +}; pub type ChainSpec = sc_service::GenericChainSpec; -type AccountPublic = ::Signer; -fn get_from_seed(seed: &'static str) -> ::Public { - TPublic::Pair::from_string(&format!("//{}", seed), None).unwrap().public() +fn insecure_pair_from_name(name: &'static str) -> Pair { + Pair::from_string(&format!("//{}", name), None).unwrap() } -fn get_account_id_from_seed(seed: &'static str) -> AccountId -where - AccountPublic: From<::Public>, -{ - AccountPublic::from(get_from_seed::(seed)).into_account() +fn account_id_from_name(name: &'static str) -> AccountId { + insecure_pair_from_name(name).public() } -fn testnet_genesis(wasm_binary: &[u8], endowed_accounts: Vec) -> GenesisConfig { +fn testnet_genesis( + wasm_binary: &[u8], + validators: &[&'static str], + endowed_accounts: Vec, +) -> GenesisConfig { + let session_key = |name| { + let key = account_id_from_name(name); + (key, key, SessionKeys { tendermint: Public::from(key) }) + }; + GenesisConfig { system: SystemConfig { code: wasm_binary.to_vec() }, balances: BalancesConfig { balances: endowed_accounts.iter().cloned().map(|k| (k, 1 << 60)).collect(), }, transaction_payment: Default::default(), + session: SessionConfig { keys: validators.iter().map(|name| session_key(*name)).collect() }, } } @@ -38,24 +45,69 @@ pub fn development_config() -> Result { // Name "Development Network", // ID - "dev", + "devnet", ChainType::Development, || { testnet_genesis( wasm_binary, + &["Alice"], vec![ - get_account_id_from_seed::("Alice"), - get_account_id_from_seed::("Bob"), - get_account_id_from_seed::("Charlie"), - get_account_id_from_seed::("Dave"), - get_account_id_from_seed::("Eve"), - get_account_id_from_seed::("Ferdie"), - get_account_id_from_seed::("Alice//stash"), - get_account_id_from_seed::("Bob//stash"), - get_account_id_from_seed::("Charlie//stash"), - get_account_id_from_seed::("Dave//stash"), - get_account_id_from_seed::("Eve//stash"), - get_account_id_from_seed::("Ferdie//stash"), + account_id_from_name("Alice"), + account_id_from_name("Bob"), + account_id_from_name("Charlie"), + account_id_from_name("Dave"), + account_id_from_name("Eve"), + account_id_from_name("Ferdie"), + account_id_from_name("Alice//stash"), + account_id_from_name("Bob//stash"), + account_id_from_name("Charlie//stash"), + account_id_from_name("Dave//stash"), + account_id_from_name("Eve//stash"), + account_id_from_name("Ferdie//stash"), + ], + ) + }, + // Bootnodes + vec![], + // Telemetry + None, + // Protocol ID + Some("serai"), + // Fork ID + None, + // Properties + None, + // Extensions + None, + )) +} + +pub fn testnet_config() -> Result { + let wasm_binary = WASM_BINARY.ok_or("Testnet wasm not available")?; + + Ok(ChainSpec::from_genesis( + // Name + "Local Test Network", + // ID + "local", + ChainType::Local, + || { + testnet_genesis( + wasm_binary, + &["Alice", "Bob", "Charlie"], + vec![ + account_id_from_name("Alice"), + account_id_from_name("Bob"), + account_id_from_name("Charlie"), + account_id_from_name("Dave"), + account_id_from_name("Eve"), + account_id_from_name("Ferdie"), + account_id_from_name("Alice//stash"), + account_id_from_name("Bob//stash"), + account_id_from_name("Charlie//stash"), + account_id_from_name("Dave//stash"), + 
account_id_from_name("Eve//stash"), + account_id_from_name("Ferdie//stash"), ], ) }, diff --git a/substrate/node/src/command.rs b/substrate/node/src/command.rs index c313d1a5..43c67a3f 100644 --- a/substrate/node/src/command.rs +++ b/substrate/node/src/command.rs @@ -29,7 +29,7 @@ impl SubstrateCli for Cli { } fn support_url() -> String { - "serai.exchange".to_string() + "https://github.com/serai-dex/serai/issues/new".to_string() } fn copyright_start_year() -> i32 { @@ -38,7 +38,8 @@ impl SubstrateCli for Cli { fn load_spec(&self, id: &str) -> Result, String> { match id { - "dev" => Ok(Box::new(chain_spec::development_config()?)), + "dev" | "devnet" => Ok(Box::new(chain_spec::development_config()?)), + "local" => Ok(Box::new(chain_spec::testnet_config()?)), _ => panic!("Unknown network ID"), } } @@ -60,23 +61,23 @@ pub fn run() -> sc_cli::Result<()> { Some(Subcommand::CheckBlock(cmd)) => cli.create_runner(cmd)?.async_run(|config| { let PartialComponents { client, task_manager, import_queue, .. } = - service::new_partial(&config)?; + service::new_partial(&config)?.1; Ok((cmd.run(client, import_queue), task_manager)) }), Some(Subcommand::ExportBlocks(cmd)) => cli.create_runner(cmd)?.async_run(|config| { - let PartialComponents { client, task_manager, .. } = service::new_partial(&config)?; + let PartialComponents { client, task_manager, .. } = service::new_partial(&config)?.1; Ok((cmd.run(client, config.database), task_manager)) }), Some(Subcommand::ExportState(cmd)) => cli.create_runner(cmd)?.async_run(|config| { - let PartialComponents { client, task_manager, .. } = service::new_partial(&config)?; + let PartialComponents { client, task_manager, .. } = service::new_partial(&config)?.1; Ok((cmd.run(client, config.chain_spec), task_manager)) }), Some(Subcommand::ImportBlocks(cmd)) => cli.create_runner(cmd)?.async_run(|config| { let PartialComponents { client, task_manager, import_queue, .. } = - service::new_partial(&config)?; + service::new_partial(&config)?.1; Ok((cmd.run(client, import_queue), task_manager)) }), @@ -85,14 +86,15 @@ pub fn run() -> sc_cli::Result<()> { } Some(Subcommand::Revert(cmd)) => cli.create_runner(cmd)?.async_run(|config| { - let PartialComponents { client, task_manager, backend, .. } = service::new_partial(&config)?; + let PartialComponents { client, task_manager, backend, .. } = + service::new_partial(&config)?.1; Ok((cmd.run(client, backend, None), task_manager)) }), Some(Subcommand::Benchmark(cmd)) => cli.create_runner(cmd)?.sync_run(|config| match cmd { BenchmarkCmd::Pallet(cmd) => cmd.run::(config), - BenchmarkCmd::Block(cmd) => cmd.run(service::new_partial(&config)?.client), + BenchmarkCmd::Block(cmd) => cmd.run(service::new_partial(&config)?.1.client), #[cfg(not(feature = "runtime-benchmarks"))] BenchmarkCmd::Storage(_) => { @@ -101,12 +103,12 @@ pub fn run() -> sc_cli::Result<()> { #[cfg(feature = "runtime-benchmarks")] BenchmarkCmd::Storage(cmd) => { - let PartialComponents { client, backend, .. } = service::new_partial(&config)?; + let PartialComponents { client, backend, .. } = service::new_partial(&config)?.1; cmd.run(config, client, backend.expose_db(), backend.expose_storage()) } BenchmarkCmd::Overhead(cmd) => { - let client = service::new_partial(&config)?.client; + let client = service::new_partial(&config)?.1.client; cmd.run( config, client.clone(), @@ -117,7 +119,7 @@ pub fn run() -> sc_cli::Result<()> { } BenchmarkCmd::Extrinsic(cmd) => { - let PartialComponents { client, .. 
} = service::new_partial(&config)?; + let client = service::new_partial(&config)?.1.client; cmd.run( client.clone(), inherent_benchmark_data()?, @@ -134,7 +136,7 @@ pub fn run() -> sc_cli::Result<()> { } None => cli.create_runner(&cli.run)?.run_node_until_exit(|config| async { - service::new_full(config).map_err(sc_cli::Error::Service) + service::new_full(config).await.map_err(sc_cli::Error::Service) }), } } diff --git a/substrate/node/src/lib.rs b/substrate/node/src/lib.rs deleted file mode 100644 index f117b8aa..00000000 --- a/substrate/node/src/lib.rs +++ /dev/null @@ -1,3 +0,0 @@ -pub mod chain_spec; -pub mod rpc; -pub mod service; diff --git a/substrate/node/src/main.rs b/substrate/node/src/main.rs index 57565f62..af50c833 100644 --- a/substrate/node/src/main.rs +++ b/substrate/node/src/main.rs @@ -1,5 +1,4 @@ mod chain_spec; -#[macro_use] mod service; mod command_helper; diff --git a/substrate/node/src/service.rs b/substrate/node/src/service.rs index 6832fd1d..6179c0e2 100644 --- a/substrate/node/src/service.rs +++ b/substrate/node/src/service.rs @@ -1,25 +1,106 @@ -use std::sync::Arc; +use std::{ + error::Error, + boxed::Box, + sync::Arc, + time::{UNIX_EPOCH, SystemTime, Duration}, + str::FromStr, +}; + +use sp_runtime::traits::{Block as BlockTrait}; +use sp_inherents::CreateInherentDataProviders; +use sp_consensus::DisableProofRecording; +use sp_api::ProvideRuntimeApi; + +use sc_executor::{NativeVersion, NativeExecutionDispatch, NativeElseWasmExecutor}; +use sc_transaction_pool::FullPool; +use sc_network::NetworkService; +use sc_service::{error::Error as ServiceError, Configuration, TaskManager, TFullClient}; + +use sc_client_api::BlockBackend; -use sc_service::{error::Error as ServiceError, Configuration, TaskManager}; -use sc_executor::NativeElseWasmExecutor; use sc_telemetry::{Telemetry, TelemetryWorker}; -use serai_runtime::{self, opaque::Block, RuntimeApi}; -pub(crate) use serai_consensus::{ExecutorDispatch, FullClient}; +pub(crate) use sc_tendermint::{ + TendermintClientMinimal, TendermintValidator, TendermintImport, TendermintAuthority, + TendermintSelectChain, import_queue, +}; +use serai_runtime::{self, BLOCK_SIZE, TARGET_BLOCK_TIME, opaque::Block, RuntimeApi}; type FullBackend = sc_service::TFullBackend; -type FullSelectChain = sc_consensus::LongestChain; +pub type FullClient = TFullClient>; type PartialComponents = sc_service::PartialComponents< FullClient, FullBackend, - FullSelectChain, + TendermintSelectChain, sc_consensus::DefaultImportQueue, sc_transaction_pool::FullPool, Option, >; -pub fn new_partial(config: &Configuration) -> Result { +pub struct ExecutorDispatch; +impl NativeExecutionDispatch for ExecutorDispatch { + #[cfg(feature = "runtime-benchmarks")] + type ExtendHostFunctions = frame_benchmarking::benchmarking::HostFunctions; + #[cfg(not(feature = "runtime-benchmarks"))] + type ExtendHostFunctions = (); + + fn dispatch(method: &str, data: &[u8]) -> Option> { + serai_runtime::api::dispatch(method, data) + } + + fn native_version() -> NativeVersion { + serai_runtime::native_version() + } +} + +pub struct Cidp; +#[async_trait::async_trait] +impl CreateInherentDataProviders for Cidp { + type InherentDataProviders = (sp_timestamp::InherentDataProvider,); + async fn create_inherent_data_providers( + &self, + _: ::Hash, + _: (), + ) -> Result> { + Ok((sp_timestamp::InherentDataProvider::from_system_time(),)) + } +} + +pub struct TendermintValidatorFirm; +impl TendermintClientMinimal for TendermintValidatorFirm { + // TODO: This is passed directly to propose, 
which warns not to use the hard limit as finalize + // may grow the block. We don't use storage proofs and use the Executive finalize_block. Is that + // guaranteed not to grow the block? + const PROPOSED_BLOCK_SIZE_LIMIT: usize = { BLOCK_SIZE as usize }; + // 3 seconds + const BLOCK_PROCESSING_TIME_IN_SECONDS: u32 = { (TARGET_BLOCK_TIME / 2 / 1000) as u32 }; + // 1 second + const LATENCY_TIME_IN_SECONDS: u32 = { (TARGET_BLOCK_TIME / 2 / 3 / 1000) as u32 }; + + type Block = Block; + type Backend = sc_client_db::Backend; + type Api = >::Api; + type Client = FullClient; +} + +impl TendermintValidator for TendermintValidatorFirm { + type CIDP = Cidp; + type Environment = sc_basic_authorship::ProposerFactory< + FullPool, + Self::Backend, + Self::Client, + DisableProofRecording, + >; + + type Network = Arc::Hash>>; +} + +pub fn new_partial( + config: &Configuration, +) -> Result<(TendermintImport, PartialComponents), ServiceError> { + debug_assert_eq!(TARGET_BLOCK_TIME, 6000); + if config.keystore_remote.is_some() { return Err(ServiceError::Other("Remote Keystores are not supported".to_string())); } @@ -55,8 +136,6 @@ pub fn new_partial(config: &Configuration) -> Result Result Result { - let sc_service::PartialComponents { - client, - backend, - mut task_manager, - import_queue, - keystore_container, - select_chain, - other: mut telemetry, - transaction_pool, - } = new_partial(&config)?; +pub async fn new_full(mut config: Configuration) -> Result { + let ( + authority, + sc_service::PartialComponents { + client, + backend, + mut task_manager, + import_queue, + keystore_container, + select_chain: _, + other: mut telemetry, + transaction_pool, + }, + ) = new_partial(&config)?; + + let is_authority = config.role.is_authority(); + let genesis = client.block_hash(0).unwrap().unwrap(); + let tendermint_protocol = sc_tendermint::protocol_name(genesis, config.chain_spec.fork_id()); + if is_authority { + config + .network + .extra_sets + .push(sc_tendermint::set_config(tendermint_protocol.clone(), BLOCK_SIZE.into())); + } let (network, system_rpc_tx, tx_handler_controller, network_starter) = sc_service::build_network(sc_service::BuildNetworkParams { @@ -116,9 +212,6 @@ pub fn new_full(config: Configuration) -> Result { ); } - let role = config.role.clone(); - let prometheus_registry = config.prometheus_registry().cloned(); - let rpc_extensions_builder = { let client = client.clone(); let pool = transaction_pool.clone(); @@ -133,6 +226,13 @@ pub fn new_full(config: Configuration) -> Result { }) }; + let genesis_time = if config.chain_spec.id() != "devnet" { + UNIX_EPOCH + Duration::from_secs(u64::from_str(&std::env::var("GENESIS").unwrap()).unwrap()) + } else { + SystemTime::now() + }; + + let registry = config.prometheus_registry().cloned(); sc_service::spawn_tasks(sc_service::SpawnTasksParams { network: network.clone(), client: client.clone(), @@ -147,14 +247,27 @@ pub fn new_full(config: Configuration) -> Result { telemetry: telemetry.as_mut(), })?; - if role.is_authority() { - serai_consensus::authority( - &task_manager, - client, - network, - transaction_pool, - select_chain, - prometheus_registry.as_ref(), + if is_authority { + task_manager.spawn_essential_handle().spawn( + "tendermint", + None, + TendermintAuthority::new( + genesis_time, + tendermint_protocol, + authority, + keystore_container.keystore(), + Cidp, + task_manager.spawn_essential_handle(), + sc_basic_authorship::ProposerFactory::new( + task_manager.spawn_handle(), + client, + transaction_pool, + registry.as_ref(), + 
telemetry.map(|telemtry| telemtry.handle()), + ), + network, + None, + ), ); } diff --git a/substrate/runtime/Cargo.toml b/substrate/runtime/Cargo.toml index 048039c8..bfa4b172 100644 --- a/substrate/runtime/Cargo.toml +++ b/substrate/runtime/Cargo.toml @@ -18,6 +18,7 @@ codec = { package = "parity-scale-codec", version = "3", default-features = fals scale-info = { version = "2", default-features = false, features = ["derive"] } sp-core = { git = "https://github.com/serai-dex/substrate", default-features = false } +sp-application-crypto = { git = "https://github.com/serai-dex/substrate", default-features = false } sp-std = { git = "https://github.com/serai-dex/substrate", default-features = false } sp-version = { git = "https://github.com/serai-dex/substrate", default-features = false } sp-inherents = { git = "https://github.com/serai-dex/substrate", default-features = false } @@ -28,26 +29,27 @@ sp-block-builder = { git = "https://github.com/serai-dex/substrate", default-fea sp-runtime = { git = "https://github.com/serai-dex/substrate", default-features = false } sp-api = { git = "https://github.com/serai-dex/substrate", default-features = false } -frame-support = { git = "https://github.com/serai-dex/substrate", default-features = false } -frame-system = { git = "https://github.com/serai-dex/substrate", default-features = false } -frame-executive = { git = "https://github.com/serai-dex/substrate", default-features = false } +sp-tendermint = { path = "../tendermint/primitives", default-features = false } + +frame-system = { git = "https://github.com/serai-dex/substrate", default-features = false } +frame-support = { git = "https://github.com/serai-dex/substrate", default-features = false } +frame-executive = { git = "https://github.com/serai-dex/substrate", default-features = false } +frame-benchmarking = { git = "https://github.com/serai-dex/substrate", default-features = false, optional = true } -pallet-randomness-collective-flip = { git = "https://github.com/serai-dex/substrate", default-features = false } pallet-timestamp = { git = "https://github.com/serai-dex/substrate", default-features = false } +pallet-randomness-collective-flip = { git = "https://github.com/serai-dex/substrate", default-features = false } pallet-balances = { git = "https://github.com/serai-dex/substrate", default-features = false } pallet-transaction-payment = { git = "https://github.com/serai-dex/substrate", default-features = false } pallet-contracts-primitives = { git = "https://github.com/serai-dex/substrate", default-features = false } pallet-contracts = { git = "https://github.com/serai-dex/substrate", default-features = false } -# Used for the node template's RPCs +pallet-session = { git = "https://github.com/serai-dex/substrate", default-features = false } +pallet-tendermint = { path = "../tendermint/pallet", default-features = false } + frame-system-rpc-runtime-api = { git = "https://github.com/serai-dex/substrate", default-features = false } pallet-transaction-payment-rpc-runtime-api = { git = "https://github.com/serai-dex/substrate", default-features = false } -# Used for runtime benchmarking -frame-benchmarking = { git = "https://github.com/serai-dex/substrate", default-features = false, optional = true } -frame-system-benchmarking = { git = "https://github.com/serai-dex/substrate", default-features = false, optional = true } - [build-dependencies] substrate-wasm-builder = { git = "https://github.com/serai-dex/substrate" } @@ -57,6 +59,7 @@ std = [ "scale-info/std", "sp-core/std", + 
"sp-application-crypto/std", "sp-std/std", "sp-version/std", "sp-inherents/std", @@ -67,30 +70,40 @@ std = [ "sp-runtime/std", "sp-api/std", - "frame-support/std", - "frame-system-rpc-runtime-api/std", + "sp-tendermint/std", + "frame-system/std", + "frame-support/std", "frame-executive/std", - "pallet-randomness-collective-flip/std", "pallet-timestamp/std", + "pallet-randomness-collective-flip/std", "pallet-balances/std", "pallet-transaction-payment/std", - "pallet-transaction-payment-rpc-runtime-api/std", - "pallet-contracts/std", - "pallet-contracts-primitives/std", + "pallet-contracts/std", + "pallet-contracts-primitives/std", + + "pallet-session/std", + "pallet-tendermint/std", + + "frame-system-rpc-runtime-api/std", + "pallet-transaction-payment-rpc-runtime-api/std", ] runtime-benchmarks = [ "hex-literal", + "sp-runtime/runtime-benchmarks", - "frame-benchmarking/runtime-benchmarks", - "frame-support/runtime-benchmarks", - "frame-system-benchmarking", + "frame-system/runtime-benchmarks", + "frame-support/runtime-benchmarks", + "frame-benchmarking/runtime-benchmarks", + "pallet-timestamp/runtime-benchmarks", "pallet-balances/runtime-benchmarks", + + "pallet-tendermint/runtime-benchmarks", ] default = ["std"] diff --git a/substrate/runtime/src/lib.rs b/substrate/runtime/src/lib.rs index 8b57f2d1..9747950b 100644 --- a/substrate/runtime/src/lib.rs +++ b/substrate/runtime/src/lib.rs @@ -4,10 +4,11 @@ #[cfg(feature = "std")] include!(concat!(env!("OUT_DIR"), "/wasm_binary.rs")); -use sp_core::{crypto::KeyTypeId, OpaqueMetadata}; +use sp_core::OpaqueMetadata; +pub use sp_core::sr25519::{Public, Signature}; use sp_runtime::{ - create_runtime_str, generic, impl_opaque_keys, - traits::{IdentityLookup, BlakeTwo256, Block as BlockT}, + create_runtime_str, generic, impl_opaque_keys, KeyTypeId, + traits::{Convert, OpaqueKeys, IdentityLookup, BlakeTwo256, Block as BlockT}, transaction_validity::{TransactionSource, TransactionValidity}, ApplyExtrinsicResult, Perbill, }; @@ -31,14 +32,13 @@ pub use pallet_timestamp::Call as TimestampCall; pub use pallet_balances::Call as BalancesCall; use pallet_transaction_payment::CurrencyAdapter; +use pallet_session::PeriodicSessions; + /// An index to a block. pub type BlockNumber = u32; -/// Signature type -pub type Signature = sp_core::sr25519::Signature; - /// Account ID type, equivalent to a public key -pub type AccountId = sp_core::sr25519::Public; +pub type AccountId = Public; /// Balance of an account. pub type Balance = u64; @@ -59,15 +59,21 @@ pub mod opaque { pub type BlockId = generic::BlockId; impl_opaque_keys! { - pub struct SessionKeys {} + pub struct SessionKeys { + pub tendermint: Tendermint, + } } } +use opaque::SessionKeys; + #[sp_version::runtime_version] pub const VERSION: RuntimeVersion = RuntimeVersion { spec_name: create_runtime_str!("serai"), + // TODO: "core"? impl_name: create_runtime_str!("turoctocrab"), authoring_version: 1, + // TODO: 1? Do we prefer some level of compatibility or our own path? spec_version: 100, impl_version: 1, apis: RUNTIME_API_VERSIONS, @@ -75,11 +81,13 @@ pub const VERSION: RuntimeVersion = RuntimeVersion { state_version: 1, }; -pub const MILLISECS_PER_BLOCK: u64 = 6000; -pub const SLOT_DURATION: u64 = MILLISECS_PER_BLOCK; +// 1 MB +pub const BLOCK_SIZE: u32 = 1024 * 1024; +// 6 seconds +pub const TARGET_BLOCK_TIME: u64 = 6000; /// Measured in blocks. 
-pub const MINUTES: BlockNumber = 60_000 / (MILLISECS_PER_BLOCK as BlockNumber); +pub const MINUTES: BlockNumber = 60_000 / (TARGET_BLOCK_TIME as BlockNumber); pub const HOURS: BlockNumber = MINUTES * 60; pub const DAYS: BlockNumber = HOURS * 24; @@ -106,7 +114,7 @@ parameter_types! { // 1 MB block size limit pub BlockLength: frame_system::limits::BlockLength = - frame_system::limits::BlockLength::max_with_normal_ratio(1024 * 1024, NORMAL_DISPATCH_RATIO); + frame_system::limits::BlockLength::max_with_normal_ratio(BLOCK_SIZE, NORMAL_DISPATCH_RATIO); pub BlockWeights: frame_system::limits::BlockWeights = frame_system::limits::BlockWeights::with_sensible_defaults( (2u64 * WEIGHT_PER_SECOND).set_proof_size(u64::MAX), @@ -160,7 +168,7 @@ impl pallet_randomness_collective_flip::Config for Runtime {} impl pallet_timestamp::Config for Runtime { type Moment = u64; type OnTimestampSet = (); - type MinimumPeriod = ConstU64<{ SLOT_DURATION / 2 }>; + type MinimumPeriod = ConstU64<{ TARGET_BLOCK_TIME / 2 }>; type WeightInfo = (); } @@ -208,6 +216,30 @@ impl pallet_contracts::Config for Runtime { type MaxStorageKeyLen = ConstU32<128>; } +impl pallet_tendermint::Config for Runtime {} + +const SESSION_LENGTH: BlockNumber = 5 * DAYS; +type Sessions = PeriodicSessions, ConstU32<{ SESSION_LENGTH }>>; + +pub struct IdentityValidatorIdOf; +impl Convert> for IdentityValidatorIdOf { + fn convert(key: Public) -> Option { + Some(key) + } +} + +impl pallet_session::Config for Runtime { + type RuntimeEvent = RuntimeEvent; + type ValidatorId = AccountId; + type ValidatorIdOf = IdentityValidatorIdOf; + type ShouldEndSession = Sessions; + type NextSessionRotation = Sessions; + type SessionManager = (); + type SessionHandler = ::KeyTypeIdProviders; + type Keys = SessionKeys; + type WeightInfo = pallet_session::weights::SubstrateWeight; +} + pub type Address = AccountId; pub type Header = generic::Header; pub type Block = generic::Block; @@ -244,6 +276,8 @@ construct_runtime!( Balances: pallet_balances, TransactionPayment: pallet_transaction_payment, Contracts: pallet_contracts, + Session: pallet_session, + Tendermint: pallet_tendermint, } ); @@ -331,6 +365,16 @@ sp_api::impl_runtime_apis! 
{ } } + impl sp_tendermint::TendermintApi for Runtime { + fn current_session() -> u32 { + Tendermint::session() + } + + fn validators() -> Vec { + Session::validators() + } + } + impl frame_system_rpc_runtime_api::AccountNonceApi for Runtime { fn account_nonce(account: AccountId) -> Index { System::account_nonce(account) diff --git a/substrate/tendermint/client/Cargo.toml b/substrate/tendermint/client/Cargo.toml new file mode 100644 index 00000000..0698f633 --- /dev/null +++ b/substrate/tendermint/client/Cargo.toml @@ -0,0 +1,48 @@ +[package] +name = "sc-tendermint" +version = "0.1.0" +description = "Tendermint client for Substrate" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/substrate/tendermint/client" +authors = ["Luke Parker "] +edition = "2021" +publish = false + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[dependencies] +async-trait = "0.1" + +hex = "0.4" +log = "0.4" + +futures = "0.3" +tokio = { version = "1", features = ["sync", "rt"] } + +sp-core = { git = "https://github.com/serai-dex/substrate" } +sp-application-crypto = { git = "https://github.com/serai-dex/substrate" } +sp-keystore = { git = "https://github.com/serai-dex/substrate" } +sp-inherents = { git = "https://github.com/serai-dex/substrate" } +sp-staking = { git = "https://github.com/serai-dex/substrate" } +sp-blockchain = { git = "https://github.com/serai-dex/substrate" } +sp-runtime = { git = "https://github.com/serai-dex/substrate" } +sp-api = { git = "https://github.com/serai-dex/substrate" } +sp-consensus = { git = "https://github.com/serai-dex/substrate" } + +sp-tendermint = { path = "../primitives" } + +sc-transaction-pool = { git = "https://github.com/serai-dex/substrate" } +sc-executor = { git = "https://github.com/serai-dex/substrate" } +sc-network-common = { git = "https://github.com/serai-dex/substrate" } +sc-network = { git = "https://github.com/serai-dex/substrate" } +sc-network-gossip = { git = "https://github.com/serai-dex/substrate" } +sc-service = { git = "https://github.com/serai-dex/substrate" } +sc-client-api = { git = "https://github.com/serai-dex/substrate" } +sc-block-builder = { git = "https://github.com/serai-dex/substrate" } +sc-consensus = { git = "https://github.com/serai-dex/substrate" } + +substrate-prometheus-endpoint = { git = "https://github.com/serai-dex/substrate" } + +tendermint-machine = { path = "../machine", features = ["substrate"] } diff --git a/substrate/consensus/LICENSE b/substrate/tendermint/client/LICENSE similarity index 100% rename from substrate/consensus/LICENSE rename to substrate/tendermint/client/LICENSE diff --git a/substrate/tendermint/client/src/authority/gossip.rs b/substrate/tendermint/client/src/authority/gossip.rs new file mode 100644 index 00000000..42e46566 --- /dev/null +++ b/substrate/tendermint/client/src/authority/gossip.rs @@ -0,0 +1,67 @@ +use std::sync::{Arc, RwLock}; + +use sp_core::Decode; +use sp_runtime::traits::{Hash, Header, Block}; + +use sc_network::PeerId; +use sc_network_gossip::{Validator, ValidatorContext, ValidationResult}; + +use tendermint_machine::{ext::SignatureScheme, SignedMessage}; + +use crate::{TendermintValidator, validators::TendermintValidators}; + +#[derive(Clone)] +pub(crate) struct TendermintGossip { + number: Arc>, + signature_scheme: TendermintValidators, +} + +impl TendermintGossip { + pub(crate) fn new(number: Arc>, signature_scheme: TendermintValidators) -> Self { + TendermintGossip { number, signature_scheme } + } + + pub(crate) 
fn topic(number: u64) -> ::Hash { + <<::Header as Header>::Hashing as Hash>::hash( + &[b"Tendermint Block Topic".as_ref(), &number.to_le_bytes()].concat(), + ) + } +} + +impl Validator for TendermintGossip { + fn validate( + &self, + _: &mut dyn ValidatorContext, + _: &PeerId, + data: &[u8], + ) -> ValidationResult<::Hash> { + let msg = match SignedMessage::< + u16, + T::Block, + as SignatureScheme>::Signature, + >::decode(&mut &*data) + { + Ok(msg) => msg, + Err(_) => return ValidationResult::Discard, + }; + + if msg.block().0 < *self.number.read().unwrap() { + return ValidationResult::Discard; + } + + // Verify the signature here so we don't carry invalid messages in our gossip layer + // This will cause double verification of the signature, yet that's a minimal cost + if !msg.verify_signature(&self.signature_scheme) { + return ValidationResult::Discard; + } + + ValidationResult::ProcessAndKeep(Self::topic(msg.block().0)) + } + + fn message_expired<'a>( + &'a self, + ) -> Box::Hash, &[u8]) -> bool + 'a> { + let number = self.number.clone(); + Box::new(move |topic, _| topic != Self::topic(*number.read().unwrap())) + } +} diff --git a/substrate/tendermint/client/src/authority/import_future.rs b/substrate/tendermint/client/src/authority/import_future.rs new file mode 100644 index 00000000..c8eda9ee --- /dev/null +++ b/substrate/tendermint/client/src/authority/import_future.rs @@ -0,0 +1,72 @@ +use std::{ + pin::Pin, + sync::RwLock, + task::{Poll, Context}, + future::Future, +}; + +use sp_runtime::traits::{Header, Block}; + +use sp_consensus::Error; +use sc_consensus::{BlockImportStatus, BlockImportError, Link}; + +use sc_service::ImportQueue; + +use tendermint_machine::ext::BlockError; + +use crate::TendermintImportQueue; + +// Custom helpers for ImportQueue in order to obtain the result of a block's importing +struct ValidateLink(Option<(B::Hash, Result<(), BlockError>)>); +impl Link for ValidateLink { + fn blocks_processed( + &mut self, + imported: usize, + count: usize, + mut results: Vec<( + Result::Number>, BlockImportError>, + B::Hash, + )>, + ) { + assert!(imported <= 1); + assert_eq!(count, 1); + self.0 = Some(( + results[0].1, + match results.swap_remove(0).0 { + Ok(_) => Ok(()), + Err(BlockImportError::Other(Error::Other(err))) => Err( + err.downcast::().map(|boxed| *boxed.as_ref()).unwrap_or(BlockError::Fatal), + ), + _ => Err(BlockError::Fatal), + }, + )); + } +} + +pub(crate) struct ImportFuture<'a, B: Block, T: Send>( + B::Hash, + RwLock<&'a mut TendermintImportQueue>, +); +impl<'a, B: Block, T: Send> ImportFuture<'a, B, T> { + pub(crate) fn new( + hash: B::Hash, + queue: &'a mut TendermintImportQueue, + ) -> ImportFuture { + ImportFuture(hash, RwLock::new(queue)) + } +} + +impl<'a, B: Block, T: Send> Future for ImportFuture<'a, B, T> { + type Output = Result<(), BlockError>; + + fn poll(self: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll { + let mut link = ValidateLink(None); + self.1.write().unwrap().poll_actions(ctx, &mut link); + if let Some(res) = link.0 { + assert_eq!(res.0, self.0); + Poll::Ready(res.1) + } else { + Poll::Pending + } + } +} diff --git a/substrate/tendermint/client/src/authority/mod.rs b/substrate/tendermint/client/src/authority/mod.rs new file mode 100644 index 00000000..778f50d4 --- /dev/null +++ b/substrate/tendermint/client/src/authority/mod.rs @@ -0,0 +1,494 @@ +use std::{ + sync::{Arc, RwLock}, + time::{UNIX_EPOCH, SystemTime, Duration}, + collections::HashSet, +}; + +use async_trait::async_trait; + +use log::{debug, warn, error}; + +use 
futures::{ + SinkExt, StreamExt, + lock::Mutex, + channel::mpsc::{self, UnboundedSender}, +}; + +use sp_core::{Encode, Decode, traits::SpawnEssentialNamed}; +use sp_keystore::CryptoStore; +use sp_runtime::{ + traits::{Header, Block}, + Digest, +}; +use sp_blockchain::HeaderBackend; +use sp_api::BlockId; + +use sp_consensus::{Error, BlockOrigin, Proposer, Environment}; +use sc_consensus::import_queue::IncomingBlock; + +use sc_service::ImportQueue; +use sc_client_api::{BlockBackend, Finalizer, BlockchainEvents}; +use sc_network::{ProtocolName, NetworkBlock}; +use sc_network_gossip::GossipEngine; + +use substrate_prometheus_endpoint::Registry; + +use tendermint_machine::{ + ext::{BlockError, BlockNumber, Commit, SignatureScheme, Network}, + SignedMessage, TendermintMachine, TendermintHandle, +}; + +use crate::{ + CONSENSUS_ID, TendermintValidator, + validators::{TendermintSigner, TendermintValidators}, + tendermint::TendermintImport, +}; + +mod gossip; +use gossip::TendermintGossip; + +mod import_future; +use import_future::ImportFuture; + +// Data for an active validator +// This is distinct as even when we aren't an authority, we still create stubbed Authority objects +// as it's only Authority which implements tendermint_machine::ext::Network. Network has +// verify_commit provided, and even non-authorities have to verify commits +struct ActiveAuthority { + signer: TendermintSigner, + + // The number of the Block we're working on producing + block_in_progress: Arc>, + // Notification channel for when we start on a new block + new_block_event: UnboundedSender<()>, + // Outgoing message queue, placed here as the GossipEngine itself can't be + gossip: UnboundedSender< + SignedMessage as SignatureScheme>::Signature>, + >, + + // Block producer + env: Arc>, + announce: T::Network, +} + +/// Tendermint Authority. Participates in the block proposal and voting process. 
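The two UnboundedSender fields above are how an active authority hands work to the event loop: the machine queues gossip messages and new-block notifications without blocking, and the loop in new() drains them. A minimal, self-contained sketch of that wiring, with the payload simplified to raw bytes rather than the actual SignedMessage type:

```rust
// Minimal sketch of the unbounded-channel wiring between the Tendermint machine
// and the gossip event loop: the producer never blocks, the loop drains and forwards.
use futures::{channel::mpsc, StreamExt};

#[tokio::main]
async fn main() {
  let (msg_send, mut msg_recv) = mpsc::unbounded::<Vec<u8>>();

  // Machine side: broadcast() just pushes the encoded message onto the queue
  msg_send.unbounded_send(b"encoded signed message".to_vec()).unwrap();
  drop(msg_send); // dropping the sender ends the stream, standing in for shutdown

  // Event loop side: forward every queued message to the gossip engine (stubbed here)
  while let Some(msg) = msg_recv.next().await {
    println!("gossip_message({} bytes)", msg.len());
  }
}
```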
+pub struct TendermintAuthority { + import: TendermintImport, + active: Option>, +} + +// Get a block to propose after the specified header +// If stub is true, no time will be spent adding transactions to it (beyond what's required), +// making it as minimal as possible (a stub) +// This is so we can create proposals when syncing, respecting tendermint-machine's API boundaries, +// without spending the entire block processing time trying to include transactions (since we know +// our proposal is meaningless and we'll just be syncing a new block anyways) +async fn get_proposal( + env: &Arc>, + import: &TendermintImport, + header: &::Header, + stub: bool, +) -> T::Block { + let proposer = + env.lock().await.init(header).await.expect("Failed to create a proposer for the new block"); + + proposer + .propose( + import.inherent_data(*header.parent_hash()).await, + Digest::default(), + if stub { + Duration::ZERO + } else { + // The first processing time is to build the block + // The second is for it to be downloaded (assumes a block won't take longer to download + // than it'll take to process) + // The third is for it to actually be processed + Duration::from_secs((T::BLOCK_PROCESSING_TIME_IN_SECONDS / 3).into()) + }, + Some(T::PROPOSED_BLOCK_SIZE_LIMIT), + ) + .await + .expect("Failed to crate a new block proposal") + .block +} + +impl TendermintAuthority { + // Authority which is capable of verifying commits + pub(crate) fn stub(import: TendermintImport) -> Self { + Self { import, active: None } + } + + async fn get_proposal(&self, header: &::Header) -> T::Block { + get_proposal(&self.active.as_ref().unwrap().env, &self.import, header, false).await + } + + /// Create and run a new Tendermint Authority, proposing and voting on blocks. + /// This should be spawned on a task as it will not return until the P2P stack shuts down. + #[allow(clippy::too_many_arguments, clippy::new_ret_no_self)] + pub async fn new( + genesis: SystemTime, + protocol: ProtocolName, + import: TendermintImport, + keys: Arc, + providers: T::CIDP, + spawner: impl SpawnEssentialNamed, + env: T::Environment, + network: T::Network, + registry: Option<&Registry>, + ) { + // This should only have a single value, yet a bounded channel with a capacity of 1 would cause + // a firm bound. 
It's not worth having a backlog crash the node since we aren't constrained + let (new_block_event_send, mut new_block_event_recv) = mpsc::unbounded(); + let (msg_send, mut msg_recv) = mpsc::unbounded(); + + // Move the env into an Arc + let env = Arc::new(Mutex::new(env)); + + // Scoped so the temporary variables used here don't leak + let (block_in_progress, mut gossip, TendermintHandle { mut step, mut messages, machine }) = { + // Get the info necessary to spawn the machine + let info = import.client.info(); + + // Header::Number: TryInto doesn't implement Debug and can't be unwrapped + let last_block: u64 = match info.finalized_number.try_into() { + Ok(best) => best, + Err(_) => panic!("BlockNumber exceeded u64"), + }; + let last_hash = info.finalized_hash; + + let last_time = { + // Convert into a Unix timestamp + let genesis = genesis.duration_since(UNIX_EPOCH).unwrap().as_secs(); + + // Get the last block's time by grabbing its commit and reading the time from that + Commit::>::decode( + &mut import + .client + .justifications(last_hash) + .unwrap() + .map(|justifications| justifications.get(CONSENSUS_ID).cloned().unwrap()) + .unwrap_or_default() + .as_ref(), + ) + .map(|commit| commit.end_time) + // The commit provides the time its block ended at + // The genesis time is when the network starts + // Accordingly, the end of the genesis block is a block time after the genesis time + .unwrap_or_else(|_| genesis + u64::from(Self::block_time())) + }; + + let next_block = last_block + 1; + // Shared references between us and the Tendermint machine (and its actions via its Network + // trait) + let block_in_progress = Arc::new(RwLock::new(next_block)); + + // Write the providers into the import so it can verify inherents + *import.providers.write().await = Some(providers); + + let authority = Self { + import: import.clone(), + active: Some(ActiveAuthority { + signer: TendermintSigner(keys, import.validators.clone()), + + block_in_progress: block_in_progress.clone(), + new_block_event: new_block_event_send, + gossip: msg_send, + + env: env.clone(), + announce: network.clone(), + }), + }; + + // Get our first proposal + let proposal = authority + .get_proposal(&import.client.header(BlockId::Hash(last_hash)).unwrap().unwrap()) + .await; + + // Create the gossip network + // This has to be spawning the machine, else gossip fails for some reason + let gossip = GossipEngine::new( + network, + protocol, + Arc::new(TendermintGossip::new(block_in_progress.clone(), import.validators.clone())), + registry, + ); + + ( + block_in_progress, + gossip, + TendermintMachine::new(authority, BlockNumber(last_block), last_time, proposal).await, + ) + }; + spawner.spawn_essential("machine", Some("tendermint"), Box::pin(machine.run())); + + // Start receiving messages about the Tendermint process for this block + let mut gossip_recv = + gossip.messages_for(TendermintGossip::::topic(*block_in_progress.read().unwrap())); + + // Get finality events from Substrate + let mut finality = import.client.finality_notification_stream(); + + loop { + futures::select_biased! { + // GossipEngine closed down + _ = gossip => { + debug!( + target: "tendermint", + "GossipEngine shut down. {}", + "Is the node shutting down?" 
+ ); + break; + }, + + // Synced a block from the network + notif = finality.next() => { + if let Some(notif) = notif { + let number = match (*notif.header.number()).try_into() { + Ok(number) => number, + Err(_) => panic!("BlockNumber exceeded u64"), + }; + + // There's a race condition between the machine add_block and this + // Both wait for a write lock on this ref and don't release it until after updating it + // accordingly + { + let mut block_in_progress = block_in_progress.write().unwrap(); + if number < *block_in_progress { + continue; + } + let next_block = number + 1; + *block_in_progress = next_block; + gossip_recv = gossip.messages_for(TendermintGossip::::topic(next_block)); + } + + let justifications = import.client.justifications(notif.hash).unwrap().unwrap(); + step.send(( + BlockNumber(number), + Commit::decode(&mut justifications.get(CONSENSUS_ID).unwrap().as_ref()).unwrap(), + // This will fail if syncing occurs radically faster than machine stepping takes + // TODO: Set true when initial syncing + get_proposal(&env, &import, ¬if.header, false).await + )).await.unwrap(); + } else { + debug!( + target: "tendermint", + "Finality notification stream closed down. {}", + "Is the node shutting down?" + ); + break; + } + }, + + // Machine accomplished a new block + new_block = new_block_event_recv.next() => { + if new_block.is_some() { + gossip_recv = gossip.messages_for( + TendermintGossip::::topic(*block_in_progress.read().unwrap()) + ); + } else { + debug!( + target: "tendermint", + "Block notification stream shut down. {}", + "Is the node shutting down?" + ); + break; + } + }, + + // Message to broadcast + msg = msg_recv.next() => { + if let Some(msg) = msg { + let topic = TendermintGossip::::topic(msg.block().0); + gossip.gossip_message(topic, msg.encode(), false); + } else { + debug!( + target: "tendermint", + "Machine's message channel shut down. {}", + "Is the node shutting down?" + ); + break; + } + }, + + // Received a message + msg = gossip_recv.next() => { + if let Some(msg) = msg { + messages.send( + match SignedMessage::decode(&mut msg.message.as_ref()) { + Ok(msg) => msg, + Err(e) => { + // This is guaranteed to be valid thanks to to the gossip validator, assuming + // that pipeline is correct. This doesn't panic as a hedge + error!(target: "tendermint", "Couldn't decode valid message: {}", e); + continue; + } + } + ).await.unwrap(); + } else { + debug!( + target: "tendermint", + "Gossip channel shut down. {}", + "Is the node shutting down?" + ); + break; + } + } + } + } + } +} + +#[async_trait] +impl Network for TendermintAuthority { + type ValidatorId = u16; + type SignatureScheme = TendermintValidators; + type Weights = TendermintValidators; + type Block = T::Block; + + const BLOCK_PROCESSING_TIME: u32 = T::BLOCK_PROCESSING_TIME_IN_SECONDS; + const LATENCY_TIME: u32 = T::LATENCY_TIME_IN_SECONDS; + + fn signer(&self) -> TendermintSigner { + self.active.as_ref().unwrap().signer.clone() + } + + fn signature_scheme(&self) -> TendermintValidators { + self.import.validators.clone() + } + + fn weights(&self) -> TendermintValidators { + self.import.validators.clone() + } + + async fn broadcast( + &mut self, + msg: SignedMessage as SignatureScheme>::Signature>, + ) { + if self.active.as_mut().unwrap().gossip.unbounded_send(msg).is_err() { + warn!( + target: "tendermint", + "Attempted to broadcast a message except the gossip channel is closed. {}", + "Is the node shutting down?" 
+ ); + } + } + + async fn slash(&mut self, validator: u16) { + // TODO + error!("slashing {}, if this is a local network, this shouldn't happen", validator); + } + + // The Tendermint machine will call add_block for any block which is committed to, regardless of + // validity. To determine validity, it expects a validate function, which Substrate doesn't + // directly offer, and an add function. In order to comply with Serai's modified view of inherent + // transactions, validate MUST check inherents, yet add_block must not. + // + // In order to acquire a validate function, any block proposed by a legitimate proposer is + // imported. This performs full validation and makes the block available as a tip. While this + // would be incredibly unsafe thanks to the unchecked inherents, it's defined as a tip with less + // work, despite being a child of some parent. This means it won't be moved to nor operated on by + // the node. + // + // When Tendermint completes, the block is finalized, setting it as the tip regardless of work. + async fn validate(&mut self, block: &T::Block) -> Result<(), BlockError> { + let hash = block.hash(); + let (header, body) = block.clone().deconstruct(); + let parent = *header.parent_hash(); + let number = *header.number(); + + // Can happen when we sync a block while also acting as a validator + if number <= self.import.client.info().best_number { + debug!(target: "tendermint", "Machine proposed a block for a slot we've already synced"); + Err(BlockError::Temporal)?; + } + + let mut queue_write = self.import.queue.write().await; + *self.import.importing_block.write().unwrap() = Some(hash); + + queue_write.as_mut().unwrap().import_blocks( + BlockOrigin::ConsensusBroadcast, // TODO: Use BlockOrigin::Own when it's our block + vec![IncomingBlock { + hash, + header: Some(header), + body: Some(body), + indexed_body: None, + justifications: None, + origin: None, // TODO + allow_missing_state: false, + skip_execution: false, + import_existing: self.import.recheck.read().unwrap().contains(&hash), + state: None, + }], + ); + + ImportFuture::new(hash, queue_write.as_mut().unwrap()).await?; + + // Sanity checks that a child block can have less work than its parent + { + let info = self.import.client.info(); + assert_eq!(info.best_hash, parent); + assert_eq!(info.finalized_hash, parent); + assert_eq!(info.best_number, number - 1u8.into()); + assert_eq!(info.finalized_number, number - 1u8.into()); + } + + Ok(()) + } + + async fn add_block( + &mut self, + block: T::Block, + commit: Commit>, + ) -> T::Block { + // Prevent import_block from being called while we run + let _lock = self.import.sync_lock.lock().await; + + // Check if we already imported this externally + if self.import.client.justifications(block.hash()).unwrap().is_some() { + debug!(target: "tendermint", "Machine produced a commit after we already synced it"); + } else { + let hash = block.hash(); + let justification = (CONSENSUS_ID, commit.encode()); + debug_assert!(self.import.verify_justification(hash, &justification).is_ok()); + + let raw_number = *block.header().number(); + let number: u64 = match raw_number.try_into() { + Ok(number) => number, + Err(_) => panic!("BlockNumber exceeded u64"), + }; + + let active = self.active.as_mut().unwrap(); + let mut block_in_progress = active.block_in_progress.write().unwrap(); + // This will hold true unless we received, and handled, a notification for the block before + // its justification was made available + debug_assert_eq!(number, *block_in_progress); + + // Finalize 
the block + self + .import + .client + .finalize_block(hash, Some(justification), true) + .map_err(|_| Error::InvalidJustification) + .unwrap(); + + // Tell the loop we received a block and to move to the next + *block_in_progress = number + 1; + if active.new_block_event.unbounded_send(()).is_err() { + warn!( + target: "tendermint", + "Attempted to send a new number to the gossip handler except it's closed. {}", + "Is the node shutting down?" + ); + } + + // Announce the block to the network so new clients can sync properly + active.announce.announce_block(hash, None); + active.announce.new_best_block_imported(hash, raw_number); + } + + // Clear any blocks for the previous slot which we were willing to recheck + *self.import.recheck.write().unwrap() = HashSet::new(); + + self.get_proposal(block.header()).await + } +} diff --git a/substrate/tendermint/client/src/block_import.rs b/substrate/tendermint/client/src/block_import.rs new file mode 100644 index 00000000..4282f669 --- /dev/null +++ b/substrate/tendermint/client/src/block_import.rs @@ -0,0 +1,182 @@ +use std::{marker::PhantomData, sync::Arc, collections::HashMap}; + +use async_trait::async_trait; + +use sp_api::BlockId; +use sp_runtime::traits::{Header, Block}; +use sp_blockchain::{BlockStatus, HeaderBackend, Backend as BlockchainBackend}; +use sp_consensus::{Error, CacheKeyId, BlockOrigin, SelectChain}; + +use sc_consensus::{BlockCheckParams, BlockImportParams, ImportResult, BlockImport, Verifier}; + +use sc_client_api::{Backend, BlockBackend}; + +use crate::{TendermintValidator, tendermint::TendermintImport}; + +impl TendermintImport { + fn check_already_in_chain(&self, hash: ::Hash) -> bool { + let id = BlockId::Hash(hash); + // If it's in chain, with justifications, return it's already on chain + // If it's in chain, without justifications, continue the block import process to import its + // justifications + // This can be triggered if the validators add a block, without justifications, yet the p2p + // process then broadcasts it with its justifications + (self.client.status(id).unwrap() == BlockStatus::InChain) && + self.client.justifications(hash).unwrap().is_some() + } +} + +#[async_trait] +impl BlockImport for TendermintImport +where + Arc: BlockImport, + as BlockImport>::Error: Into, +{ + type Error = Error; + type Transaction = T::BackendTransaction; + + // TODO: Is there a DoS where you send a block without justifications, causing it to error, + // yet adding it to the blacklist in the process preventing further syncing? 
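A justification here is simply the pair (CONSENSUS_ID, SCALE-encoded Commit): add_block above attaches one when finalizing, and verify_justification in tendermint.rs decodes it back before checking the commit. A standalone round-trip sketch, using a simplified stand-in for the Commit type (the real one is generic over the signature scheme; only end_time is taken from this patch, the other fields are assumptions):

```rust
// Sketch of the justification round-trip. `Commit` below is a simplified stand-in for
// tendermint_machine::ext::Commit; only end_time is taken directly from this patch.
use parity_scale_codec::{Decode, Encode};

pub const CONSENSUS_ID: [u8; 4] = *b"tend";

#[derive(Debug, PartialEq, Encode, Decode)]
struct Commit {
  end_time: u64,        // end time of the round which produced this commit
  validators: Vec<u16>, // assumed field: who participated
  signature: Vec<u8>,   // assumed field: aggregate signature bytes
}

fn main() {
  let commit = Commit { end_time: 1_670_000_000, validators: vec![0, 1, 2], signature: vec![] };

  // What add_block stores alongside the finalized block
  let justification: ([u8; 4], Vec<u8>) = (CONSENSUS_ID, commit.encode());

  // What verify_justification does before checking the commit itself
  assert_eq!(justification.0, CONSENSUS_ID);
  let decoded = Commit::decode(&mut justification.1.as_ref()).unwrap();
  assert_eq!(decoded, commit);
}
```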
+ async fn check_block( + &mut self, + mut block: BlockCheckParams, + ) -> Result { + if self.check_already_in_chain(block.hash) { + return Ok(ImportResult::AlreadyInChain); + } + self.verify_order(block.parent_hash, block.number)?; + + // Does not verify origin here as origin only applies to unfinalized blocks + // We don't have context on if this block has justifications or not + + block.allow_missing_state = false; + block.allow_missing_parent = false; + + self.client.check_block(block).await.map_err(Into::into) + } + + async fn import_block( + &mut self, + mut block: BlockImportParams, + new_cache: HashMap>, + ) -> Result { + // Don't allow multiple blocks to be imported at once + let _lock = self.sync_lock.lock().await; + + if self.check_already_in_chain(block.header.hash()) { + return Ok(ImportResult::AlreadyInChain); + } + + self.check(&mut block).await?; + self.client.import_block(block, new_cache).await.map_err(Into::into) + } +} + +#[async_trait] +impl Verifier for TendermintImport +where + Arc: BlockImport, + as BlockImport>::Error: Into, +{ + async fn verify( + &mut self, + mut block: BlockImportParams, + ) -> Result<(BlockImportParams, Option)>>), String> { + block.origin = match block.origin { + BlockOrigin::Genesis => BlockOrigin::Genesis, + BlockOrigin::NetworkBroadcast => BlockOrigin::NetworkBroadcast, + + // Re-map NetworkInitialSync to NetworkBroadcast so it still triggers notifications + // Tendermint will listen to the finality stream. If we sync a block we're running a machine + // for, it'll force the machine to move ahead. We can only do that if there actually are + // notifications + // + // Then Serai also runs data indexing code based on block addition, so ensuring it always + // emits events ensures we always perform our necessary indexing (albeit with a race + // condition since Substrate will eventually prune the block's state, potentially before + // indexing finishes when syncing) + // + // The alternative to this would be editing Substrate directly, which would be a lot less + // fragile, manually triggering the notifications (which may be possible with code intended + // for testing), writing our own notification system, or implementing lock_import_and_run + // on our end, letting us directly set the notifications, so we're not beholden to when + // Substrate decides to call notify_finalized + // + // lock_import_and_run unfortunately doesn't allow async code and generally isn't feasible to + // work with though. We also couldn't use it to prevent Substrate from creating + // notifications, so it only solves half the problem. We'd *still* have to keep this patch, + // with all its fragility, unless we edit Substrate or move the entire block import flow here + BlockOrigin::NetworkInitialSync => BlockOrigin::NetworkBroadcast, + // Also re-map File so bootstraps also trigger notifications, enabling using bootstraps + BlockOrigin::File => BlockOrigin::NetworkBroadcast, + + // We do not want this block, which hasn't been confirmed, to be broadcast over the net + // Substrate will generate notifications unless it's Genesis, which this isn't, InitialSync, + // which changes telemetry behavior, or File, which is... 
close enough + BlockOrigin::ConsensusBroadcast => BlockOrigin::File, + BlockOrigin::Own => BlockOrigin::File, + }; + + if self.check_already_in_chain(block.header.hash()) { + return Ok((block, None)); + } + + self.check(&mut block).await.map_err(|e| format!("{}", e))?; + Ok((block, None)) + } +} + +/// Tendermint's Select Chain, where the best chain is defined as the most recently finalized +/// block. +/// +/// leaves panics on call due to not being applicable under Tendermint. Any provided answer would +/// have conflicts best left unraised. +// +// SelectChain, while provided by Substrate and part of PartialComponents, isn't used by Substrate +// It's common between various block-production/finality crates, yet Substrate as a system doesn't +// rely on it, which is good, because its definition is explicitly incompatible with Tendermint +// +// leaves is supposed to return all leaves of the blockchain. While Tendermint maintains that view, +// an honest node will only build on the most recently finalized block, so it is a 'leaf' despite +// having descendants +// +// best_chain will always be this finalized block, yet Substrate explicitly defines it as one of +// the above leaves, which this finalized block is explicitly not included in. Accordingly, we +// can never provide a compatible decision +// +// Since PartialComponents expects it, an implementation which does its best is provided. It panics +// if leaves is called, yet returns the finalized chain tip for best_chain, as that's intended to +// be the header to build upon +pub struct TendermintSelectChain>(Arc, PhantomData); + +impl> Clone for TendermintSelectChain { + fn clone(&self) -> Self { + TendermintSelectChain(self.0.clone(), PhantomData) + } +} + +impl> TendermintSelectChain { + pub fn new(backend: Arc) -> TendermintSelectChain { + TendermintSelectChain(backend, PhantomData) + } +} + +#[async_trait] +impl> SelectChain for TendermintSelectChain { + async fn leaves(&self) -> Result, Error> { + panic!("Substrate definition of leaves is incompatible with Tendermint") + } + + async fn best_chain(&self) -> Result { + Ok( + self + .0 + .blockchain() + // There should always be a finalized block + .header(BlockId::Hash(self.0.blockchain().last_finalized().unwrap())) + // There should not be an error in retrieving it and since it's finalized, it should exist + .unwrap() + .unwrap(), + ) + } +} diff --git a/substrate/tendermint/client/src/lib.rs b/substrate/tendermint/client/src/lib.rs new file mode 100644 index 00000000..90c50244 --- /dev/null +++ b/substrate/tendermint/client/src/lib.rs @@ -0,0 +1,163 @@ +use std::sync::Arc; + +use sp_core::crypto::KeyTypeId; +use sp_inherents::CreateInherentDataProviders; +use sp_runtime::traits::{Header, Block}; +use sp_blockchain::HeaderBackend; +use sp_api::{StateBackend, StateBackendFor, TransactionFor, ApiExt, ProvideRuntimeApi}; +use sp_consensus::{Error, Environment}; + +use sc_client_api::{BlockBackend, Backend, Finalizer, BlockchainEvents}; +use sc_block_builder::BlockBuilderApi; +use sc_consensus::{BlockImport, BasicQueue}; + +use sc_network_common::config::NonDefaultSetConfig; +use sc_network::{ProtocolName, NetworkBlock}; +use sc_network_gossip::Network; + +use sp_tendermint::TendermintApi; + +use substrate_prometheus_endpoint::Registry; + +mod validators; + +pub(crate) mod tendermint; +pub use tendermint::TendermintImport; + +mod block_import; +pub use block_import::TendermintSelectChain; + +pub(crate) mod authority; +pub use authority::TendermintAuthority; + +pub const CONSENSUS_ID: 
[u8; 4] = *b"tend"; +pub(crate) const KEY_TYPE_ID: KeyTypeId = KeyTypeId(CONSENSUS_ID); + +const PROTOCOL_NAME: &str = "/tendermint/1"; + +pub fn protocol_name>(genesis: Hash, fork: Option<&str>) -> ProtocolName { + let mut name = format!("/{}", hex::encode(genesis.as_ref())); + if let Some(fork) = fork { + name += &format!("/{}", fork); + } + name += PROTOCOL_NAME; + name.into() +} + +pub fn set_config(protocol: ProtocolName, block_size: u64) -> NonDefaultSetConfig { + // The extra 512 bytes is for the additional data part of Tendermint + // Even with BLS, that should just be 161 bytes in the worst case, for a perfect messaging scheme + // While 256 bytes would suffice there, it's unknown if any LibP2P overhead exists nor if + // anything here will be perfect. Considering this is miniscule compared to the block size, it's + // better safe than sorry. + let mut cfg = NonDefaultSetConfig::new(protocol, block_size + 512); + cfg.allow_non_reserved(25, 25); + cfg +} + +/// Trait consolidating all generics required by sc_tendermint for processing. +pub trait TendermintClient: Send + Sync + 'static { + const PROPOSED_BLOCK_SIZE_LIMIT: usize; + const BLOCK_PROCESSING_TIME_IN_SECONDS: u32; + const LATENCY_TIME_IN_SECONDS: u32; + + type Block: Block; + type Backend: Backend + 'static; + + /// TransactionFor + type BackendTransaction: Send + Sync + 'static; + /// StateBackendFor + type StateBackend: StateBackend< + <::Header as Header>::Hashing, + Transaction = Self::BackendTransaction, + >; + // Client::Api + type Api: ApiExt + + BlockBuilderApi + + TendermintApi; + type Client: Send + + Sync + + HeaderBackend + + BlockBackend + + BlockImport + + Finalizer + + BlockchainEvents + + ProvideRuntimeApi + + 'static; +} + +/// Trait implementable on firm types to automatically provide a full TendermintClient impl. +pub trait TendermintClientMinimal: Send + Sync + 'static { + const PROPOSED_BLOCK_SIZE_LIMIT: usize; + const BLOCK_PROCESSING_TIME_IN_SECONDS: u32; + const LATENCY_TIME_IN_SECONDS: u32; + + type Block: Block; + type Backend: Backend + 'static; + type Api: ApiExt + BlockBuilderApi + TendermintApi; + type Client: Send + + Sync + + HeaderBackend + + BlockBackend + + BlockImport> + + Finalizer + + BlockchainEvents + + ProvideRuntimeApi + + 'static; +} + +impl TendermintClient for T +where + >::Api: + BlockBuilderApi + TendermintApi, + TransactionFor: Send + Sync + 'static, +{ + const PROPOSED_BLOCK_SIZE_LIMIT: usize = T::PROPOSED_BLOCK_SIZE_LIMIT; + const BLOCK_PROCESSING_TIME_IN_SECONDS: u32 = T::BLOCK_PROCESSING_TIME_IN_SECONDS; + const LATENCY_TIME_IN_SECONDS: u32 = T::LATENCY_TIME_IN_SECONDS; + + type Block = T::Block; + type Backend = T::Backend; + + type BackendTransaction = TransactionFor; + type StateBackend = StateBackendFor; + type Api = >::Api; + type Client = T::Client; +} + +/// Trait consolidating additional generics required by sc_tendermint for authoring. +pub trait TendermintValidator: TendermintClient { + type CIDP: CreateInherentDataProviders + 'static; + type Environment: Send + Sync + Environment + 'static; + + type Network: Clone + + Send + + Sync + + Network + + NetworkBlock<::Hash, <::Header as Header>::Number> + + 'static; +} + +pub type TendermintImportQueue = BasicQueue; + +/// Create an import queue, additionally returning the Tendermint Import object iself, enabling +/// creating an author later as well. 
+pub fn import_queue( + spawner: &impl sp_core::traits::SpawnEssentialNamed, + client: Arc, + registry: Option<&Registry>, +) -> (TendermintImport, TendermintImportQueue) +where + Arc: BlockImport, + as BlockImport>::Error: Into, +{ + let import = TendermintImport::::new(client); + + let boxed = Box::new(import.clone()); + // Use None for the justification importer since justifications always come with blocks + // Therefore, they're never imported after the fact, which is what mandates an importer + let queue = || BasicQueue::new(import.clone(), boxed.clone(), None, spawner, registry); + + *futures::executor::block_on(import.queue.write()) = Some(queue()); + (import.clone(), queue()) +} diff --git a/substrate/tendermint/client/src/tendermint.rs b/substrate/tendermint/client/src/tendermint.rs new file mode 100644 index 00000000..6e1b6f9e --- /dev/null +++ b/substrate/tendermint/client/src/tendermint.rs @@ -0,0 +1,247 @@ +use std::{ + sync::{Arc, RwLock}, + collections::HashSet, +}; + +use log::{debug, warn}; + +use tokio::sync::{Mutex, RwLock as AsyncRwLock}; + +use sp_core::Decode; +use sp_runtime::{ + traits::{Header, Block}, + Justification, +}; +use sp_inherents::{InherentData, InherentDataProvider, CreateInherentDataProviders}; +use sp_blockchain::HeaderBackend; +use sp_api::{BlockId, ProvideRuntimeApi}; + +use sp_consensus::Error; +use sc_consensus::{ForkChoiceStrategy, BlockImportParams}; + +use sc_block_builder::BlockBuilderApi; + +use tendermint_machine::ext::{BlockError, Commit, Network}; + +use crate::{ + CONSENSUS_ID, TendermintClient, TendermintValidator, validators::TendermintValidators, + TendermintImportQueue, authority::TendermintAuthority, +}; + +type InstantiatedTendermintImportQueue = TendermintImportQueue< + ::Block, + ::BackendTransaction, +>; + +/// Tendermint import handler. +pub struct TendermintImport { + // Lock ensuring only one block is imported at a time + pub(crate) sync_lock: Arc>, + + pub(crate) validators: TendermintValidators, + + pub(crate) providers: Arc>>, + pub(crate) importing_block: Arc::Hash>>>, + + // A set of blocks which we're willing to recheck + // We reject blocks with invalid inherents, yet inherents can be fatally flawed or solely + // perceived as flawed + // If we solely perceive them as flawed, we mark them as eligible for being checked again. 
Then, + // if they're proposed again, we see if our perception has changed + pub(crate) recheck: Arc::Hash>>>, + + pub(crate) client: Arc, + pub(crate) queue: Arc>>>, +} + +impl Clone for TendermintImport { + fn clone(&self) -> Self { + TendermintImport { + sync_lock: self.sync_lock.clone(), + + validators: self.validators.clone(), + + providers: self.providers.clone(), + importing_block: self.importing_block.clone(), + recheck: self.recheck.clone(), + + client: self.client.clone(), + queue: self.queue.clone(), + } + } +} + +impl TendermintImport { + pub(crate) fn new(client: Arc) -> TendermintImport { + TendermintImport { + sync_lock: Arc::new(Mutex::new(())), + + validators: TendermintValidators::new(client.clone()), + + providers: Arc::new(AsyncRwLock::new(None)), + importing_block: Arc::new(RwLock::new(None)), + recheck: Arc::new(RwLock::new(HashSet::new())), + + client, + queue: Arc::new(AsyncRwLock::new(None)), + } + } + + pub(crate) async fn inherent_data(&self, parent: ::Hash) -> InherentData { + match self + .providers + .read() + .await + .as_ref() + .unwrap() + .create_inherent_data_providers(parent, ()) + .await + { + Ok(providers) => match providers.create_inherent_data() { + Ok(data) => Some(data), + Err(err) => { + warn!(target: "tendermint", "Failed to create inherent data: {}", err); + None + } + }, + Err(err) => { + warn!(target: "tendermint", "Failed to create inherent data providers: {}", err); + None + } + } + .unwrap_or_else(InherentData::new) + } + + async fn check_inherents( + &self, + hash: ::Hash, + block: T::Block, + ) -> Result<(), Error> { + let inherent_data = self.inherent_data(*block.header().parent_hash()).await; + let err = self + .client + .runtime_api() + .check_inherents(&BlockId::Hash(self.client.info().finalized_hash), block, inherent_data) + .map_err(|_| Error::Other(BlockError::Fatal.into()))?; + + if err.ok() { + self.recheck.write().unwrap().remove(&hash); + Ok(()) + } else if err.fatal_error() { + Err(Error::Other(BlockError::Fatal.into())) + } else { + debug!(target: "tendermint", "Proposed block has temporally wrong inherents"); + self.recheck.write().unwrap().insert(hash); + Err(Error::Other(BlockError::Temporal.into())) + } + } + + // Ensure this is part of a sequential import + pub(crate) fn verify_order( + &self, + parent: ::Hash, + number: <::Header as Header>::Number, + ) -> Result<(), Error> { + let info = self.client.info(); + if (info.finalized_hash != parent) || ((info.finalized_number + 1u16.into()) != number) { + Err(Error::Other("non-sequential import".into()))?; + } + Ok(()) + } + + // Do not allow blocks from the traditional network to be broadcast + // Only allow blocks from Tendermint + // Tendermint's propose message could be rewritten as a seal OR Tendermint could produce blocks + // which this checks the proposer slot for, and then tells the Tendermint machine + // While those would be more seamless with Substrate, there's no actual benefit to doing so + fn verify_origin(&self, hash: ::Hash) -> Result<(), Error> { + if let Some(tm_hash) = *self.importing_block.read().unwrap() { + if hash == tm_hash { + return Ok(()); + } + } + Err(Error::Other("block created outside of tendermint".into())) + } + + // Errors if the justification isn't valid + pub(crate) fn verify_justification( + &self, + hash: ::Hash, + justification: &Justification, + ) -> Result<(), Error> { + if justification.0 != CONSENSUS_ID { + Err(Error::InvalidJustification)?; + } + + let commit: Commit> = + Commit::decode(&mut justification.1.as_ref()).map_err(|_| 
Error::InvalidJustification)?; + // Create a stubbed TendermintAuthority so we can verify the commit + if !TendermintAuthority::stub(self.clone()).verify_commit(hash, &commit) { + Err(Error::InvalidJustification)?; + } + Ok(()) + } + + // Verifies the justifications aren't malformed, not that the block is justified + // Errors if justifications is neither empty nor a single Tendermint justification + // If the block does have a justification, finalized will be set to true + fn verify_justifications( + &self, + block: &mut BlockImportParams, + ) -> Result<(), Error> { + if !block.finalized { + if let Some(justifications) = &block.justifications { + let mut iter = justifications.iter(); + let next = iter.next(); + if next.is_none() || iter.next().is_some() { + Err(Error::InvalidJustification)?; + } + self.verify_justification(block.header.hash(), next.unwrap())?; + block.finalized = true; + } + } + Ok(()) + } + + pub(crate) async fn check( + &self, + block: &mut BlockImportParams, + ) -> Result<(), Error> { + if block.finalized { + if block.fork_choice != Some(ForkChoiceStrategy::Custom(false)) { + // Since we alw1ays set the fork choice, this means something else marked the block as + // finalized, which shouldn't be possible. Ensuring nothing else is setting blocks as + // finalized helps ensure our security + panic!("block was finalized despite not setting the fork choice"); + } + return Ok(()); + } + + // Set the block as a worse choice + block.fork_choice = Some(ForkChoiceStrategy::Custom(false)); + + self.verify_order(*block.header.parent_hash(), *block.header.number())?; + self.verify_justifications(block)?; + + // If the block wasn't finalized, verify the origin and validity of its inherents + if !block.finalized { + let hash = block.header.hash(); + self.verify_origin(hash)?; + self + .check_inherents(hash, T::Block::new(block.header.clone(), block.body.clone().unwrap())) + .await?; + } + + // Additionally check these fields are empty + // They *should* be unused, so requiring their emptiness prevents malleability and ensures + // nothing slips through + if !block.post_digests.is_empty() { + Err(Error::Other("post-digests included".into()))?; + } + if !block.auxiliary.is_empty() { + Err(Error::Other("auxiliary included".into()))?; + } + + Ok(()) + } +} diff --git a/substrate/tendermint/client/src/validators.rs b/substrate/tendermint/client/src/validators.rs new file mode 100644 index 00000000..d60f179e --- /dev/null +++ b/substrate/tendermint/client/src/validators.rs @@ -0,0 +1,190 @@ +use core::ops::Deref; +use std::sync::{Arc, RwLock}; + +use async_trait::async_trait; + +use sp_core::Decode; +use sp_application_crypto::{ + RuntimePublic as PublicTrait, + sr25519::{Public, Signature}, +}; +use sp_keystore::CryptoStore; + +use sp_staking::SessionIndex; +use sp_api::{BlockId, ProvideRuntimeApi}; + +use sc_client_api::HeaderBackend; + +use tendermint_machine::ext::{BlockNumber, RoundNumber, Weights, Signer, SignatureScheme}; + +use sp_tendermint::TendermintApi; + +use crate::{KEY_TYPE_ID, TendermintClient}; + +struct TendermintValidatorsStruct { + session: SessionIndex, + + total_weight: u64, + weights: Vec, + + lookup: Vec, +} + +impl TendermintValidatorsStruct { + fn from_module(client: &Arc) -> Self { + let last = client.info().finalized_hash; + let api = client.runtime_api(); + let session = api.current_session(&BlockId::Hash(last)).unwrap(); + let validators = api.validators(&BlockId::Hash(last)).unwrap(); + + Self { + session, + + // TODO + total_weight: 
validators.len().try_into().unwrap(), + weights: vec![1; validators.len()], + + lookup: validators, + } + } +} + +// Wrap every access of the validators struct in something which forces calling refresh +struct Refresh { + client: Arc, + _refresh: Arc>, +} + +impl Refresh { + // If the session has changed, re-create the struct with the data on it + fn refresh(&self) { + let session = self._refresh.read().unwrap().session; + let current_block = BlockId::Hash(self.client.info().finalized_hash); + if session != self.client.runtime_api().current_session(¤t_block).unwrap() { + *self._refresh.write().unwrap() = TendermintValidatorsStruct::from_module::(&self.client); + } + } +} + +impl Deref for Refresh { + type Target = RwLock; + fn deref(&self) -> &RwLock { + self.refresh(); + &self._refresh + } +} + +/// Tendermint validators observer, providing data on the active validators. +pub struct TendermintValidators(Refresh); +impl Clone for TendermintValidators { + fn clone(&self) -> Self { + Self(Refresh { _refresh: self.0._refresh.clone(), client: self.0.client.clone() }) + } +} + +impl TendermintValidators { + pub(crate) fn new(client: Arc) -> TendermintValidators { + TendermintValidators(Refresh { + _refresh: Arc::new(RwLock::new(TendermintValidatorsStruct::from_module::(&client))), + client, + }) + } +} + +pub struct TendermintSigner( + pub(crate) Arc, + pub(crate) TendermintValidators, +); + +impl Clone for TendermintSigner { + fn clone(&self) -> Self { + Self(self.0.clone(), self.1.clone()) + } +} + +impl TendermintSigner { + async fn get_public_key(&self) -> Public { + let pubs = self.0.sr25519_public_keys(KEY_TYPE_ID).await; + if pubs.is_empty() { + self.0.sr25519_generate_new(KEY_TYPE_ID, None).await.unwrap() + } else { + pubs[0] + } + } +} + +#[async_trait] +impl Signer for TendermintSigner { + type ValidatorId = u16; + type Signature = Signature; + + async fn validator_id(&self) -> Option { + let key = self.get_public_key().await; + for (i, k) in (*self.1 .0).read().unwrap().lookup.iter().enumerate() { + if k == &key { + return Some(u16::try_from(i).unwrap()); + } + } + None + } + + async fn sign(&self, msg: &[u8]) -> Signature { + Signature::decode( + &mut self + .0 + .sign_with(KEY_TYPE_ID, &self.get_public_key().await.into(), msg) + .await + .unwrap() + .unwrap() + .as_ref(), + ) + .unwrap() + } +} + +impl SignatureScheme for TendermintValidators { + type ValidatorId = u16; + type Signature = Signature; + type AggregateSignature = Vec; + type Signer = TendermintSigner; + + fn verify(&self, validator: u16, msg: &[u8], sig: &Signature) -> bool { + self.0.read().unwrap().lookup[usize::try_from(validator).unwrap()].verify(&msg, sig) + } + + fn aggregate(sigs: &[Signature]) -> Vec { + sigs.to_vec() + } + + fn verify_aggregate(&self, validators: &[u16], msg: &[u8], sigs: &Vec) -> bool { + if validators.len() != sigs.len() { + return false; + } + for (v, sig) in validators.iter().zip(sigs.iter()) { + if !self.verify(*v, msg, sig) { + return false; + } + } + true + } +} + +impl Weights for TendermintValidators { + type ValidatorId = u16; + + fn total_weight(&self) -> u64 { + self.0.read().unwrap().total_weight + } + + fn weight(&self, id: u16) -> u64 { + self.0.read().unwrap().weights[usize::try_from(id).unwrap()] + } + + // TODO: https://github.com/serai-dex/serai/issues/159 + fn proposer(&self, number: BlockNumber, round: RoundNumber) -> u16 { + u16::try_from( + (number.0 + u64::from(round.0)) % u64::try_from(self.0.read().unwrap().lookup.len()).unwrap(), + ) + .unwrap() + } +} diff --git 
a/substrate/tendermint/machine/Cargo.toml b/substrate/tendermint/machine/Cargo.toml new file mode 100644 index 00000000..e918a88f --- /dev/null +++ b/substrate/tendermint/machine/Cargo.toml @@ -0,0 +1,24 @@ +[package] +name = "tendermint-machine" +version = "0.1.0" +description = "An implementation of the Tendermint state machine in Rust" +license = "MIT" +repository = "https://github.com/serai-dex/serai/tree/develop/substrate/tendermint/machine" +authors = ["Luke Parker "] +edition = "2021" + +[dependencies] +async-trait = "0.1" +thiserror = "1" + +log = "0.4" + +parity-scale-codec = { version = "3.2", features = ["derive"] } + +futures = "0.3" +tokio = { version = "1", features = ["macros", "sync", "time", "rt"] } + +sp-runtime = { git = "https://github.com/serai-dex/substrate", version = "6.0.0", optional = true } + +[features] +substrate = ["sp-runtime"] diff --git a/substrate/tendermint/machine/LICENSE b/substrate/tendermint/machine/LICENSE new file mode 100644 index 00000000..f05b748b --- /dev/null +++ b/substrate/tendermint/machine/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2022 Luke Parker + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/substrate/tendermint/machine/README.md b/substrate/tendermint/machine/README.md new file mode 100644 index 00000000..ac497bb9 --- /dev/null +++ b/substrate/tendermint/machine/README.md @@ -0,0 +1,62 @@ +# Tendermint + +An implementation of the Tendermint state machine in Rust. + +This is solely the state machine, intended to be mapped to any arbitrary system. +It supports an arbitrary signature scheme, weighting, and block definition +accordingly. It is not intended to work with the Cosmos SDK, solely to be an +implementation of the [academic protocol](https://arxiv.org/pdf/1807.04938.pdf). + +### Caveats + +- Only SCALE serialization is supported currently. Ideally, everything from + SCALE to borsh to bincode would be supported. SCALE was chosen due to this + being under Serai, which uses Substrate, which uses SCALE. Accordingly, when + deciding which of the three (mutually incompatible) options to support... + +- tokio is explicitly used for the asynchronous task which runs the Tendermint + machine. Ideally, `futures-rs` would be used enabling any async runtime to be + used. + +- It is possible for `add_block` to be called on a block which failed (or never + went through in the first place) validation. This is a break from the paper + which is accepted here. This is for two reasons. + + 1) Serai needing this functionality. 
+ 2) If a block is committed which is invalid, either there's a malicious + majority now defining consensus OR the local node is malicious by virtue of + being faulty. Considering how either represents a fatal circumstance, + except with regards to system like Serai which have their own logic for + pseudo-valid blocks, it is accepted as a possible behavior with the caveat + any consumers must be aware of it. No machine will vote nor precommit to a + block it considers invalid, so for a network with an honest majority, this + is a non-issue. + +### Paper + +The [paper](https://arxiv.org/abs/1807.04938) describes the algorithm with +pseudocode on page 6. This pseudocode isn't directly implementable, nor does it +specify faulty behavior. Instead, it's solely a series of conditions which +trigger events in order to successfully achieve consensus. + +The included pseudocode segments can be minimally described as follows: + +``` +01-09 Init +10-10 StartRound(0) +11-21 StartRound +22-27 Fresh proposal +28-33 Proposal building off a valid round with prevotes +34-35 2f+1 prevote -> schedule timeout prevote +36-43 First proposal with prevotes -> precommit Some +44-46 2f+1 nil prevote -> precommit nil +47-48 2f+1 precommit -> schedule timeout precommit +49-54 First proposal with precommits -> finalize +55-56 f+1 round > local round, jump +57-60 on timeout propose +61-64 on timeout prevote +65-67 on timeout precommit +``` + +The corresponding Rust code implementing these tasks are marked with their +related line numbers. diff --git a/substrate/tendermint/machine/src/block.rs b/substrate/tendermint/machine/src/block.rs new file mode 100644 index 00000000..92923ae9 --- /dev/null +++ b/substrate/tendermint/machine/src/block.rs @@ -0,0 +1,143 @@ +use std::{ + sync::Arc, + collections::{HashSet, HashMap}, +}; + +use crate::{ + time::CanonicalInstant, + ext::{RoundNumber, BlockNumber, Block, Network}, + round::RoundData, + message_log::MessageLog, + Step, Data, DataFor, Message, MessageFor, +}; + +pub(crate) struct BlockData { + pub(crate) number: BlockNumber, + pub(crate) validator_id: Option, + pub(crate) proposal: N::Block, + + pub(crate) log: MessageLog, + pub(crate) slashes: HashSet, + // We track the end times of each round for two reasons: + // 1) Knowing the start time of the next round + // 2) Validating precommits, which include the end time of the round which produced it + // This HashMap contains the end time of the round we're currently in and every round prior + pub(crate) end_time: HashMap, + + pub(crate) round: Option>, + + pub(crate) locked: Option<(RoundNumber, ::Id)>, + pub(crate) valid: Option<(RoundNumber, N::Block)>, +} + +impl BlockData { + pub(crate) fn new( + weights: Arc, + number: BlockNumber, + validator_id: Option, + proposal: N::Block, + ) -> BlockData { + BlockData { + number, + validator_id, + proposal, + + log: MessageLog::new(weights), + slashes: HashSet::new(), + end_time: HashMap::new(), + + // The caller of BlockData::new is expected to be populated after by the caller + round: None, + + locked: None, + valid: None, + } + } + + pub(crate) fn round(&self) -> &RoundData { + self.round.as_ref().unwrap() + } + + pub(crate) fn round_mut(&mut self) -> &mut RoundData { + self.round.as_mut().unwrap() + } + + // Populate the end time up to the specified round + // This is generally used when moving to the next round, where this will only populate one time, + // yet is also used when jumping rounds (when 33% of the validators are on a round ahead of us) + pub(crate) fn 
populate_end_time(&mut self, round: RoundNumber) { + // Starts from the current round since we only start the current round once we have have all + // the prior time data + for r in (self.round().number.0 + 1) ..= round.0 { + self.end_time.insert( + RoundNumber(r), + RoundData::::new(RoundNumber(r), self.end_time[&RoundNumber(r - 1)]).end_time(), + ); + } + } + + // Start a new round. Optionally takes in the time for when this is the first round, and the time + // isn't simply the time of the prior round (yet rather the prior block). Returns the proposal + // data, if we are the proposer. + pub(crate) fn new_round( + &mut self, + round: RoundNumber, + proposer: N::ValidatorId, + time: Option, + ) -> Option> { + debug_assert_eq!(round.0 == 0, time.is_some()); + + // If this is the first round, we don't have a prior round's end time to use as the start + // We use the passed in time instead + // If this isn't the first round, ensure we have the prior round's end time by populating the + // map with all rounds till this round + // This can happen we jump from round x to round x+n, where n != 1 + // The paper says to do so whenever you observe a sufficient amount of peers on a higher round + if round.0 != 0 { + self.populate_end_time(round); + } + + // 11-13 + self.round = Some(RoundData::::new( + round, + time.unwrap_or_else(|| self.end_time[&RoundNumber(round.0 - 1)]), + )); + self.end_time.insert(round, self.round().end_time()); + + // 14-21 + if Some(proposer) == self.validator_id { + let (round, block) = if let Some((round, block)) = &self.valid { + (Some(*round), block.clone()) + } else { + (None, self.proposal.clone()) + }; + Some(Data::Proposal(round, block)) + } else { + self.round_mut().set_timeout(Step::Propose); + None + } + } + + // Transform Data into an actual Message, using the contextual data from this block + pub(crate) fn message(&mut self, data: DataFor) -> Option> { + debug_assert_eq!( + self.round().step, + match data.step() { + Step::Propose | Step::Prevote => Step::Propose, + Step::Precommit => Step::Prevote, + }, + ); + // Tendermint always sets the round's step to whatever it just broadcasted + // Consolidate all of those here to ensure they aren't missed by an oversight + // 27, 33, 41, 46, 60, 64 + self.round_mut().step = data.step(); + + // Only return a message to if we're actually a current validator + self.validator_id.map(|validator_id| Message { + sender: validator_id, + block: self.number, + round: self.round().number, + data, + }) + } +} diff --git a/substrate/tendermint/machine/src/ext.rs b/substrate/tendermint/machine/src/ext.rs new file mode 100644 index 00000000..daa684c3 --- /dev/null +++ b/substrate/tendermint/machine/src/ext.rs @@ -0,0 +1,274 @@ +use core::{hash::Hash, fmt::Debug}; +use std::{sync::Arc, collections::HashSet}; + +use async_trait::async_trait; +use thiserror::Error; + +use parity_scale_codec::{Encode, Decode}; + +use crate::{SignedMessageFor, commit_msg}; + +/// An alias for a series of traits required for a type to be usable as a validator ID, +/// automatically implemented for all types satisfying those traits. +pub trait ValidatorId: + Send + Sync + Clone + Copy + PartialEq + Eq + Hash + Debug + Encode + Decode +{ +} +impl ValidatorId + for V +{ +} + +/// An alias for a series of traits required for a type to be usable as a signature, +/// automatically implemented for all types satisfying those traits. 
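+/// For instance, the `[u8; 32]` signature type used by this crate's tests satisfies these bounds.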
+pub trait Signature: Send + Sync + Clone + PartialEq + Debug + Encode + Decode {} +impl Signature for S {} + +// Type aliases which are distinct according to the type system + +/// A struct containing a Block Number, wrapped to have a distinct type. +#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode)] +pub struct BlockNumber(pub u64); +/// A struct containing a round number, wrapped to have a distinct type. +#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode)] +pub struct RoundNumber(pub u32); + +/// A signer for a validator. +#[async_trait] +pub trait Signer: Send + Sync { + // Type used to identify validators. + type ValidatorId: ValidatorId; + /// Signature type. + type Signature: Signature; + + /// Returns the validator's current ID. Returns None if they aren't a current validator. + async fn validator_id(&self) -> Option; + /// Sign a signature with the current validator's private key. + async fn sign(&self, msg: &[u8]) -> Self::Signature; +} + +#[async_trait] +impl Signer for Arc { + type ValidatorId = S::ValidatorId; + type Signature = S::Signature; + + async fn validator_id(&self) -> Option { + self.as_ref().validator_id().await + } + + async fn sign(&self, msg: &[u8]) -> Self::Signature { + self.as_ref().sign(msg).await + } +} + +/// A signature scheme used by validators. +pub trait SignatureScheme: Send + Sync { + // Type used to identify validators. + type ValidatorId: ValidatorId; + /// Signature type. + type Signature: Signature; + /// Type representing an aggregate signature. This would presumably be a BLS signature, + /// yet even with Schnorr signatures + /// [half-aggregation is possible](https://eprint.iacr.org/2021/350). + /// It could even be a threshold signature scheme, though that's currently unexpected. + type AggregateSignature: Signature; + + /// Type representing a signer of this scheme. + type Signer: Signer; + + /// Verify a signature from the validator in question. + #[must_use] + fn verify(&self, validator: Self::ValidatorId, msg: &[u8], sig: &Self::Signature) -> bool; + + /// Aggregate signatures. + fn aggregate(sigs: &[Self::Signature]) -> Self::AggregateSignature; + /// Verify an aggregate signature for the list of signers. + #[must_use] + fn verify_aggregate( + &self, + signers: &[Self::ValidatorId], + msg: &[u8], + sig: &Self::AggregateSignature, + ) -> bool; +} + +impl SignatureScheme for Arc { + type ValidatorId = S::ValidatorId; + type Signature = S::Signature; + type AggregateSignature = S::AggregateSignature; + type Signer = S::Signer; + + fn verify(&self, validator: Self::ValidatorId, msg: &[u8], sig: &Self::Signature) -> bool { + self.as_ref().verify(validator, msg, sig) + } + + fn aggregate(sigs: &[Self::Signature]) -> Self::AggregateSignature { + S::aggregate(sigs) + } + + #[must_use] + fn verify_aggregate( + &self, + signers: &[Self::ValidatorId], + msg: &[u8], + sig: &Self::AggregateSignature, + ) -> bool { + self.as_ref().verify_aggregate(signers, msg, sig) + } +} + +/// A commit for a specific block. The list of validators have weight exceeding the threshold for +/// a valid commit. +#[derive(Clone, PartialEq, Debug, Encode, Decode)] +pub struct Commit { + /// End time of the round which created this commit, used as the start time of the next block. + pub end_time: u64, + /// Validators participating in the signature. + pub validators: Vec, + /// Aggregate signature. + pub signature: S::AggregateSignature, +} + +/// Weights for the validators present. 
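+/// With the default threshold functions below, a total weight of 4 (e.g. four validators of
+/// weight 1) yields a BFT threshold of ((4 * 2) / 3) + 1 = 3 and a fault threshold of
+/// (4 - 3) + 1 = 2.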
+pub trait Weights: Send + Sync { + type ValidatorId: ValidatorId; + + /// Total weight of all validators. + fn total_weight(&self) -> u64; + /// Weight for a specific validator. + fn weight(&self, validator: Self::ValidatorId) -> u64; + /// Threshold needed for BFT consensus. + fn threshold(&self) -> u64 { + ((self.total_weight() * 2) / 3) + 1 + } + /// Threshold preventing BFT consensus. + fn fault_thresold(&self) -> u64 { + (self.total_weight() - self.threshold()) + 1 + } + + /// Weighted round robin function. + fn proposer(&self, block: BlockNumber, round: RoundNumber) -> Self::ValidatorId; +} + +impl Weights for Arc { + type ValidatorId = W::ValidatorId; + + fn total_weight(&self) -> u64 { + self.as_ref().total_weight() + } + + fn weight(&self, validator: Self::ValidatorId) -> u64 { + self.as_ref().weight(validator) + } + + fn proposer(&self, block: BlockNumber, round: RoundNumber) -> Self::ValidatorId { + self.as_ref().proposer(block, round) + } +} + +/// Simplified error enum representing a block's validity. +#[derive(Clone, Copy, PartialEq, Eq, Debug, Error, Encode, Decode)] +pub enum BlockError { + /// Malformed block which is wholly invalid. + #[error("invalid block")] + Fatal, + /// Valid block by syntax, with semantics which may or may not be valid yet are locally + /// considered invalid. If a block fails to validate with this, a slash will not be triggered. + #[error("invalid block under local view")] + Temporal, +} + +/// Trait representing a Block. +pub trait Block: Send + Sync + Clone + PartialEq + Debug + Encode + Decode { + // Type used to identify blocks. Presumably a cryptographic hash of the block. + type Id: Send + Sync + Copy + Clone + PartialEq + AsRef<[u8]> + Debug + Encode + Decode; + + /// Return the deterministic, unique ID for this block. + fn id(&self) -> Self::Id; +} + +#[cfg(feature = "substrate")] +impl Block for B { + type Id = B::Hash; + fn id(&self) -> B::Hash { + self.hash() + } +} + +/// Trait representing the distributed system Tendermint is providing consensus over. +#[async_trait] +pub trait Network: Send + Sync { + // Type used to identify validators. + type ValidatorId: ValidatorId; + /// Signature scheme used by validators. + type SignatureScheme: SignatureScheme; + /// Object representing the weights of validators. + type Weights: Weights; + /// Type used for ordered blocks of information. + type Block: Block; + + /// Maximum block processing time in seconds. This should include both the actual processing time + /// and the time to download the block. + const BLOCK_PROCESSING_TIME: u32; + /// Network latency time in seconds. + const LATENCY_TIME: u32; + + /// The block time is defined as the processing time plus three times the latency. + fn block_time() -> u32 { + Self::BLOCK_PROCESSING_TIME + (3 * Self::LATENCY_TIME) + } + + /// Return a handle on the signer in use, usable for the entire lifetime of the machine. + fn signer(&self) -> ::Signer; + /// Return a handle on the signing scheme in use, usable for the entire lifetime of the machine. + fn signature_scheme(&self) -> Self::SignatureScheme; + /// Return a handle on the validators' weights, usable for the entire lifetime of the machine. + fn weights(&self) -> Self::Weights; + + /// Verify a commit for a given block. Intended for use when syncing or when not an active + /// validator. 
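+ /// The provided implementation rejects commits which list a validator more than once, then
+ /// checks the aggregate signature over `commit_msg(end_time, id)` and that the listed
+ /// validators' combined weight meets the threshold.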
+ #[must_use] + fn verify_commit( + &self, + id: ::Id, + commit: &Commit, + ) -> bool { + if commit.validators.iter().collect::>().len() != commit.validators.len() { + return false; + } + + if !self.signature_scheme().verify_aggregate( + &commit.validators, + &commit_msg(commit.end_time, id.as_ref()), + &commit.signature, + ) { + return false; + } + + let weights = self.weights(); + commit.validators.iter().map(|v| weights.weight(*v)).sum::() >= weights.threshold() + } + + /// Broadcast a message to the other validators. If authenticated channels have already been + /// established, this will double-authenticate. Switching to unauthenticated channels in a system + /// already providing authenticated channels is not recommended as this is a minor, temporal + /// inefficiency while downgrading channels may have wider implications. + async fn broadcast(&mut self, msg: SignedMessageFor); + + /// Trigger a slash for the validator in question who was definitively malicious. + /// The exact process of triggering a slash is undefined and left to the network as a whole. + async fn slash(&mut self, validator: Self::ValidatorId); + + /// Validate a block. + async fn validate(&mut self, block: &Self::Block) -> Result<(), BlockError>; + /// Add a block, returning the proposal for the next one. It's possible a block, which was never + /// validated or even failed validation, may be passed here if a supermajority of validators did + /// consider it valid and created a commit for it. This deviates from the paper which will have a + /// local node refuse to decide on a block it considers invalid. This library acknowledges the + /// network did decide on it, leaving handling of it to the network, and outside of this scope. + async fn add_block( + &mut self, + block: Self::Block, + commit: Commit, + ) -> Self::Block; +} diff --git a/substrate/tendermint/machine/src/lib.rs b/substrate/tendermint/machine/src/lib.rs new file mode 100644 index 00000000..487f4cbf --- /dev/null +++ b/substrate/tendermint/machine/src/lib.rs @@ -0,0 +1,638 @@ +use core::fmt::Debug; + +use std::{ + sync::Arc, + time::{SystemTime, Instant, Duration}, + collections::VecDeque, +}; + +use log::debug; + +use parity_scale_codec::{Encode, Decode}; + +use futures::{ + FutureExt, StreamExt, + future::{self, Fuse}, + channel::mpsc, +}; +use tokio::time::sleep; + +mod time; +use time::{sys_time, CanonicalInstant}; + +mod round; + +mod block; +use block::BlockData; + +pub(crate) mod message_log; + +/// Traits and types of the external network being integrated with to provide consensus over. +pub mod ext; +use ext::*; + +pub(crate) fn commit_msg(end_time: u64, id: &[u8]) -> Vec { + [&end_time.to_le_bytes(), id].concat().to_vec() +} + +#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug, Encode, Decode)] +enum Step { + Propose, + Prevote, + Precommit, +} + +#[derive(Clone, Debug, Encode, Decode)] +enum Data { + Proposal(Option, B), + Prevote(Option), + Precommit(Option<(B::Id, S)>), +} + +impl PartialEq for Data { + fn eq(&self, other: &Data) -> bool { + match (self, other) { + (Data::Proposal(valid_round, block), Data::Proposal(valid_round2, block2)) => { + (valid_round == valid_round2) && (block == block2) + } + (Data::Prevote(id), Data::Prevote(id2)) => id == id2, + (Data::Precommit(None), Data::Precommit(None)) => true, + (Data::Precommit(Some((id, _))), Data::Precommit(Some((id2, _)))) => id == id2, + _ => false, + } + } +} + +impl Data { + fn step(&self) -> Step { + match self { + Data::Proposal(..) => Step::Propose, + Data::Prevote(..) 
=> Step::Prevote, + Data::Precommit(..) => Step::Precommit, + } + } +} + +#[derive(Clone, PartialEq, Debug, Encode, Decode)] +struct Message { + sender: V, + + block: BlockNumber, + round: RoundNumber, + + data: Data, +} + +/// A signed Tendermint consensus message to be broadcast to the other validators. +#[derive(Clone, PartialEq, Debug, Encode, Decode)] +pub struct SignedMessage { + msg: Message, + sig: S, +} + +impl SignedMessage { + /// Number of the block this message is attempting to add to the chain. + pub fn block(&self) -> BlockNumber { + self.msg.block + } + + #[must_use] + pub fn verify_signature>( + &self, + signer: &Scheme, + ) -> bool { + signer.verify(self.msg.sender, &self.msg.encode(), &self.sig) + } +} + +#[derive(Clone, Copy, PartialEq, Eq, Debug)] +enum TendermintError { + Malicious(V), + Temporal, +} + +// Type aliases to abstract over generic hell +pub(crate) type DataFor = + Data<::Block, <::SignatureScheme as SignatureScheme>::Signature>; +pub(crate) type MessageFor = Message< + ::ValidatorId, + ::Block, + <::SignatureScheme as SignatureScheme>::Signature, +>; +/// Type alias to the SignedMessage type for a given Network +pub type SignedMessageFor = SignedMessage< + ::ValidatorId, + ::Block, + <::SignatureScheme as SignatureScheme>::Signature, +>; + +/// A machine executing the Tendermint protocol. +pub struct TendermintMachine { + network: N, + signer: ::Signer, + validators: N::SignatureScheme, + weights: Arc, + + queue: VecDeque>, + msg_recv: mpsc::UnboundedReceiver>, + step_recv: mpsc::UnboundedReceiver<(BlockNumber, Commit, N::Block)>, + + block: BlockData, +} + +pub type StepSender = mpsc::UnboundedSender<( + BlockNumber, + Commit<::SignatureScheme>, + ::Block, +)>; + +pub type MessageSender = mpsc::UnboundedSender>; + +/// A Tendermint machine and its channel to receive messages from the gossip layer over. +pub struct TendermintHandle { + /// Channel to trigger the machine to move to the next block. + /// Takes in the the previous block's commit, along with the new proposal. + pub step: StepSender, + /// Channel to send messages received from the P2P layer. + pub messages: MessageSender, + /// Tendermint machine to be run on an asynchronous task. + pub machine: TendermintMachine, +} + +impl TendermintMachine { + // Broadcast the given piece of data + // Tendermint messages always specify their block/round, yet Tendermint only ever broadcasts for + // the current block/round. Accordingly, instead of manually fetching those at every call-site, + // this function can simply pass the data to the block which can contextualize it + fn broadcast(&mut self, data: DataFor) { + if let Some(msg) = self.block.message(data) { + // Push it on to the queue. This is done so we only handle one message at a time, and so we + // can handle our own message before broadcasting it. That way, we fail before before + // becoming malicious + self.queue.push_back(msg); + } + } + + // Start a new round. 
Returns true if we were the proposer + fn round(&mut self, round: RoundNumber, time: Option) -> bool { + if let Some(data) = + self.block.new_round(round, self.weights.proposer(self.block.number, round), time) + { + self.broadcast(data); + true + } else { + false + } + } + + // 53-54 + async fn reset(&mut self, end_round: RoundNumber, proposal: N::Block) { + // Ensure we have the end time data for the last round + self.block.populate_end_time(end_round); + + // Sleep until this round ends + let round_end = self.block.end_time[&end_round]; + sleep(round_end.instant().saturating_duration_since(Instant::now())).await; + + // Clear our outbound message queue + self.queue = VecDeque::new(); + + // Create the new block + self.block = BlockData::new( + self.weights.clone(), + BlockNumber(self.block.number.0 + 1), + self.signer.validator_id().await, + proposal, + ); + + // Start the first round + self.round(RoundNumber(0), Some(round_end)); + } + + async fn reset_by_commit(&mut self, commit: Commit, proposal: N::Block) { + let mut round = self.block.round().number; + // If this commit is for a round we don't have, jump up to it + while self.block.end_time[&round].canonical() < commit.end_time { + round.0 += 1; + self.block.populate_end_time(round); + } + // If this commit is for a prior round, find it + while self.block.end_time[&round].canonical() > commit.end_time { + if round.0 == 0 { + panic!("commit isn't for this machine's next block"); + } + round.0 -= 1; + } + debug_assert_eq!(self.block.end_time[&round].canonical(), commit.end_time); + + self.reset(round, proposal).await; + } + + async fn slash(&mut self, validator: N::ValidatorId) { + if !self.block.slashes.contains(&validator) { + debug!(target: "tendermint", "Slashing validator {:?}", validator); + self.block.slashes.insert(validator); + self.network.slash(validator).await; + } + } + + /// Create a new Tendermint machine, from the specified point, with the specified block as the + /// one to propose next. This will return a channel to send messages from the gossip layer and + /// the machine itself. The machine should have `run` called from an asynchronous task. + #[allow(clippy::new_ret_no_self)] + pub async fn new( + network: N, + last_block: BlockNumber, + last_time: u64, + proposal: N::Block, + ) -> TendermintHandle { + let (msg_send, msg_recv) = mpsc::unbounded(); + let (step_send, step_recv) = mpsc::unbounded(); + TendermintHandle { + step: step_send, + messages: msg_send, + machine: { + let sys_time = sys_time(last_time); + // If the last block hasn't ended yet, sleep until it has + sleep(sys_time.duration_since(SystemTime::now()).unwrap_or(Duration::ZERO)).await; + + let signer = network.signer(); + let validators = network.signature_scheme(); + let weights = Arc::new(network.weights()); + let validator_id = signer.validator_id().await; + // 01-10 + let mut machine = TendermintMachine { + network, + signer, + validators, + weights: weights.clone(), + + queue: VecDeque::new(), + msg_recv, + step_recv, + + block: BlockData::new(weights, BlockNumber(last_block.0 + 1), validator_id, proposal), + }; + + // The end time of the last block is the start time for this one + // The Commit explicitly contains the end time, so loading the last commit will provide + // this. 
The only exception is for the genesis block, which doesn't have a commit + // Using the genesis time in place will cause this block to be created immediately + // after it, without the standard amount of separation (so their times will be + // equivalent or minimally offset) + // For callers wishing to avoid this, they should pass (0, GENESIS + N::block_time()) + machine.round(RoundNumber(0), Some(CanonicalInstant::new(last_time))); + machine + }, + } + } + + pub async fn run(mut self) { + loop { + // Also create a future for if the queue has a message + // Does not pop_front as if another message has higher priority, its future will be handled + // instead in this loop, and the popped value would be dropped with the next iteration + // While no other message has a higher priority right now, this is a safer practice + let mut queue_future = + if self.queue.is_empty() { Fuse::terminated() } else { future::ready(()).fuse() }; + + if let Some((broadcast, msg)) = futures::select_biased! { + // Handle a new block occuring externally (an external sync loop) + // Has the highest priority as it makes all other futures here irrelevant + msg = self.step_recv.next() => { + if let Some((block_number, commit, proposal)) = msg { + // Commit is for a block we've already moved past + if block_number != self.block.number { + continue; + } + self.reset_by_commit(commit, proposal).await; + None + } else { + break; + } + }, + + // Handle our messages + _ = queue_future => { + Some((true, self.queue.pop_front().unwrap())) + }, + + // Handle any timeouts + step = self.block.round().timeout_future().fuse() => { + // Remove the timeout so it doesn't persist, always being the selected future due to bias + // While this does enable the timeout to be entered again, the timeout setting code will + // never attempt to add a timeout after its timeout has expired + self.block.round_mut().timeouts.remove(&step); + // Only run if it's still the step in question + if self.block.round().step == step { + match step { + Step::Propose => { + // Slash the validator for not proposing when they should've + debug!(target: "tendermint", "Validator didn't propose when they should have"); + self.slash( + self.weights.proposer(self.block.number, self.block.round().number) + ).await; + self.broadcast(Data::Prevote(None)); + }, + Step::Prevote => self.broadcast(Data::Precommit(None)), + Step::Precommit => { + self.round(RoundNumber(self.block.round().number.0 + 1), None); + continue; + } + } + } + None + }, + + // Handle any received messages + msg = self.msg_recv.next() => { + if let Some(msg) = msg { + if !msg.verify_signature(&self.validators) { + continue; + } + Some((false, msg.msg)) + } else { + break; + } + } + } { + let res = self.message(msg.clone()).await; + if res.is_err() && broadcast { + panic!("honest node had invalid behavior"); + } + + match res { + Ok(None) => (), + Ok(Some(block)) => { + let mut validators = vec![]; + let mut sigs = vec![]; + // Get all precommits for this round + for (validator, msgs) in &self.block.log.log[&msg.round] { + if let Some(Data::Precommit(Some((id, sig)))) = msgs.get(&Step::Precommit) { + // If this precommit was for this block, include it + if id == &block.id() { + validators.push(*validator); + sigs.push(sig.clone()); + } + } + } + + let commit = Commit { + end_time: self.block.end_time[&msg.round].canonical(), + validators, + signature: N::SignatureScheme::aggregate(&sigs), + }; + debug_assert!(self.network.verify_commit(block.id(), &commit)); + + let proposal = 
self.network.add_block(block, commit).await; + self.reset(msg.round, proposal).await; + } + Err(TendermintError::Malicious(validator)) => self.slash(validator).await, + Err(TendermintError::Temporal) => (), + } + + if broadcast { + let sig = self.signer.sign(&msg.encode()).await; + self.network.broadcast(SignedMessage { msg, sig }).await; + } + } + } + } + + // Returns Ok(true) if this was a Precommit which had its signature validated + // Returns Ok(false) if it wasn't a Precommit or the signature wasn't validated yet + // Returns Err if the signature was invalid + fn verify_precommit_signature( + &self, + sender: N::ValidatorId, + round: RoundNumber, + data: &DataFor, + ) -> Result> { + if let Data::Precommit(Some((id, sig))) = data { + // Also verify the end_time of the commit + // Only perform this verification if we already have the end_time + // Else, there's a DoS where we receive a precommit for some round infinitely in the future + // which forces us to calculate every end time + if let Some(end_time) = self.block.end_time.get(&round) { + if !self.validators.verify(sender, &commit_msg(end_time.canonical(), id.as_ref()), sig) { + debug!(target: "tendermint", "Validator produced an invalid commit signature"); + Err(TendermintError::Malicious(sender))?; + } + return Ok(true); + } + } + Ok(false) + } + + async fn message( + &mut self, + msg: MessageFor, + ) -> Result, TendermintError> { + if msg.block != self.block.number { + Err(TendermintError::Temporal)?; + } + + // If this is a precommit, verify its signature + self.verify_precommit_signature(msg.sender, msg.round, &msg.data)?; + + // Only let the proposer propose + if matches!(msg.data, Data::Proposal(..)) && + (msg.sender != self.weights.proposer(msg.block, msg.round)) + { + debug!(target: "tendermint", "Validator who wasn't the proposer proposed"); + Err(TendermintError::Malicious(msg.sender))?; + }; + + if !self.block.log.log(msg.clone())? 
{ + return Ok(None); + } + + // All functions, except for the finalizer and the jump, are locked to the current round + + // Run the finalizer to see if it applies + // 49-52 + if matches!(msg.data, Data::Proposal(..)) || matches!(msg.data, Data::Precommit(_)) { + let proposer = self.weights.proposer(self.block.number, msg.round); + + // Get the proposal + if let Some(Data::Proposal(_, block)) = self.block.log.get(msg.round, proposer, Step::Propose) + { + // Check if it has gotten a sufficient amount of precommits + // Use a junk signature since message equality disregards the signature + if self.block.log.has_consensus( + msg.round, + Data::Precommit(Some((block.id(), self.signer.sign(&[]).await))), + ) { + return Ok(Some(block.clone())); + } + } + } + + // Else, check if we need to jump ahead + #[allow(clippy::comparison_chain)] + if msg.round.0 < self.block.round().number.0 { + // Prior round, disregard if not finalizing + return Ok(None); + } else if msg.round.0 > self.block.round().number.0 { + // 55-56 + // Jump, enabling processing by the below code + if self.block.log.round_participation(msg.round) > self.weights.fault_thresold() { + // If this round already has precommit messages, verify their signatures + let round_msgs = self.block.log.log[&msg.round].clone(); + for (validator, msgs) in &round_msgs { + if let Some(data) = msgs.get(&Step::Precommit) { + if let Ok(res) = self.verify_precommit_signature(*validator, msg.round, data) { + // Ensure this actually verified the signature instead of believing it shouldn't yet + debug_assert!(res); + } else { + // Remove the message so it isn't counted towards forming a commit/included in one + // This won't remove the fact the precommitted for this block hash in the MessageLog + // TODO: Don't even log these in the first place until we jump, preventing needing + // to do this in the first place + self + .block + .log + .log + .get_mut(&msg.round) + .unwrap() + .get_mut(validator) + .unwrap() + .remove(&Step::Precommit); + self.slash(*validator).await; + } + } + } + // If we're the proposer, return now so we re-run processing with our proposal + // If we continue now, it'd just be wasted ops + if self.round(msg.round, None) { + return Ok(None); + } + } else { + // Future round which we aren't ready to jump to, so return for now + return Ok(None); + } + } + + // The paper executes these checks when the step is prevote. 
Making sure this message warrants + // rerunning these checks is a sane optimization since message instances is a full iteration + // of the round map + if (self.block.round().step == Step::Prevote) && matches!(msg.data, Data::Prevote(_)) { + let (participation, weight) = + self.block.log.message_instances(self.block.round().number, Data::Prevote(None)); + // 34-35 + if participation >= self.weights.threshold() { + self.block.round_mut().set_timeout(Step::Prevote); + } + + // 44-46 + if weight >= self.weights.threshold() { + self.broadcast(Data::Precommit(None)); + return Ok(None); + } + } + + // 47-48 + if matches!(msg.data, Data::Precommit(_)) && + self.block.log.has_participation(self.block.round().number, Step::Precommit) + { + self.block.round_mut().set_timeout(Step::Precommit); + } + + // All further operations require actually having the proposal in question + let proposer = self.weights.proposer(self.block.number, self.block.round().number); + let (vr, block) = if let Some(Data::Proposal(vr, block)) = + self.block.log.get(self.block.round().number, proposer, Step::Propose) + { + (vr, block) + } else { + return Ok(None); + }; + + // 22-33 + if self.block.round().step == Step::Propose { + // Delay error handling (triggering a slash) until after we vote. + let (valid, err) = match self.network.validate(block).await { + Ok(_) => (true, Ok(None)), + Err(BlockError::Temporal) => (false, Ok(None)), + Err(BlockError::Fatal) => (false, { + debug!(target: "tendermint", "Validator proposed a fatally invalid block"); + Err(TendermintError::Malicious(proposer)) + }), + }; + // Create a raw vote which only requires block validity as a basis for the actual vote. + let raw_vote = Some(block.id()).filter(|_| valid); + + // If locked is none, it has a round of -1 according to the protocol. That satisfies + // 23 and 29. If it's some, both are satisfied if they're for the same ID. If it's some + // with different IDs, the function on 22 rejects yet the function on 28 has one other + // condition + let locked = self.block.locked.as_ref().map(|(_, id)| id == &block.id()).unwrap_or(true); + let mut vote = raw_vote.filter(|_| locked); + + if let Some(vr) = vr { + // Malformed message + if vr.0 >= self.block.round().number.0 { + debug!(target: "tendermint", "Validator claimed a round from the future was valid"); + Err(TendermintError::Malicious(msg.sender))?; + } + + if self.block.log.has_consensus(*vr, Data::Prevote(Some(block.id()))) { + // Allow differing locked values if the proposal has a newer valid round + // This is the other condition described above + if let Some((locked_round, _)) = self.block.locked.as_ref() { + vote = vote.or_else(|| raw_vote.filter(|_| locked_round.0 <= vr.0)); + } + + self.broadcast(Data::Prevote(vote)); + return err; + } + } else { + self.broadcast(Data::Prevote(vote)); + return err; + } + + return Ok(None); + } + + if self + .block + .valid + .as_ref() + .map(|(round, _)| round != &self.block.round().number) + .unwrap_or(true) + { + // 36-43 + + // The run once condition is implemented above. Since valid will always be set by this, it + // not being set, or only being set historically, means this has yet to be run + + if self.block.log.has_consensus(self.block.round().number, Data::Prevote(Some(block.id()))) { + match self.network.validate(block).await { + Ok(_) => (), + Err(BlockError::Temporal) => (), + Err(BlockError::Fatal) => { + debug!(target: "tendermint", "Validator proposed a fatally invalid block"); + Err(TendermintError::Malicious(proposer))? 
+ } + }; + + self.block.valid = Some((self.block.round().number, block.clone())); + if self.block.round().step == Step::Prevote { + self.block.locked = Some((self.block.round().number, block.id())); + self.broadcast(Data::Precommit(Some(( + block.id(), + self + .signer + .sign(&commit_msg( + self.block.end_time[&self.block.round().number].canonical(), + block.id().as_ref(), + )) + .await, + )))); + } + } + } + + Ok(None) + } +} diff --git a/substrate/tendermint/machine/src/message_log.rs b/substrate/tendermint/machine/src/message_log.rs new file mode 100644 index 00000000..e914f694 --- /dev/null +++ b/substrate/tendermint/machine/src/message_log.rs @@ -0,0 +1,108 @@ +use std::{sync::Arc, collections::HashMap}; + +use log::debug; + +use crate::{ext::*, RoundNumber, Step, Data, DataFor, MessageFor, TendermintError}; + +type RoundLog = HashMap<::ValidatorId, HashMap>>; +pub(crate) struct MessageLog { + weights: Arc, + precommitted: HashMap::Id>, + pub(crate) log: HashMap>, +} + +impl MessageLog { + pub(crate) fn new(weights: Arc) -> MessageLog { + MessageLog { weights, precommitted: HashMap::new(), log: HashMap::new() } + } + + // Returns true if it's a new message + pub(crate) fn log( + &mut self, + msg: MessageFor, + ) -> Result> { + let round = self.log.entry(msg.round).or_insert_with(HashMap::new); + let msgs = round.entry(msg.sender).or_insert_with(HashMap::new); + + // Handle message replays without issue. It's only multiple messages which is malicious + let step = msg.data.step(); + if let Some(existing) = msgs.get(&step) { + if existing != &msg.data { + debug!( + target: "tendermint", + "Validator sent multiple messages for the same block + round + step" + ); + Err(TendermintError::Malicious(msg.sender))?; + } + return Ok(false); + } + + // If they already precommitted to a distinct hash, error + if let Data::Precommit(Some((hash, _))) = &msg.data { + if let Some(prev) = self.precommitted.get(&msg.sender) { + if hash != prev { + debug!(target: "tendermint", "Validator precommitted to multiple blocks"); + Err(TendermintError::Malicious(msg.sender))?; + } + } + self.precommitted.insert(msg.sender, *hash); + } + + msgs.insert(step, msg.data); + Ok(true) + } + + // For a given round, return the participating weight for this step, and the weight agreeing with + // the data. 
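+ // Both returned values are weights rather than validator counts: the participating weight is
+ // used to schedule the prevote timeout (34-35), while the agreeing weight is what
+ // has_consensus compares against the threshold.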
+ pub(crate) fn message_instances(&self, round: RoundNumber, data: DataFor) -> (u64, u64) { + let mut participating = 0; + let mut weight = 0; + for (participant, msgs) in &self.log[&round] { + if let Some(msg) = msgs.get(&data.step()) { + let validator_weight = self.weights.weight(*participant); + participating += validator_weight; + if &data == msg { + weight += validator_weight; + } + } + } + (participating, weight) + } + + // Get the participation in a given round + pub(crate) fn round_participation(&self, round: RoundNumber) -> u64 { + let mut weight = 0; + if let Some(round) = self.log.get(&round) { + for participant in round.keys() { + weight += self.weights.weight(*participant); + } + }; + weight + } + + // Check if a supermajority of nodes have participated on a specific step + pub(crate) fn has_participation(&self, round: RoundNumber, step: Step) -> bool { + let mut participating = 0; + for (participant, msgs) in &self.log[&round] { + if msgs.get(&step).is_some() { + participating += self.weights.weight(*participant); + } + } + participating >= self.weights.threshold() + } + + // Check if consensus has been reached on a specific piece of data + pub(crate) fn has_consensus(&self, round: RoundNumber, data: DataFor) -> bool { + let (_, weight) = self.message_instances(round, data); + weight >= self.weights.threshold() + } + + pub(crate) fn get( + &self, + round: RoundNumber, + sender: N::ValidatorId, + step: Step, + ) -> Option<&DataFor> { + self.log.get(&round).and_then(|round| round.get(&sender).and_then(|msgs| msgs.get(&step))) + } +} diff --git a/substrate/tendermint/machine/src/round.rs b/substrate/tendermint/machine/src/round.rs new file mode 100644 index 00000000..18cc3c55 --- /dev/null +++ b/substrate/tendermint/machine/src/round.rs @@ -0,0 +1,83 @@ +use std::{ + marker::PhantomData, + time::{Duration, Instant}, + collections::HashMap, +}; + +use futures::{FutureExt, future}; +use tokio::time::sleep; + +use crate::{ + time::CanonicalInstant, + Step, + ext::{RoundNumber, Network}, +}; + +pub(crate) struct RoundData { + _network: PhantomData, + pub(crate) number: RoundNumber, + pub(crate) start_time: CanonicalInstant, + pub(crate) step: Step, + pub(crate) timeouts: HashMap, +} + +impl RoundData { + pub(crate) fn new(number: RoundNumber, start_time: CanonicalInstant) -> Self { + RoundData { + _network: PhantomData, + number, + start_time, + step: Step::Propose, + timeouts: HashMap::new(), + } + } + + fn timeout(&self, step: Step) -> CanonicalInstant { + let adjusted_block = N::BLOCK_PROCESSING_TIME * (self.number.0 + 1); + let adjusted_latency = N::LATENCY_TIME * (self.number.0 + 1); + let offset = Duration::from_secs( + (match step { + Step::Propose => adjusted_block + adjusted_latency, + Step::Prevote => adjusted_block + (2 * adjusted_latency), + Step::Precommit => adjusted_block + (3 * adjusted_latency), + }) + .into(), + ); + self.start_time + offset + } + + pub(crate) fn end_time(&self) -> CanonicalInstant { + self.timeout(Step::Precommit) + } + + pub(crate) fn set_timeout(&mut self, step: Step) { + let timeout = self.timeout(step).instant(); + self.timeouts.entry(step).or_insert(timeout); + } + + // Poll all set timeouts, returning the Step whose timeout has just expired + pub(crate) async fn timeout_future(&self) -> Step { + let timeout_future = |step| { + let timeout = self.timeouts.get(&step).copied(); + (async move { + if let Some(timeout) = timeout { + sleep(timeout.saturating_duration_since(Instant::now())).await; + } else { + future::pending::<()>().await; + } + 
step + }) + .fuse() + }; + let propose_timeout = timeout_future(Step::Propose); + let prevote_timeout = timeout_future(Step::Prevote); + let precommit_timeout = timeout_future(Step::Precommit); + futures::pin_mut!(propose_timeout, prevote_timeout, precommit_timeout); + + futures::select_biased! { + step = propose_timeout => step, + step = prevote_timeout => step, + step = precommit_timeout => step, + } + } +} diff --git a/substrate/tendermint/machine/src/time.rs b/substrate/tendermint/machine/src/time.rs new file mode 100644 index 00000000..3973b147 --- /dev/null +++ b/substrate/tendermint/machine/src/time.rs @@ -0,0 +1,44 @@ +use core::ops::Add; +use std::time::{UNIX_EPOCH, SystemTime, Instant, Duration}; + +#[derive(Clone, Copy, PartialEq, Eq, Debug)] +pub(crate) struct CanonicalInstant { + /// Time since the epoch. + time: u64, + /// An Instant synchronized with the above time. + instant: Instant, +} + +pub(crate) fn sys_time(time: u64) -> SystemTime { + UNIX_EPOCH + Duration::from_secs(time) +} + +impl CanonicalInstant { + pub(crate) fn new(time: u64) -> CanonicalInstant { + // This is imprecise yet should be precise enough, as it'll resolve within a few ms + let instant_now = Instant::now(); + let sys_now = SystemTime::now(); + + // If the time is in the future, this will be off by that much time + let elapsed = sys_now.duration_since(sys_time(time)).unwrap_or(Duration::ZERO); + // Except for the fact this panics here + let synced_instant = instant_now.checked_sub(elapsed).unwrap(); + + CanonicalInstant { time, instant: synced_instant } + } + + pub(crate) fn canonical(&self) -> u64 { + self.time + } + + pub(crate) fn instant(&self) -> Instant { + self.instant + } +} + +impl Add for CanonicalInstant { + type Output = CanonicalInstant; + fn add(self, duration: Duration) -> CanonicalInstant { + CanonicalInstant { time: self.time + duration.as_secs(), instant: self.instant + duration } + } +} diff --git a/substrate/tendermint/machine/tests/ext.rs b/substrate/tendermint/machine/tests/ext.rs new file mode 100644 index 00000000..fc6df581 --- /dev/null +++ b/substrate/tendermint/machine/tests/ext.rs @@ -0,0 +1,176 @@ +use std::{ + sync::Arc, + time::{UNIX_EPOCH, SystemTime, Duration}, +}; + +use async_trait::async_trait; + +use parity_scale_codec::{Encode, Decode}; + +use futures::SinkExt; +use tokio::{sync::RwLock, time::sleep}; + +use tendermint_machine::{ + ext::*, SignedMessageFor, StepSender, MessageSender, TendermintMachine, TendermintHandle, +}; + +type TestValidatorId = u16; +type TestBlockId = [u8; 4]; + +struct TestSigner(u16); +#[async_trait] +impl Signer for TestSigner { + type ValidatorId = TestValidatorId; + type Signature = [u8; 32]; + + async fn validator_id(&self) -> Option { + Some(self.0) + } + + async fn sign(&self, msg: &[u8]) -> [u8; 32] { + let mut sig = [0; 32]; + sig[.. 2].copy_from_slice(&self.0.to_le_bytes()); + sig[2 .. (2 + 30.min(msg.len()))].copy_from_slice(&msg[.. 30.min(msg.len())]); + sig + } +} + +struct TestSignatureScheme; +impl SignatureScheme for TestSignatureScheme { + type ValidatorId = TestValidatorId; + type Signature = [u8; 32]; + type AggregateSignature = Vec<[u8; 32]>; + type Signer = TestSigner; + + #[must_use] + fn verify(&self, validator: u16, msg: &[u8], sig: &[u8; 32]) -> bool { + (sig[.. 2] == validator.to_le_bytes()) && (&sig[2 ..] == &[msg, &[0; 30]].concat()[.. 
30]) + } + + fn aggregate(sigs: &[[u8; 32]]) -> Vec<[u8; 32]> { + sigs.to_vec() + } + + #[must_use] + fn verify_aggregate( + &self, + signers: &[TestValidatorId], + msg: &[u8], + sigs: &Vec<[u8; 32]>, + ) -> bool { + assert_eq!(signers.len(), sigs.len()); + for sig in signers.iter().zip(sigs.iter()) { + assert!(self.verify(*sig.0, msg, sig.1)); + } + true + } +} + +struct TestWeights; +impl Weights for TestWeights { + type ValidatorId = TestValidatorId; + + fn total_weight(&self) -> u64 { + 4 + } + fn weight(&self, id: TestValidatorId) -> u64 { + [1; 4][usize::try_from(id).unwrap()] + } + + fn proposer(&self, number: BlockNumber, round: RoundNumber) -> TestValidatorId { + TestValidatorId::try_from((number.0 + u64::from(round.0)) % 4).unwrap() + } +} + +#[derive(Clone, PartialEq, Debug, Encode, Decode)] +struct TestBlock { + id: TestBlockId, + valid: Result<(), BlockError>, +} + +impl Block for TestBlock { + type Id = TestBlockId; + + fn id(&self) -> TestBlockId { + self.id + } +} + +struct TestNetwork(u16, Arc, StepSender)>>>); + +#[async_trait] +impl Network for TestNetwork { + type ValidatorId = TestValidatorId; + type SignatureScheme = TestSignatureScheme; + type Weights = TestWeights; + type Block = TestBlock; + + const BLOCK_PROCESSING_TIME: u32 = 2; + const LATENCY_TIME: u32 = 1; + + fn signer(&self) -> TestSigner { + TestSigner(self.0) + } + + fn signature_scheme(&self) -> TestSignatureScheme { + TestSignatureScheme + } + + fn weights(&self) -> TestWeights { + TestWeights + } + + async fn broadcast(&mut self, msg: SignedMessageFor) { + for (messages, _) in self.1.write().await.iter_mut() { + messages.send(msg.clone()).await.unwrap(); + } + } + + async fn slash(&mut self, _: TestValidatorId) { + dbg!("Slash"); + todo!() + } + + async fn validate(&mut self, block: &TestBlock) -> Result<(), BlockError> { + block.valid + } + + async fn add_block( + &mut self, + block: TestBlock, + commit: Commit, + ) -> TestBlock { + dbg!("Adding ", &block); + assert!(block.valid.is_ok()); + assert!(self.verify_commit(block.id(), &commit)); + TestBlock { id: (u32::from_le_bytes(block.id) + 1).to_le_bytes(), valid: Ok(()) } + } +} + +impl TestNetwork { + async fn new(validators: usize) -> Arc, StepSender)>>> { + let arc = Arc::new(RwLock::new(vec![])); + { + let mut write = arc.write().await; + for i in 0 .. 
validators { + let i = u16::try_from(i).unwrap(); + let TendermintHandle { messages, machine, step } = TendermintMachine::new( + TestNetwork(i, arc.clone()), + BlockNumber(1), + SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs(), + TestBlock { id: 1u32.to_le_bytes(), valid: Ok(()) }, + ) + .await; + tokio::task::spawn(machine.run()); + write.push((messages, step)); + } + } + arc + } +} + +#[tokio::test] +async fn test() { + TestNetwork::new(4).await; + sleep(Duration::from_secs(30)).await; +} diff --git a/substrate/tendermint/pallet/Cargo.toml b/substrate/tendermint/pallet/Cargo.toml new file mode 100644 index 00000000..4958dbec --- /dev/null +++ b/substrate/tendermint/pallet/Cargo.toml @@ -0,0 +1,38 @@ +[package] +name = "pallet-tendermint" +version = "0.1.0" +description = "Tendermint pallet for Substrate" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/substrate/tendermint/pallet" +authors = ["Luke Parker "] +edition = "2021" + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[dependencies] +parity-scale-codec = { version = "3", default-features = false, features = ["derive"] } +scale-info = { version = "2", default-features = false, features = ["derive"] } + +sp-core = { git = "https://github.com/serai-dex/substrate", default-features = false } +sp-std = { git = "https://github.com/serai-dex/substrate", default-features = false } +sp-application-crypto = { git = "https://github.com/serai-dex/substrate", default-features = false } + +frame-system = { git = "https://github.com/serai-dex/substrate", default-features = false } +frame-support = { git = "https://github.com/serai-dex/substrate", default-features = false } + +[features] +std = [ + "sp-application-crypto/std", + + "frame-system/std", + "frame-support/std", +] + +runtime-benchmarks = [ + "frame-system/runtime-benchmarks", + "frame-support/runtime-benchmarks", +] + +default = ["std"] diff --git a/substrate/tendermint/pallet/LICENSE b/substrate/tendermint/pallet/LICENSE new file mode 100644 index 00000000..d6e1814a --- /dev/null +++ b/substrate/tendermint/pallet/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2022 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. + +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see . 
diff --git a/substrate/tendermint/pallet/src/lib.rs b/substrate/tendermint/pallet/src/lib.rs new file mode 100644 index 00000000..54d86fe1 --- /dev/null +++ b/substrate/tendermint/pallet/src/lib.rs @@ -0,0 +1,75 @@ +#![cfg_attr(not(feature = "std"), no_std)] + +#[frame_support::pallet] +pub mod pallet { + use sp_std::vec::Vec; + use sp_core::sr25519::Public; + + use frame_support::pallet_prelude::*; + use frame_support::traits::{ConstU32, OneSessionHandler}; + + type MaxValidators = ConstU32<{ u16::MAX as u32 }>; + + #[pallet::config] + pub trait Config: frame_system::Config {} + + #[pallet::pallet] + #[pallet::generate_store(pub(super) trait Store)] + pub struct Pallet(PhantomData); + + #[pallet::storage] + #[pallet::getter(fn session)] + pub type Session = StorageValue<_, u32, ValueQuery>; + + #[pallet::storage] + #[pallet::getter(fn validators)] + pub type Validators = StorageValue<_, BoundedVec, ValueQuery>; + + pub mod crypto { + use sp_application_crypto::{KeyTypeId, app_crypto, sr25519}; + app_crypto!(sr25519, KeyTypeId(*b"tend")); + + impl sp_application_crypto::BoundToRuntimeAppPublic for crate::Pallet { + type Public = Public; + } + + sp_application_crypto::with_pair! { + pub type AuthorityPair = Pair; + } + pub type AuthoritySignature = Signature; + pub type AuthorityId = Public; + } + + impl OneSessionHandler for Pallet { + type Key = crypto::Public; + + // TODO + fn on_genesis_session<'a, I: 'a>(_validators: I) + where + I: Iterator, + V: 'a, + { + } + + fn on_new_session<'a, I: 'a>(changed: bool, validators: I, _queued: I) + where + I: Iterator, + V: 'a, + { + if !changed { + return; + } + + Session::::put(Self::session() + 1); + Validators::::put( + BoundedVec::try_from(validators.map(|(_, key)| key.into()).collect::>()) + .unwrap(), + ); + } + + // TODO + fn on_disabled(_validator_index: u32) {} + } +} + +pub use pallet::*; diff --git a/substrate/tendermint/primitives/Cargo.toml b/substrate/tendermint/primitives/Cargo.toml new file mode 100644 index 00000000..6200add5 --- /dev/null +++ b/substrate/tendermint/primitives/Cargo.toml @@ -0,0 +1,21 @@ +[package] +name = "sp-tendermint" +version = "0.1.0" +description = "Tendermint primitives for Substrate" +license = "AGPL-3.0-only" +repository = "https://github.com/serai-dex/serai/tree/develop/substrate/tendermint/primitives" +authors = ["Luke Parker "] +edition = "2021" + +[package.metadata.docs.rs] +all-features = true +rustdoc-args = ["--cfg", "docsrs"] + +[dependencies] +sp-core = { git = "https://github.com/serai-dex/substrate", default-features = false } +sp-std = { git = "https://github.com/serai-dex/substrate", default-features = false } +sp-api = { git = "https://github.com/serai-dex/substrate", default-features = false } + +[features] +std = ["sp-core/std", "sp-std/std", "sp-api/std"] +default = ["std"] diff --git a/substrate/tendermint/primitives/LICENSE b/substrate/tendermint/primitives/LICENSE new file mode 100644 index 00000000..d6e1814a --- /dev/null +++ b/substrate/tendermint/primitives/LICENSE @@ -0,0 +1,15 @@ +AGPL-3.0-only license + +Copyright (c) 2022 Luke Parker + +This program is free software: you can redistribute it and/or modify +it under the terms of the GNU Affero General Public License Version 3 as +published by the Free Software Foundation. + +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU Affero General Public License for more details. 
+ +You should have received a copy of the GNU Affero General Public License +along with this program. If not, see <https://www.gnu.org/licenses/>. diff --git a/substrate/tendermint/primitives/src/lib.rs b/substrate/tendermint/primitives/src/lib.rs new file mode 100644 index 00000000..b5a1d6b2 --- /dev/null +++ b/substrate/tendermint/primitives/src/lib.rs @@ -0,0 +1,16 @@ +#![cfg_attr(not(feature = "std"), no_std)] + +use sp_core::sr25519::Public; +use sp_std::vec::Vec; + +sp_api::decl_runtime_apis! { + /// TendermintApi trait for runtimes to implement. + pub trait TendermintApi { + /// Current session number. A session is NOT a fixed length of blocks, yet rather a continuous + /// set of validators. + fn current_session() -> u32; + + /// Current validators. + fn validators() -> Vec<Public>; + } +}
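A rough, hypothetical sketch of how a runtime bundling `pallet-tendermint` might satisfy this API inside its existing `impl_runtime_apis!` block, by forwarding to the pallet's storage getters. The `Runtime`, `Block`, and `Tendermint` names are assumptions for illustration only, not part of this patch:

```rust
// Hypothetical fragment of a runtime's impl_runtime_apis! block; `Runtime`, `Block`, and the
// `Tendermint` pallet name are assumptions for illustration only.
impl sp_tendermint::TendermintApi<Block> for Runtime {
  fn current_session() -> u32 {
    // pallet-tendermint bumps this counter on each validator-set change
    Tendermint::session()
  }

  fn validators() -> Vec<sp_core::sr25519::Public> {
    // The pallet stores a BoundedVec of sr25519 keys; the API expects a plain Vec
    Tendermint::validators().into_inner()
  }
}
```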