Compare commits


20 commits

Author SHA1 Message Date
474ff9ed6f
cuprated: relay rules ()
* add relay rules

* add comments

* fmt

* sort imports

* review fixes
2025-04-11 23:02:06 +01:00
eceb74f183
cuprated: fix switching to main chain after a reorg ()
* fix switching to main chain after a reorg

* fmt

* sort imports
2025-04-10 21:59:14 +01:00
91099846d6
fix clippy warnings ()
fix clippy
2025-04-10 21:58:56 +01:00
Brandon Trussell
24265ac43c
Use blank peer list if saved peer list cannot be read. ()
* Use blank peer list if saved peer list cannot be read.

* Format corrections

* The delete operation is not needed.
2025-04-10 14:25:46 +01:00
hinto-janai
95aca1d4a5
ci: fix release.yml ()
gen config
2025-04-09 16:01:19 +01:00
hinto-janai
b169557ff2
ci: fix deny ()
`cargo update -p tokio`
2025-04-09 15:07:37 +01:00
hinto-janai
56d3459782
cuprated: v0.0.2 version + changelog ()
* update

* !!

* update

* version
2025-04-09 15:05:45 +01:00
hinto-janai
159016f10e
cuprated: update killswitch timestamp for v0.0.2 ()
* update

* Update binaries/cuprated/src/killswitch.rs

Co-authored-by: Boog900 <boog900@tutanota.com>

---------

Co-authored-by: Boog900 <boog900@tutanota.com>
2025-04-09 15:03:56 +01:00
hinto-janai
51b56b0a8b
cuprated/database: fix error mappings + msg ()
* add

* dbi

* err

* log and panic
2025-04-09 01:18:01 +01:00
hinto-janai
550d8598e4
books: user-book for cuprated 0.0.2 ()
* title

* fixes

* add `--version` table

* add types

* config docs

* add macro docs

* fix

* case

* Update binaries/cuprated/src/config.rs

---------

Co-authored-by: Boog900 <boog900@tutanota.com>
2025-04-09 01:12:05 +01:00
hinto-janai
3ef6a96d04
ci: fix clippy ()
* cargo clippy

* fix

* remove `needless_continue`

* readd `continue`
2025-04-08 20:13:11 +01:00
hinto-janai
d3b7ca3e65
cuprated: RPC handlers ()
* import diffs

* small fixes, hardfork changes

* lints

* hard_fork

* apply diffs

* review fixes

* binaries/cuprated/src/rpc/request: `pub(super)` -> `pub(crate)`

* add `BlockChainContextService`, `on_get_block_hash`

* map `tower::BoxError` to `anyhow::Error`

* get_block

* connection_info

* hard_fork_info

* set_bans

* get_bans

* banned

* flush_transaction_pool

* get_output_histogram

* get_coinbase_tx_sum

* get_version

* get_fee_estimate

* get_alternate_chains

* relay_tx

* response_base: `fn` -> `const`

* get_transaction_pool_backlog

* prune

* calc_pow

* add_aux_pow

* get_tx_ids_loose

* generate_blocks

* get_info

* sync_info

* get_miner_data

* `BlockchainManagerRequest` docs

* docs, `ConnectionInfo`, `AddressType`

* sig docs, remove `HardForks` request

* clean imports

* fix `on_get_block_hash`, `generate_blocks`, `get_block_headers_range`

* fix `get_info`, `banned`

* fix `sync_info`

* fix `get_miner_data`

* initial `add_aux_pow` impl

* fix `calculate_pow`

* add_aux_pow

* `get_output_distribution`

* checkup

* `find_nonce()` + `add_aux_pow` async wrapper

* fixes

* `helper::block_header`

* review fixes

* fixes

* doc fix

* p2p: remove tmp `AddressBookRequest::NextNeededPruningSeed`

* lint/todo fixes

* fix bans

* merge diffs from https://github.com/Cuprate/cuprate/pull/272

* `cuprate_types::rpc`, `from` module for `cuprate_rpc_types`

* `rpc-types` -> `types` pt. 2

* type fixes, move fn to `-helper`

* clippy fix

* rpc: move json-rpc types away from macros

* !!

* move types, fix orphan impl + cyclic dependency

* architecture book

* fix json-rpc handlers

* remove `::<N>`

* fix clippy

* fix type defaults, use `Hex`

* return defaults, hex test

* json_rpc: get_block_template

* `/get_transactions`

* `/is_key_image_spent`

* !!

* `/get_transactions` hex

* most of `/send_raw_transaction`

* `/send_raw_transaction`, `/save_bc`, response_base

* `/peerlist`

* `/get_transaction_pool`

* `/get_transaction_pool_stats`

* finish other draft

* get_blocks_by_height, shared::get_outs

* `/get_o_indexes.bin`

* `/get_output_distribution.bin`

* clippy

* `/get_blocks.bin`

* rpc-interface: add restricted invariant comments

* restricted json-rpc error

* get_output_distribution

* module cleanup

* txpool: all_hashes

* `HexVec`

* fix `get_txid` for `/get_outs`

miner transaction was not accounted for

* fix doc tests

* fix conflict

* json-rpc fixes

* `get_transaction_pool_hashes` fix

* rpc/interface: fix cargo hack

* review fixes

* cargo hack fix

* use `monero_address`

* Update binaries/cuprated/src/rpc/handlers/json_rpc.rs

Co-authored-by: Boog900 <boog900@tutanota.com>

* Update binaries/cuprated/src/rpc/handlers/json_rpc.rs

Co-authored-by: Boog900 <boog900@tutanota.com>

* review fixes

* fix `get_hashes`

* fix `is_key_image_spent`

* fix key image types

* fixes

* fix book

* output timelock fix + `blockchain_context()`

* fix

* fix

* fix

* fix getblocks.bin

* `cuprate_types` doc

* output fix

* fixme

* rct output fix

* fix cast

* clippy

---------

Co-authored-by: Boog900 <boog900@tutanota.com>
2025-04-08 17:09:43 +01:00
hinto-janai
8292da4e06
cuprated: fast_sync_hashes.bin -> 3384832 ()
3384832
2025-04-08 16:19:00 +01:00
03363e393d
cuprated: auto config docs ()
* auto config docs

* fmt & change arg

* fix clippy

* add comment_out functionality

* remove config files + gen in mdbook build

* review fixes

* Update books/user/src/config.md

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

---------

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>
2025-04-05 15:33:56 +01:00
hinto-janai
57cd96ed6c
storage: replace println with tracing ()
* replace `println` with `tracing`

* fix

* remove feature

* !
2025-04-02 16:08:28 +01:00
hinto-janai
3c86c5ed76
cuprated: disable STDIN if not terminal ()
* check terminal

* await ctrl_c

* add exit msg

* rm
2025-03-23 00:57:28 +00:00
hinto-janai
f60aa82420
ci: fix deny ()
update
2025-03-21 20:57:52 +00:00
b97bbab593
Cuprated: Fix reorgs ()
* fix reorgs

* fix off by 1 and add a test

* commit file

* docs

* remove unneeded fn

* fix tests
2025-03-21 17:52:10 +00:00
c5cbe51300
fast-sync: add tests ()
fast sync tests
2025-03-19 15:52:05 +00:00
jermanuts
e84f5c151f
Readme and user book: Fix links ()
* User book: https link to CCS

* Readme: fix contribution and security link

* rust link https
2025-03-13 22:11:04 +00:00
165 changed files with 9561 additions and 3916 deletions


@@ -61,7 +61,11 @@ jobs:
# All archives have these files.
cp LICENSE-AGPL target/release/LICENSE
-cp binaries/cuprated/config/Cuprated.toml target/release/
+if [ "$RUNNER_OS" == "Windows" ]; then
+target/release/cuprated.exe --generate-config > target/release/Cuprated.toml
+else
+target/release/cuprated --generate-config > target/release/Cuprated.toml
+fi
OS=${{ matrix.os }}

1
.gitignore vendored

@@ -3,3 +3,4 @@ target/
monerod
books/*/book
fast_sync_hashes.bin
+/books/user/Cuprated.toml

51
Cargo.lock generated

@@ -354,9 +354,9 @@ dependencies = [
[[package]]
name = "cc"
-version = "1.2.4"
+version = "1.2.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "9157bbaa6b165880c27a4293a474c91cdcf265cc68cc829bf10be0964a391caf"
+checksum = "1fcb57c740ae1daf453ae85f16e37396f672b039e00d9d866e07ddb24e328e3a"
dependencies = [
"shlex",
]
@@ -743,6 +743,7 @@ dependencies = [
"serde",
"tempfile",
"thiserror",
+"tracing",
]
[[package]]
@@ -756,6 +757,7 @@ dependencies = [
"rayon",
"serde",
"tower 0.5.1",
+"tracing",
]
[[package]]
@ -764,6 +766,7 @@ version = "0.5.0"
dependencies = [
"bytes",
"cuprate-fixed-bytes",
"cuprate-hex",
"hex",
"paste",
"ref-cast",
@@ -784,7 +787,10 @@ dependencies = [
"cuprate-types",
"hex",
"monero-serai",
"proptest",
"tempfile",
"tokio",
"tokio-test",
"tower 0.5.1",
]
@@ -816,6 +822,15 @@ dependencies = [
"windows",
]
+[[package]]
+name = "cuprate-hex"
+version = "0.0.0"
+dependencies = [
+"hex",
+"serde",
+"serde_json",
+]
[[package]]
name = "cuprate-json-rpc"
version = "0.0.0"
@@ -943,9 +958,15 @@ version = "0.0.0"
dependencies = [
"cuprate-epee-encoding",
"cuprate-fixed-bytes",
"cuprate-helper",
"cuprate-hex",
"cuprate-p2p-core",
"cuprate-test-utils",
"cuprate-types",
"hex",
"hex-literal",
"paste",
"pretty_assertions",
"serde",
"serde_json",
]
@@ -1002,12 +1023,14 @@ dependencies = [
name = "cuprate-types"
version = "0.0.0"
dependencies = [
"bitflags 2.6.0",
"bytes",
"cfg-if",
"cuprate-epee-encoding",
"cuprate-fixed-bytes",
"cuprate-helper",
"cuprate-hex",
"curve25519-dalek",
"hex",
"hex-literal",
"indexmap",
"monero-serai",
@@ -1040,7 +1063,7 @@ name = "cuprate-zmq-types"
version = "0.1.0"
dependencies = [
"assert-json-diff",
-"cuprate-types",
+"cuprate-hex",
"hex",
"serde",
"serde_json",
@@ -1048,7 +1071,7 @@ dependencies = [
[[package]]
name = "cuprated"
-version = "0.0.1"
+version = "0.0.2"
dependencies = [
"anyhow",
"async-trait",
@@ -1077,6 +1100,7 @@ dependencies = [
"cuprate-fast-sync",
"cuprate-fixed-bytes",
"cuprate-helper",
+"cuprate-hex",
"cuprate-json-rpc",
"cuprate-levin",
"cuprate-p2p",
@@ -1095,6 +1119,7 @@ dependencies = [
"hex",
"hex-literal",
"indexmap",
+"monero-address",
"monero-serai",
"nu-ansi-term",
"paste",
@@ -1106,12 +1131,15 @@ dependencies = [
"serde",
"serde_bytes",
"serde_json",
"strum",
"tempfile",
"thiserror",
"thread_local",
"tokio",
"tokio-stream",
"tokio-util",
"toml",
"toml_edit",
"tower 0.5.1",
"tracing",
"tracing-appender",
@@ -2584,15 +2612,14 @@ checksum = "2b15c43186be67a4fd63bee50d0303afffcef381492ebe2c5d87f324e1b8815c"
[[package]]
name = "ring"
-version = "0.17.8"
+version = "0.17.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c17fa4cb658e3583423e915b9f3acc01cceaee1860e33d59ebae66adc3a2dc0d"
+checksum = "a4689e6c2294d81e88dc6261c768b63bc4fcdb852be6d1352498b114f61383b7"
dependencies = [
"cc",
"cfg-if",
"getrandom",
"libc",
-"spin",
"untrusted",
"windows-sys 0.52.0",
]
@@ -3131,9 +3158,9 @@ dependencies = [
[[package]]
name = "tokio"
-version = "1.42.0"
+version = "1.44.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5cec9b21b0450273377fc97bd4c33a8acffc8c996c987a7c5b319a0083707551"
+checksum = "e6b88822cbe49de4185e3a4cbf8321dd487cf5fe0c5c65695fef6346371e9c48"
dependencies = [
"backtrace",
"bytes",
@@ -3149,9 +3176,9 @@ dependencies = [
[[package]]
name = "tokio-macros"
-version = "2.4.0"
+version = "2.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "693d596312e88961bc67d7f1f97af8a70227d9f90c31bba5806eec004978d752"
+checksum = "6e06d43f1345a3bcd39f6a56dbb7dcab2ba47e68e8ac134855e7e2bdbaf8cab8"
dependencies = [
"proc-macro2",
"quote",


@@ -12,7 +12,6 @@ members = [
# Net
"net/epee-encoding",
-"net/fixed-bytes",
"net/levin",
"net/wire",
@@ -30,6 +29,11 @@ members = [
"storage/txpool",
"storage/database",
+# Types
+"types/types",
+"types/hex",
+"types/fixed-bytes",
# RPC
"rpc/json-rpc",
"rpc/types",
@@ -44,7 +48,6 @@ members = [
"helper",
"pruning",
"test-utils",
-"types",
]
[profile.release]
@@ -77,14 +80,13 @@ cuprate-consensus-context = { path = "consensus/context", default-featur
cuprate-cryptonight = { path = "cryptonight", default-features = false }
cuprate-helper = { path = "helper", default-features = false }
cuprate-epee-encoding = { path = "net/epee-encoding", default-features = false }
-cuprate-fixed-bytes = { path = "net/fixed-bytes", default-features = false }
cuprate-levin = { path = "net/levin", default-features = false }
cuprate-wire = { path = "net/wire", default-features = false }
+cuprate-async-buffer = { path = "p2p/async-buffer", default-features = false }
cuprate-p2p = { path = "p2p/p2p", default-features = false }
cuprate-p2p-core = { path = "p2p/p2p-core", default-features = false }
cuprate-p2p-bucket = { path = "p2p/p2p-bucket", default-features = false }
cuprate-dandelion-tower = { path = "p2p/dandelion-tower", default-features = false }
-cuprate-async-buffer = { path = "p2p/async-buffer", default-features = false }
cuprate-address-book = { path = "p2p/address-book", default-features = false }
cuprate-blockchain = { path = "storage/blockchain", default-features = false }
cuprate-database = { path = "storage/database", default-features = false }
@@ -92,7 +94,9 @@ cuprate-database-service = { path = "storage/service", default-featur
cuprate-txpool = { path = "storage/txpool", default-features = false }
cuprate-pruning = { path = "pruning", default-features = false }
cuprate-test-utils = { path = "test-utils", default-features = false }
-cuprate-types = { path = "types", default-features = false }
+cuprate-types = { path = "types/types", default-features = false }
+cuprate-hex = { path = "types/hex", default-features = false }
+cuprate-fixed-bytes = { path = "types/fixed-bytes", default-features = false }
cuprate-json-rpc = { path = "rpc/json-rpc", default-features = false }
cuprate-rpc-types = { path = "rpc/types", default-features = false }
cuprate-rpc-interface = { path = "rpc/interface", default-features = false }
@@ -121,6 +125,7 @@ futures = { version = "0.3", default-features = false }
hex = { version = "0.4", default-features = false }
hex-literal = { version = "0.4", default-features = false }
indexmap = { version = "2", default-features = false }
+monero-address = { git = "https://github.com/Cuprate/serai.git", rev = "e6ae8c2", default-features = false }
monero-serai = { git = "https://github.com/Cuprate/serai.git", rev = "e6ae8c2", default-features = false }
nu-ansi-term = { version = "0.46", default-features = false }
paste = { version = "1", default-features = false }
@@ -140,6 +145,7 @@ tokio-stream = { version = "0.1", default-features = false }
tokio = { version = "1", default-features = false }
tower = { git = "https://github.com/Cuprate/tower.git", rev = "6c7faf0", default-features = false } # <https://github.com/tower-rs/tower/pull/796>
toml = { version = "0.8", default-features = false }
+toml_edit = { version = "0.22", default-features = false }
tracing-appender = { version = "0.2", default-features = false }
tracing-subscriber = { version = "0.3", default-features = false }
tracing = { version = "0.1", default-features = false }
@@ -210,7 +216,6 @@ missing_transmute_annotations = "deny"
mut_mut = "deny"
needless_bitwise_bool = "deny"
needless_character_iteration = "deny"
-needless_continue = "deny"
needless_for_each = "deny"
needless_maybe_sized = "deny"
needless_raw_string_hashes = "deny"


@@ -31,7 +31,7 @@ TODO: add these sections someday.
## About
Cuprate is an effort to create an alternative [Monero](https://getmonero.org) node implementation
-in [Rust](http://rust-lang.org).
+in [Rust](https://rust-lang.org).
It will be able to independently validate Monero consensus rules, providing a layer of security and redundancy for the
Monero network.
@@ -57,14 +57,14 @@ For crate (library) documentation, see: <https://doc.cuprate.org>. This site hol
## Contributing
-See [`CONTRIBUTING.md`](misc/CONTRIBUTING.md).
+See [`CONTRIBUTING.md`](/CONTRIBUTING.md).
## Security
-Cuprate has a responsible vulnerability disclosure policy, see [`SECURITY.md`](misc/SECURITY.md).
+Cuprate has a responsible vulnerability disclosure policy, see [`SECURITY.md`](/SECURITY.md).
## License
The `binaries/` directory is licensed under AGPL-3.0, everything else is licensed under MIT.
-See [`LICENSE`](LICENSE) for more details.
+See [`LICENSE`](/LICENSE) for more details.


@@ -1,6 +1,6 @@
[package]
name = "cuprated"
-version = "0.0.1"
+version = "0.0.2"
edition = "2021"
description = "The Cuprate Rust Monero node."
license = "AGPL-3.0-only"
@@ -9,32 +9,34 @@ repository = "https://github.com/Cuprate/cuprate/tree/main/binaries/cuprated"
[dependencies]
# TODO: after v1.0.0, remove unneeded dependencies.
cuprate-constants = { workspace = true, features = ["build"] }
cuprate-consensus = { workspace = true }
cuprate-fast-sync = { workspace = true }
cuprate-address-book = { workspace = true }
cuprate-async-buffer = { workspace = true }
cuprate-blockchain = { workspace = true }
cuprate-consensus-context = { workspace = true }
cuprate-consensus-rules = { workspace = true }
cuprate-consensus = { workspace = true }
cuprate-constants = { workspace = true, features = ["build", "rpc"] }
cuprate-cryptonight = { workspace = true }
cuprate-helper = { workspace = true, features = ["std", "serde", "time"] }
cuprate-epee-encoding = { workspace = true }
cuprate-fixed-bytes = { workspace = true }
cuprate-levin = { workspace = true }
cuprate-wire = { workspace = true }
cuprate-p2p = { workspace = true }
cuprate-p2p-core = { workspace = true }
cuprate-dandelion-tower = { workspace = true, features = ["txpool"] }
cuprate-async-buffer = { workspace = true }
cuprate-address-book = { workspace = true }
cuprate-blockchain = { workspace = true }
cuprate-database-service = { workspace = true, features = ["serde"] }
cuprate-txpool = { workspace = true }
cuprate-database = { workspace = true, features = ["serde"] }
cuprate-pruning = { workspace = true }
cuprate-test-utils = { workspace = true }
cuprate-types = { workspace = true }
cuprate-epee-encoding = { workspace = true }
cuprate-fast-sync = { workspace = true }
cuprate-fixed-bytes = { workspace = true }
cuprate-helper = { workspace = true, features = ["std", "serde", "time"] }
cuprate-hex = { workspace = true }
cuprate-json-rpc = { workspace = true }
cuprate-levin = { workspace = true }
cuprate-p2p-core = { workspace = true }
cuprate-p2p = { workspace = true }
cuprate-pruning = { workspace = true }
cuprate-rpc-interface = { workspace = true }
cuprate-rpc-types = { workspace = true }
cuprate-rpc-types = { workspace = true, features = ["from"] }
cuprate-test-utils = { workspace = true }
cuprate-txpool = { workspace = true }
cuprate-types = { workspace = true, features = ["json"] }
cuprate-wire = { workspace = true }
# TODO: after v1.0.0, remove unneeded dependencies.
anyhow = { workspace = true }
@@ -56,6 +58,7 @@ futures = { workspace = true }
hex = { workspace = true }
hex-literal = { workspace = true }
indexmap = { workspace = true }
+monero-address = { workspace = true }
monero-serai = { workspace = true }
nu-ansi-term = { workspace = true }
paste = { workspace = true }
@@ -67,16 +70,21 @@ rayon = { workspace = true }
serde_bytes = { workspace = true }
serde_json = { workspace = true }
serde = { workspace = true }
+strum = { workspace = true }
thiserror = { workspace = true }
thread_local = { workspace = true }
tokio-util = { workspace = true, features = ["rt"] }
tokio-stream = { workspace = true }
tokio = { workspace = true }
toml = { workspace = true, features = ["parse", "display"]}
+toml_edit = { workspace = true }
tower = { workspace = true }
tracing-appender = { workspace = true }
tracing-subscriber = { workspace = true, features = ["std", "fmt", "default"] }
tracing = { workspace = true, features = ["default"] }
+[dev-dependencies]
+tempfile = { workspace = true }
[lints]
workspace = true


@@ -1,64 +0,0 @@
# ____ _
# / ___| _ _ __ _ __ __ _| |_ ___
# | | | | | | '_ \| '__/ _` | __/ _ \
# | |__| |_| | |_) | | | (_| | || __/
# \____\__,_| .__/|_| \__,_|\__\___|
# |_|
#
## The network to run on, valid values: "Mainnet", "Testnet", "Stagenet".
network = "Mainnet"
## Tracing config.
[tracing]
## The stdout logging config.
stdout = { level = "info" }
## The file output logging config.
file = { level = "debug", max_log_files = 7 }
## Clear-net config.
[p2p.clear_net]
## The number of outbound connections we should make and maintain.
outbound_connections = 64
## The number of extra connections we should make under load from the rest of Cuprate, i.e. when syncing.
extra_outbound_connections = 8
## The maximum number of incoming we should allow.
max_inbound_connections = 128
## The percent of outbound connections that should be to nodes we have not connected to before.
gray_peers_percent = 0.7
## The port to accept connections on, if left `0` no connections will be accepted.
p2p_port = 0
## The IP address to listen to connections on.
listen_on = "0.0.0.0"
## The Clear-net addressbook config.
[p2p.clear_net.address_book_config]
## The size of the white peer list, which contains peers we have made a connection to before.
max_white_list_length = 1_000
## The size of the gray peer list, which contains peers we have not made a connection to before.
max_gray_list_length = 5_000
## The amount of time between address book saves.
peer_save_period = { secs = 90, nanos = 0 }
## The block downloader config.
[p2p.block_downloader]
## The size of the buffer of sequential blocks waiting to be verified and added to the chain (bytes).
buffer_bytes = 1_000_000_000
## The size of the queue of blocks which are waiting for a parent block to be downloaded (bytes).
in_progress_queue_bytes = 500_000_000
## The target size of a batch of blocks (bytes), must not exceed 100MB.
target_batch_bytes = 10_000_000
## The amount of time between checking the pool of connected peers for free peers to download blocks.
check_client_pool_interval = { secs = 30, nanos = 0 }
## Txpool storage config.
[storage.txpool]
## The database sync mode for the txpool.
sync_mode = "Fast"
## The maximum size of all the txs in the pool (bytes).
max_txpool_byte_size = 100_000_000
## Blockchain storage config.
[storage.blockchain]
## The database sync mode for the blockchain.
sync_mode = "Fast"


@@ -1 +0,0 @@
-0.0.1.toml


@@ -1,6 +0,0 @@
# `cuprated` configs
This directory holds configuration files for all `cuprated` versions.
For example, `0.0.1.toml` is the config file for `cuprated v0.0.1`.
`Cuprated.toml` is a symlink to the latest config file.


@@ -33,6 +33,9 @@ use crate::{
mod commands;
mod handler;
+#[cfg(test)]
+mod tests;
pub use commands::{BlockchainManagerCommand, IncomingBlockOk};
/// Initialize the blockchain manager.


@@ -242,7 +242,6 @@ impl super::BlockchainManager {
/// This function will panic if any internal service returns an unexpected error that we cannot
/// recover from.
async fn handle_incoming_block_batch_alt_chain(&mut self, mut batch: BlockBatch) {
-// TODO: this needs testing (this whole section does but alt-blocks specifically).
let mut blocks = batch.blocks.into_iter();
while let Some((block, txs)) = blocks.next() {
@@ -271,6 +270,11 @@ impl super::BlockchainManager {
Ok(AddAltBlock::Reorged) => {
// Collect the remaining blocks and add them to the main chain instead.
batch.blocks = blocks.collect();
+if batch.blocks.is_empty() {
+return;
+}
self.handle_incoming_block_batch_main_chain(batch).await;
return;
}
@@ -396,7 +400,7 @@ impl super::BlockchainManager {
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
.call(BlockchainWriteRequest::PopBlocks(
-current_main_chain_height - split_height + 1,
+current_main_chain_height - split_height,
))
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
@@ -409,7 +413,7 @@ impl super::BlockchainManager {
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
.call(BlockChainContextRequest::PopBlocks {
-numb_blocks: current_main_chain_height - split_height + 1,
+numb_blocks: current_main_chain_height - split_height,
})
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR);
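The two hunks above change the number of popped blocks from `current_main_chain_height - split_height + 1` to `current_main_chain_height - split_height`. A minimal sketch of why the `+ 1` over-popped by one block, assuming 0-indexed heights where `chain_height` counts blocks (as the surrounding code suggests; the helper function name below is hypothetical):

```rust
// Hypothetical helper illustrating the off-by-one fixed above.
// With 0-indexed heights, a chain of height `H` has its top block at
// height `H - 1`. To reorg back to a fork at `split_height`, the blocks
// at heights `split_height..H` must be popped, while the common
// ancestor at `split_height - 1` must remain on the chain.
fn blocks_to_pop(current_main_chain_height: usize, split_height: usize) -> usize {
    current_main_chain_height - split_height
}

fn main() {
    // Chain height 10, fork at height 7: pop the blocks at heights 7, 8, 9.
    assert_eq!(blocks_to_pop(10, 7), 3);
    // The old `+ 1` would also have popped the shared block at height 6.
    assert_eq!(blocks_to_pop(10, 7) + 1, 4);
}
```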


@@ -0,0 +1,320 @@
use std::{collections::HashMap, env::temp_dir, path::PathBuf, sync::Arc};
use monero_serai::{
block::{Block, BlockHeader},
transaction::{Input, Output, Timelock, Transaction, TransactionPrefix},
};
use tokio::sync::{oneshot, watch};
use tower::BoxError;
use cuprate_consensus_context::{BlockchainContext, ContextConfig};
use cuprate_consensus_rules::{hard_forks::HFInfo, miner_tx::calculate_block_reward, HFsInfo};
use cuprate_helper::network::Network;
use cuprate_p2p::{block_downloader::BlockBatch, BroadcastSvc};
use cuprate_p2p_core::handles::HandleBuilder;
use crate::blockchain::{
check_add_genesis, manager::BlockchainManager, manager::BlockchainManagerCommand,
ConsensusBlockchainReadHandle,
};
async fn mock_manager(data_dir: PathBuf) -> BlockchainManager {
let blockchain_config = cuprate_blockchain::config::ConfigBuilder::new()
.data_directory(data_dir.clone())
.build();
let txpool_config = cuprate_txpool::config::ConfigBuilder::new()
.data_directory(data_dir)
.build();
let (mut blockchain_read_handle, mut blockchain_write_handle, _) =
cuprate_blockchain::service::init(blockchain_config).unwrap();
let (txpool_read_handle, txpool_write_handle, _) =
cuprate_txpool::service::init(txpool_config).unwrap();
check_add_genesis(
&mut blockchain_read_handle,
&mut blockchain_write_handle,
Network::Mainnet,
)
.await;
let mut context_config = ContextConfig::main_net();
context_config.difficulty_cfg.fixed_difficulty = Some(1);
context_config.hard_fork_cfg.info = HFsInfo::new([HFInfo::new(0, 0); 16]);
let blockchain_read_handle =
ConsensusBlockchainReadHandle::new(blockchain_read_handle, BoxError::from);
let blockchain_context_service = cuprate_consensus_context::initialize_blockchain_context(
context_config,
blockchain_read_handle.clone(),
)
.await
.unwrap();
BlockchainManager {
blockchain_write_handle,
blockchain_read_handle,
txpool_write_handle,
blockchain_context_service,
stop_current_block_downloader: Arc::new(Default::default()),
broadcast_svc: BroadcastSvc::mock(),
}
}
fn generate_block(context: &BlockchainContext) -> Block {
Block {
header: BlockHeader {
hardfork_version: 16,
hardfork_signal: 16,
timestamp: 1000,
previous: context.top_hash,
nonce: 0,
},
miner_transaction: Transaction::V2 {
prefix: TransactionPrefix {
additional_timelock: Timelock::Block(context.chain_height + 60),
inputs: vec![Input::Gen(context.chain_height)],
outputs: vec![Output {
// we can set the block weight to 1 as the true value won't get us into the penalty zone.
amount: Some(calculate_block_reward(
1,
context.median_weight_for_block_reward,
context.already_generated_coins,
context.current_hf,
)),
key: Default::default(),
view_tag: Some(1),
}],
extra: rand::random::<[u8; 32]>().to_vec(),
},
proofs: None,
},
transactions: vec![],
}
}
#[tokio::test]
async fn simple_reorg() {
// create 2 managers
let data_dir_1 = tempfile::tempdir().unwrap();
let mut manager_1 = mock_manager(data_dir_1.path().to_path_buf()).await;
let data_dir_2 = tempfile::tempdir().unwrap();
let mut manager_2 = mock_manager(data_dir_2.path().to_path_buf()).await;
// give both managers the same first non-genesis block
let block_1 = generate_block(manager_1.blockchain_context_service.blockchain_context());
manager_1
.handle_command(BlockchainManagerCommand::AddBlock {
block: block_1.clone(),
prepped_txs: HashMap::new(),
response_tx: oneshot::channel().0,
})
.await;
manager_2
.handle_command(BlockchainManagerCommand::AddBlock {
block: block_1,
prepped_txs: HashMap::new(),
response_tx: oneshot::channel().0,
})
.await;
assert_eq!(
manager_1.blockchain_context_service.blockchain_context(),
manager_2.blockchain_context_service.blockchain_context()
);
// give managers different 2nd block
let block_2a = generate_block(manager_1.blockchain_context_service.blockchain_context());
let block_2b = generate_block(manager_2.blockchain_context_service.blockchain_context());
manager_1
.handle_command(BlockchainManagerCommand::AddBlock {
block: block_2a,
prepped_txs: HashMap::new(),
response_tx: oneshot::channel().0,
})
.await;
manager_2
.handle_command(BlockchainManagerCommand::AddBlock {
block: block_2b.clone(),
prepped_txs: HashMap::new(),
response_tx: oneshot::channel().0,
})
.await;
let manager_1_context = manager_1
.blockchain_context_service
.blockchain_context()
.clone();
assert_ne!(
&manager_1_context,
manager_2.blockchain_context_service.blockchain_context()
);
// give manager 1 missing block
manager_1
.handle_command(BlockchainManagerCommand::AddBlock {
block: block_2b,
prepped_txs: HashMap::new(),
response_tx: oneshot::channel().0,
})
.await;
// make sure this didn't change the context
assert_eq!(
&manager_1_context,
manager_1.blockchain_context_service.blockchain_context()
);
// give both managers new block (built of manager 2's chain)
let block_3 = generate_block(manager_2.blockchain_context_service.blockchain_context());
manager_1
.handle_command(BlockchainManagerCommand::AddBlock {
block: block_3.clone(),
prepped_txs: HashMap::new(),
response_tx: oneshot::channel().0,
})
.await;
manager_2
.handle_command(BlockchainManagerCommand::AddBlock {
block: block_3,
prepped_txs: HashMap::new(),
response_tx: oneshot::channel().0,
})
.await;
// make sure manager 1 reorged.
assert_eq!(
manager_1.blockchain_context_service.blockchain_context(),
manager_2.blockchain_context_service.blockchain_context()
);
assert_eq!(
manager_1
.blockchain_context_service
.blockchain_context()
.chain_height,
4
);
}
/// Same as [`simple_reorg`] but uses block batches instead.
#[tokio::test]
async fn simple_reorg_block_batch() {
cuprate_fast_sync::set_fast_sync_hashes(&[]);
let handle = HandleBuilder::new().build();
// create 2 managers
let data_dir_1 = tempfile::tempdir().unwrap();
let mut manager_1 = mock_manager(data_dir_1.path().to_path_buf()).await;
let data_dir_2 = tempfile::tempdir().unwrap();
let mut manager_2 = mock_manager(data_dir_2.path().to_path_buf()).await;
// give both managers the same first non-genesis block
let block_1 = generate_block(manager_1.blockchain_context_service.blockchain_context());
manager_1
.handle_incoming_block_batch(BlockBatch {
blocks: vec![(block_1.clone(), vec![])],
size: 0,
peer_handle: handle.1.clone(),
})
.await;
manager_2
.handle_incoming_block_batch(BlockBatch {
blocks: vec![(block_1, vec![])],
size: 0,
peer_handle: handle.1.clone(),
})
.await;
assert_eq!(
manager_1.blockchain_context_service.blockchain_context(),
manager_2.blockchain_context_service.blockchain_context()
);
// give managers different 2nd block
let block_2a = generate_block(manager_1.blockchain_context_service.blockchain_context());
let block_2b = generate_block(manager_2.blockchain_context_service.blockchain_context());
manager_1
.handle_incoming_block_batch(BlockBatch {
blocks: vec![(block_2a, vec![])],
size: 0,
peer_handle: handle.1.clone(),
})
.await;
manager_2
.handle_incoming_block_batch(BlockBatch {
blocks: vec![(block_2b.clone(), vec![])],
size: 0,
peer_handle: handle.1.clone(),
})
.await;
let manager_1_context = manager_1
.blockchain_context_service
.blockchain_context()
.clone();
assert_ne!(
&manager_1_context,
manager_2.blockchain_context_service.blockchain_context()
);
// give manager 1 missing block
manager_1
.handle_incoming_block_batch(BlockBatch {
blocks: vec![(block_2b, vec![])],
size: 0,
peer_handle: handle.1.clone(),
})
.await;
// make sure this didn't change the context
assert_eq!(
&manager_1_context,
manager_1.blockchain_context_service.blockchain_context()
);
// give both managers new block (built of manager 2's chain)
let block_3 = generate_block(manager_2.blockchain_context_service.blockchain_context());
manager_1
.handle_incoming_block_batch(BlockBatch {
blocks: vec![(block_3.clone(), vec![])],
size: 0,
peer_handle: handle.1.clone(),
})
.await;
manager_2
.handle_incoming_block_batch(BlockBatch {
blocks: vec![(block_3, vec![])],
size: 0,
peer_handle: handle.1.clone(),
})
.await;
// make sure manager 1 reorged.
assert_eq!(
manager_1.blockchain_context_service.blockchain_context(),
manager_2.blockchain_context_service.blockchain_context()
);
assert_eq!(
manager_1
.blockchain_context_service
.blockchain_context()
.chain_height,
4
);
}


@@ -3,6 +3,7 @@ use std::{
fs::{read_to_string, File},
io,
path::Path,
+str::FromStr,
time::Duration,
};
@@ -30,6 +31,9 @@ mod storage;
mod tokio;
mod tracing_config;
+#[macro_use]
+mod macros;
use fs::FileSystemConfig;
use p2p::P2PConfig;
use rayon::RayonConfig;
@@ -37,6 +41,24 @@ use storage::StorageConfig;
use tokio::TokioConfig;
use tracing_config::TracingConfig;
/// Header to put at the start of the generated config file.
const HEADER: &str = r"## ____ _
## / ___| _ _ __ _ __ __ _| |_ ___
## | | | | | | '_ \| '__/ _` | __/ _ \
## | |__| |_| | |_) | | | (_| | || __/
## \____\__,_| .__/|_| \__,_|\__\___|
## |_|
##
## All these config values can be set to
## their default by commenting them out with '#'.
##
## Some values are already commented out,
## to set the value remove the '#' at the start of the line.
##
## For more documentation, see: <https://user.cuprate.org>.
";
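The header above tells users a value can be reset to its default by commenting it out with `#`, and this compare includes an "add comment_out functionality" commit for the auto-generated config docs. A hypothetical sketch of that idea, not Cuprate's actual implementation:

```rust
// Hypothetical sketch: prefix every line of a TOML snippet with `#`,
// so the value still appears in the generated config file but is
// inert, i.e. the default is used until the user un-comments it.
fn comment_out(toml: &str) -> String {
    toml.lines().map(|line| format!("#{line}\n")).collect()
}

fn main() {
    assert_eq!(comment_out("p2p_port = 0"), "#p2p_port = 0\n");
    assert_eq!(
        comment_out("level = \"info\"\nmax_log_files = 7"),
        "#level = \"info\"\n#max_log_files = 7\n"
    );
}
```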
/// Reads the args & config file, returning a [`Config`].
pub fn read_config_and_args() -> Config {
let args = args::Args::parse();
@@ -76,32 +98,83 @@ pub fn read_config_and_args() -> Config {
args.apply_args(config)
}
/// The config for all of Cuprate.
#[derive(Debug, Default, Deserialize, Serialize, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct Config {
/// The network we should run on.
network: Network,
config_struct! {
/// The config for all of Cuprate.
#[derive(Debug, Deserialize, Serialize, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct Config {
/// The network cuprated should run on.
///
/// Valid values | "Mainnet", "Testnet", "Stagenet"
pub network: Network,
pub no_fast_sync: bool,
/// Enable/disable fast sync.
///
/// Fast sync skips verification of old blocks by
/// comparing block hashes to a built-in hash file;
/// disabling this will significantly increase sync time.
/// New blocks are still fully validated.
///
/// Type | boolean
/// Valid values | true, false
pub fast_sync: bool,
/// [`tracing`] config.
pub tracing: TracingConfig,
#[child = true]
/// Configuration for cuprated's logging system, tracing.
///
/// Tracing is used for logging to stdout and files.
pub tracing: TracingConfig,
pub tokio: TokioConfig,
#[child = true]
/// Configuration for cuprated's asynchronous runtime system, tokio.
///
/// Tokio is used for network operations and the major services inside `cuprated`.
pub tokio: TokioConfig,
pub rayon: RayonConfig,
#[child = true]
/// Configuration for cuprated's thread-pool system, rayon.
///
/// Rayon is used for CPU intensive tasks.
pub rayon: RayonConfig,
/// The P2P network config.
p2p: P2PConfig,
#[child = true]
/// Configuration for cuprated's P2P system.
pub p2p: P2PConfig,
/// The storage config.
pub storage: StorageConfig,
#[child = true]
/// Configuration for persistent data storage.
pub storage: StorageConfig,
pub fs: FileSystemConfig,
#[child = true]
/// Configuration for the file-system.
pub fs: FileSystemConfig,
}
}
impl Default for Config {
fn default() -> Self {
Self {
network: Default::default(),
fast_sync: true,
tracing: Default::default(),
tokio: Default::default(),
rayon: Default::default(),
p2p: Default::default(),
storage: Default::default(),
fs: Default::default(),
}
}
}
impl Config {
/// Returns a default [`Config`], with doc comments.
pub fn documented_config() -> String {
let str = toml::ser::to_string_pretty(&Self::default()).unwrap();
let mut doc = toml_edit::DocumentMut::from_str(&str).unwrap();
Self::write_docs(doc.as_table_mut());
format!("{HEADER}{doc}")
}
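For illustration, `documented_config()` stitches the `HEADER` above onto a TOML serialization of the defaults, with each field's doc comment re-emitted as `##` lines. A hypothetical excerpt of the result, assembled from the field docs above (not the exact output):

```toml
## For more documentation, see: <https://user.cuprate.org>.

## The network cuprated should run on.
##
## Valid values | "Mainnet", "Testnet", "Stagenet"
network = "Mainnet"

## Enable/disable fast sync.
##
## Type         | boolean
## Valid values | true, false
fast_sync = true
```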
/// Attempts to read a config file in [`toml`] format from the given [`Path`].
///
/// # Errors
@ -193,30 +266,13 @@ impl Config {
mod test {
use toml::from_str;
use crate::constants::EXAMPLE_CONFIG;
use super::*;
/// Tests the latest config is the `Default`.
#[test]
fn config_latest() {
let config: Config = from_str(EXAMPLE_CONFIG).unwrap();
assert_eq!(config, Config::default());
}
fn documented_config() {
let str = Config::documented_config();
let conf: Config = from_str(&str).unwrap();
/// Tests backwards compatibility.
#[test]
fn config_backwards_compat() {
// (De)serialization tests.
#[expect(
clippy::single_element_loop,
reason = "Remove after adding other versions"
)]
for version in ["0.0.1"] {
let path = format!("config/{version}.toml");
println!("Testing config serde backwards compat: {path}");
let string = read_to_string(path).unwrap();
from_str::<Config>(&string).unwrap();
}
assert_eq!(conf, Config::default());
}
}


@@ -5,7 +5,7 @@ use serde_json::Value;
use cuprate_helper::network::Network;
use crate::{config::Config, constants::EXAMPLE_CONFIG, version::CupratedVersionInfo};
use crate::{config::Config, version::CupratedVersionInfo};
/// Cuprate Args.
#[derive(clap::Parser, Debug)]
@@ -61,7 +61,7 @@ impl Args {
}
if self.generate_config {
println!("{EXAMPLE_CONFIG}");
println!("{}", Config::documented_config());
exit(0);
}
}
@@ -71,7 +71,7 @@ impl Args {
/// This may exit the program if a config value was set that requires an early exit.
pub const fn apply_args(&self, mut config: Config) -> Config {
config.network = self.network;
config.no_fast_sync = config.no_fast_sync || self.no_fast_sync;
config.fast_sync = config.fast_sync && !self.no_fast_sync;
if let Some(outbound_connections) = self.outbound_connections {
config.p2p.clear_net.general.outbound_connections = outbound_connections;


@@ -4,11 +4,44 @@ use serde::{Deserialize, Serialize};
use cuprate_helper::fs::{CUPRATE_CACHE_DIR, CUPRATE_DATA_DIR};
#[derive(Debug, Deserialize, Serialize, PartialEq, Eq)]
#[serde(deny_unknown_fields, default)]
pub struct FileSystemConfig {
pub data_directory: PathBuf,
pub cache_directory: PathBuf,
use super::macros::config_struct;
config_struct! {
/// The file system config.
#[derive(Debug, Deserialize, Serialize, PartialEq, Eq)]
#[serde(deny_unknown_fields, default)]
pub struct FileSystemConfig {
#[comment_out = true]
/// The data directory.
///
/// This directory stores the blockchain, transaction pool,
/// log files, and any misc data files.
///
/// The default directories for each OS:
///
/// | OS | Path |
/// |---------|-----------------------------------------------------|
/// | Windows | "C:\Users\Alice\AppData\Roaming\Cuprate\" |
/// | macOS | "/Users/Alice/Library/Application Support/Cuprate/" |
/// | Linux | "/home/alice/.local/share/cuprate/" |
pub data_directory: PathBuf,
#[comment_out = true]
/// The cache directory.
///
/// This directory stores cache files.
/// Although not recommended, this directory can be
/// deleted without major disruption to cuprated.
///
/// The default directories for each OS:
///
/// | OS | Path |
/// |---------|-----------------------------------------|
/// | Windows | "C:\Users\Alice\AppData\Local\Cuprate\" |
/// | macOS | "/Users/Alice/Library/Caches/Cuprate/" |
/// | Linux | "/home/alice/.cache/cuprate/" |
pub cache_directory: PathBuf,
}
}
impl Default for FileSystemConfig {


@@ -0,0 +1,137 @@
use toml_edit::TableLike;
/// A macro for config structs defined in `cuprated`. This macro generates a function that
/// can insert toml comments created from doc comments on fields.
///
/// # Attributes
/// - `#[flatten = true]`: lets the writer know that the field is flattened into the parent struct.
/// - `#[child = true]`: writes the doc comments for all fields in the child struct.
/// - `#[inline = true]`: inlines the struct into `{}` instead of having a separate `[]` header.
/// - `#[comment_out = true]`: comments out the field.
///
/// # Documentation
/// Consider using the following style when adding documentation:
///
/// ```rust
/// struct Config {
/// /// BRIEF DESCRIPTION.
/// ///
/// /// (optional) LONGER DESCRIPTION.
/// ///
/// /// Type | (optional) FIELD TYPE
/// /// Valid values | EXPRESSION REPRESENTING VALID VALUES
/// /// Examples | (optional) A FEW EXAMPLE VALUES
/// field: (),
/// }
/// ```
///
/// For example:
/// ```rust
/// struct Config {
/// /// Enable/disable fast sync.
/// ///
/// /// Fast sync skips verification of old blocks by
/// /// comparing block hashes to a built-in hash file;
/// /// disabling this will significantly increase sync time.
/// /// New blocks are still fully validated.
/// ///
/// /// Type | boolean
/// /// Valid values | true, false
/// fast_sync: bool,
/// }
/// ```
///
/// Language for types:
///
/// | Rust type | Wording used in user-book |
/// |--------------|---------------------------|
/// | bool | boolean
/// | u{8-64} | Number
/// | i{8-64} | Signed number
/// | f{32,64} | Floating point number
/// | str, String | String
/// | enum, struct | `DataStructureName` (e.g. `Duration`) or $DESCRIPTION (e.g. `IP address`)
///
/// If some fields are redundant or unnecessary, do not add them.
///
/// # Field documentation length
/// In order to prevent wrapping/scrollbars in the user book and in editors,
/// add newlines when a documentation line crosses ~70 characters, around this long:
///
/// `----------------------------------------------------------------------`
macro_rules! config_struct {
(
$(#[$meta:meta])*
pub struct $name:ident {
$(
$(#[flatten = $flat:literal])?
$(#[child = $child:literal])?
$(#[inline = $inline:literal])?
$(#[comment_out = $comment_out:literal])?
$(#[doc = $doc:expr])*
$(##[$field_meta:meta])*
pub $field:ident: $field_ty:ty,
)*
}
) => {
$(#[$meta])*
pub struct $name {
$(
$(#[doc = $doc])*
$(#[$field_meta])*
pub $field: $field_ty,
)*
}
impl $name {
#[allow(unused_labels, clippy::allow_attributes)]
pub fn write_docs(doc: &mut dyn ::toml_edit::TableLike) {
$(
'write_field: {
let key_str = &stringify!($field);
let mut field_prefix = [ $(
format!("##{}\n", $doc),
)*].concat();
$(
if $comment_out {
field_prefix.push('#');
}
)?
$(
if $flat {
<$field_ty>::write_docs(doc);
break 'write_field;
}
)?
$(
if $child {
<$field_ty>::write_docs(doc.get_key_value_mut(&key_str).unwrap().1.as_table_like_mut().unwrap());
}
)?
if let Some(table) = doc.entry(&key_str).or_insert_with(|| panic!()).as_table_mut() {
$(
if $inline {
let mut table = table.clone().into_inline_table();
doc.insert(&key_str, ::toml_edit::Item::Value(::toml_edit::Value::InlineTable(table)));
doc.key_mut(&key_str).unwrap().leaf_decor_mut().set_prefix(field_prefix);
break 'write_field;
}
)?
table.decor_mut().set_prefix(format!("\n{}", field_prefix));
} else {
doc.key_mut(&key_str).unwrap().leaf_decor_mut().set_prefix(field_prefix);
}
}
)*
}
}
};
}
pub(crate) use config_struct;
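The mechanism `config_struct!` relies on can be shown with a std-only sketch: doc comments desugar to `#[doc = "..."]` attributes, so a declarative macro can capture them as string literals and re-emit them as `##` TOML comments, just like the `field_prefix` logic above. `doc_capture!` here is a hypothetical miniature, not the real macro:

```rust
/// Captures a struct's doc comments and exposes them as `##` comments.
macro_rules! doc_capture {
    (
        $(#[doc = $doc:expr])*
        pub struct $name:ident;
    ) => {
        // Re-emit the doc comments on the struct itself.
        $(#[doc = $doc])*
        pub struct $name;

        impl $name {
            /// Concatenate the captured doc lines as TOML `##` comments.
            pub fn docs() -> String {
                [$(format!("##{}\n", $doc)),*].concat()
            }
        }
    };
}

doc_capture! {
    /// Enable/disable fast sync.
    pub struct FastSync;
}

fn main() {
    // `/// text` desugars to `#[doc = " text"]`, note the leading space.
    assert_eq!(FastSync::docs(), "## Enable/disable fast sync.\n");
}
```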


@@ -8,28 +8,73 @@ use serde::{Deserialize, Serialize};
use cuprate_helper::{fs::address_book_path, network::Network};
/// P2P config.
#[derive(Debug, Default, Deserialize, Serialize, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct P2PConfig {
/// Clear-net config.
pub clear_net: ClearNetConfig,
/// Block downloader config.
pub block_downloader: BlockDownloaderConfig,
use super::macros::config_struct;
config_struct! {
/// P2P config.
#[derive(Debug, Default, Deserialize, Serialize, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct P2PConfig {
#[child = true]
/// The clear-net P2P config.
pub clear_net: ClearNetConfig,
#[child = true]
/// Block downloader config.
///
/// The block downloader handles downloading old blocks from peers when we are behind.
pub block_downloader: BlockDownloaderConfig,
}
}
#[derive(Debug, Clone, Deserialize, Serialize, Eq, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct BlockDownloaderConfig {
/// The size in bytes of the buffer between the block downloader and the place which
/// is consuming the downloaded blocks.
pub buffer_bytes: usize,
/// The size of the in progress queue (in bytes) at which we stop requesting more blocks.
pub in_progress_queue_bytes: usize,
/// The [`Duration`] between checking the client pool for free peers.
pub check_client_pool_interval: Duration,
/// The target size of a single batch of blocks (in bytes).
pub target_batch_bytes: usize,
config_struct! {
#[derive(Debug, Clone, Deserialize, Serialize, Eq, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct BlockDownloaderConfig {
#[comment_out = true]
/// The size in bytes of the buffer between the block downloader
/// and the place which is consuming the downloaded blocks (`cuprated`).
///
/// This value is an absolute maximum,
/// once this is reached the block downloader will pause.
///
/// Type | Number
/// Valid values | >= 0
/// Examples | 1_000_000_000, 5_500_000_000, 500_000_000
pub buffer_bytes: usize,
#[comment_out = true]
/// The size of the in progress queue (in bytes)
/// at which cuprated stops requesting more blocks.
///
/// The value is _NOT_ an absolute maximum,
/// the in-progress queue could get much larger.
/// This value is only the point at which cuprated stops requesting more blocks;
/// if cuprated still has requests in progress,
/// it will still accept the response and add the blocks to the queue.
///
/// Type | Number
/// Valid values | >= 0
/// Examples | 500_000_000, 1_000_000_000
pub in_progress_queue_bytes: usize,
#[inline = true]
/// The duration between checking the client pool for free peers.
///
/// Type | Duration
/// Examples | { secs = 30, nanos = 0 }, { secs = 35, nanos = 123 }
pub check_client_pool_interval: Duration,
#[comment_out = true]
/// The target size of a single batch of blocks (in bytes).
///
/// This value must be below 100_000_000;
/// it is not recommended to set it above 30_000_000.
///
/// Type | Number
/// Valid values | 0..100_000_000
pub target_batch_bytes: usize,
}
}
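Given the attributes above and the defaults shown further down, the `[p2p.block_downloader]` table would plausibly render like this (illustrative; exact formatting depends on `toml_edit`):

```toml
[p2p.block_downloader]
## `#[comment_out = true]` fields are emitted commented out:
#buffer_bytes = 1000000000
#in_progress_queue_bytes = 500000000
#target_batch_bytes = 15000000

## `#[inline = true]` renders the `Duration` as an inline table:
check_client_pool_interval = { secs = 30, nanos = 0 }
```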
impl From<BlockDownloaderConfig> for cuprate_p2p::block_downloader::BlockDownloaderConfig {
@@ -50,19 +95,27 @@ impl Default for BlockDownloaderConfig {
buffer_bytes: 1_000_000_000,
in_progress_queue_bytes: 500_000_000,
check_client_pool_interval: Duration::from_secs(30),
target_batch_bytes: 10_000_000,
target_batch_bytes: 15_000_000,
}
}
}
/// The config values for P2P clear-net.
#[derive(Debug, Deserialize, Serialize, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct ClearNetConfig {
/// The server config.
pub listen_on: IpAddr,
#[serde(flatten)]
pub general: SharedNetConfig,
config_struct! {
/// The config values for P2P clear-net.
#[derive(Debug, Deserialize, Serialize, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct ClearNetConfig {
/// The IP address to bind and listen for connections on.
///
/// Type | IPv4/IPv6 address
/// Examples | "0.0.0.0", "192.168.1.50", "::"
pub listen_on: IpAddr,
#[flatten = true]
/// Shared config values.
##[serde(flatten)]
pub general: SharedNetConfig,
}
}
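The `#[flatten = true]` attribute mirrors `#[serde(flatten)]`: the `general` fields get no `[p2p.clear_net.general]` header and sit directly in the parent table. A sketch using defaults from further down (the `listen_on` value is an assumed placeholder):

```toml
[p2p.clear_net]
listen_on = "0.0.0.0"

## Fields below come from the flattened `SharedNetConfig`:
outbound_connections = 32
max_inbound_connections = 128
p2p_port = 18080
```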
impl Default for ClearNetConfig {
@@ -74,26 +127,66 @@ impl Default for ClearNetConfig {
}
}
/// Network config values shared between all network zones.
#[derive(Debug, Deserialize, Serialize, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct SharedNetConfig {
/// The number of outbound connections to make and try keep.
pub outbound_connections: usize,
/// The amount of extra connections we can make if we are under load from the rest of Cuprate.
pub extra_outbound_connections: usize,
/// The maximum amount of inbound connections
pub max_inbound_connections: usize,
/// The percent of connections that should be to peers we haven't connected to before.
pub gray_peers_percent: f64,
/// port to use to accept p2p connections.
pub p2p_port: u16,
/// The address book config.
address_book_config: AddressBookConfig,
config_struct! {
/// Network config values shared between all network zones.
#[derive(Debug, Deserialize, Serialize, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct SharedNetConfig {
#[comment_out = true]
/// The number of outbound connections to make and try keep.
///
/// It's recommended to keep this value above 12.
///
/// Type | Number
/// Valid values | >= 0
/// Examples | 12, 32, 64, 100, 500
pub outbound_connections: usize,
#[comment_out = true]
/// The amount of extra connections to make if cuprated is under load.
///
/// Type | Number
/// Valid values | >= 0
/// Examples | 0, 12, 32, 64, 100, 500
pub extra_outbound_connections: usize,
#[comment_out = true]
/// The maximum amount of inbound connections to allow.
///
/// Type | Number
/// Valid values | >= 0
/// Examples | 0, 12, 32, 64, 100, 500
pub max_inbound_connections: usize,
#[comment_out = true]
/// The percent of connections that should be
/// to peers that cuprated hasn't connected to before.
///
/// 0.0 is 0%.
/// 1.0 is 100%.
///
/// Type | Floating point number
/// Valid values | 0.0..1.0
/// Examples | 0.0, 0.5, 0.123, 0.999, 1.0
pub gray_peers_percent: f64,
/// The port to use to accept incoming P2P connections.
///
/// Setting this to 0 will disable incoming P2P connections.
///
/// Type | Number
/// Valid values | 0..65535
/// Examples | 18080, 9999, 5432
pub p2p_port: u16,
#[child = true]
/// The address book config.
pub address_book_config: AddressBookConfig,
}
}
impl SharedNetConfig {
/// Returns the [`AddressBookConfig`].
/// Returns the [`cuprate_address_book::AddressBookConfig`].
pub fn address_book_config(
&self,
cache_dir: &Path,
@@ -111,22 +204,47 @@ impl SharedNetConfig {
impl Default for SharedNetConfig {
fn default() -> Self {
Self {
outbound_connections: 64,
outbound_connections: 32,
extra_outbound_connections: 8,
max_inbound_connections: 128,
gray_peers_percent: 0.7,
p2p_port: 0,
p2p_port: 18080,
address_book_config: AddressBookConfig::default(),
}
}
}
#[derive(Debug, Deserialize, Serialize, Eq, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct AddressBookConfig {
max_white_list_length: usize,
max_gray_list_length: usize,
peer_save_period: Duration,
config_struct! {
/// The addressbook config exposed to users.
#[derive(Debug, Deserialize, Serialize, Eq, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct AddressBookConfig {
/// The size of the white peer list.
///
/// The white list holds peers that have been connected to before.
///
/// Type | Number
/// Valid values | >= 0
/// Examples | 1000, 500, 241
pub max_white_list_length: usize,
/// The size of the gray peer list.
///
/// The gray peer list holds peers that cuprated
/// has been told about but has not connected to.
///
/// Type | Number
/// Valid values | >= 0
/// Examples | 1000, 500, 241
pub max_gray_list_length: usize,
#[inline = true]
/// The time period between address book saves.
///
/// Type | Duration
/// Examples | { secs = 90, nanos = 0 }, { secs = 100, nanos = 123 }
pub peer_save_period: Duration,
}
}
impl Default for AddressBookConfig {


@@ -1,11 +1,18 @@
use serde::{Deserialize, Serialize};
/// The [`rayon`] config.
#[derive(Debug, Deserialize, Serialize, Eq, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct RayonConfig {
/// The number of threads to use for the [`rayon::ThreadPool`].
pub threads: usize,
use super::macros::config_struct;
config_struct! {
/// The [`rayon`] config.
#[derive(Debug, Deserialize, Serialize, Eq, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct RayonConfig {
#[comment_out = true]
/// Type | Number
/// Valid values | >= 1
/// Examples | 1, 8, 14
pub threads: usize,
}
}
impl Default for RayonConfig {


@@ -6,16 +6,31 @@ use cuprate_database::config::SyncMode;
use cuprate_database_service::ReaderThreads;
use cuprate_helper::fs::CUPRATE_DATA_DIR;
/// The storage config.
#[derive(Debug, Deserialize, Serialize, PartialEq, Eq)]
#[serde(deny_unknown_fields, default)]
pub struct StorageConfig {
/// The amount of reader threads to spawn between the tx-pool and blockchain.
pub reader_threads: usize,
/// The tx-pool config.
pub txpool: TxpoolConfig,
/// The blockchain config.
pub blockchain: BlockchainConfig,
use super::macros::config_struct;
config_struct! {
/// The storage config.
#[derive(Debug, Deserialize, Serialize, PartialEq, Eq)]
#[serde(deny_unknown_fields, default)]
pub struct StorageConfig {
#[comment_out = true]
/// The amount of reader threads to spawn for the tx-pool and blockchain.
///
/// The tx-pool and blockchain both share a single threadpool.
///
/// Type | Number
/// Valid values | >= 0
/// Examples | 1, 16, 10
pub reader_threads: usize,
#[child = true]
/// The tx-pool config.
pub txpool: TxpoolConfig,
#[child = true]
/// The blockchain config.
pub blockchain: BlockchainConfig,
}
}
impl Default for StorageConfig {
@@ -28,23 +43,35 @@ impl Default for StorageConfig {
}
}
/// The blockchain config.
#[derive(Default, Debug, Deserialize, Serialize, PartialEq, Eq)]
#[serde(deny_unknown_fields, default)]
pub struct BlockchainConfig {
#[serde(flatten)]
pub shared: SharedStorageConfig,
config_struct! {
/// The blockchain config.
#[derive(Default, Debug, Deserialize, Serialize, PartialEq, Eq)]
#[serde(deny_unknown_fields, default)]
pub struct BlockchainConfig {
#[flatten = true]
/// Shared config.
##[serde(flatten)]
pub shared: SharedStorageConfig,
}
}
/// The tx-pool config.
#[derive(Debug, Deserialize, Serialize, PartialEq, Eq)]
#[serde(deny_unknown_fields, default)]
pub struct TxpoolConfig {
#[serde(flatten)]
pub shared: SharedStorageConfig,
config_struct! {
/// The tx-pool config.
#[derive(Debug, Deserialize, Serialize, PartialEq, Eq)]
#[serde(deny_unknown_fields, default)]
pub struct TxpoolConfig {
#[flatten = true]
/// Shared config.
##[serde(flatten)]
pub shared: SharedStorageConfig,
/// The maximum size of the tx-pool.
pub max_txpool_byte_size: usize,
/// The maximum size of the tx-pool.
///
/// Type | Number
/// Valid values | >= 0
/// Examples | 100_000_000, 50_000_000
pub max_txpool_byte_size: usize,
}
}
impl Default for TxpoolConfig {
@@ -56,10 +83,19 @@ impl Default for TxpoolConfig {
}
}
/// Config values shared between the tx-pool and blockchain.
#[derive(Debug, Default, Deserialize, Serialize, PartialEq, Eq)]
#[serde(deny_unknown_fields, default)]
pub struct SharedStorageConfig {
/// The [`SyncMode`] of the database.
pub sync_mode: SyncMode,
config_struct! {
/// Config values shared between the tx-pool and blockchain.
#[derive(Debug, Default, Deserialize, Serialize, PartialEq, Eq)]
#[serde(deny_unknown_fields, default)]
pub struct SharedStorageConfig {
#[comment_out = true]
/// The sync mode of the database.
///
/// Using "Safe" makes the DB less likely to corrupt
/// if there is an unexpected crash, although it will
/// make DB writes much slower.
///
/// Valid values | "Fast", "Safe"
pub sync_mode: SyncMode,
}
}


@@ -1,11 +1,20 @@
use serde::{Deserialize, Serialize};
/// [`tokio`] config.
#[derive(Debug, Deserialize, Serialize, Eq, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct TokioConfig {
/// The amount of threads to spawn for the async thread-pool
pub threads: usize,
use super::macros::config_struct;
config_struct! {
/// [`tokio`] config.
#[derive(Debug, Deserialize, Serialize, Eq, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct TokioConfig {
#[comment_out = true]
/// The amount of threads to spawn for the tokio thread-pool.
///
/// Type | Number
/// Valid values | >= 1
/// Examples | 1, 8, 14
pub threads: usize,
}
}
impl Default for TokioConfig {


@@ -1,20 +1,38 @@
use serde::{Deserialize, Serialize};
use tracing::level_filters::LevelFilter;
/// [`tracing`] config.
#[derive(Debug, Default, Deserialize, Serialize, Eq, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct TracingConfig {
pub stdout: StdoutTracingConfig,
pub file: FileTracingConfig,
use super::macros::config_struct;
config_struct! {
/// [`tracing`] config.
#[derive(Debug, Default, Deserialize, Serialize, Eq, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct TracingConfig {
#[child = true]
/// Configuration for cuprated's stdout logging system.
pub stdout: StdoutTracingConfig,
#[child = true]
/// Configuration for cuprated's file logging system.
pub file: FileTracingConfig,
}
}
#[derive(Debug, Deserialize, Serialize, Eq, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct StdoutTracingConfig {
/// The default minimum log level.
#[serde(with = "level_filter_serde")]
pub level: LevelFilter,
config_struct! {
#[derive(Debug, Deserialize, Serialize, Eq, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct StdoutTracingConfig {
/// The minimum log level for stdout.
///
/// Levels below this one will not be shown.
/// "error" is the highest level, only showing errors;
/// "trace" is the lowest, showing as much as possible.
///
/// Type | Level
/// Valid values | "error", "warn", "info", "debug", "trace"
##[serde(with = "level_filter_serde")]
pub level: LevelFilter,
}
}
impl Default for StdoutTracingConfig {
@@ -25,15 +43,30 @@ impl Default for StdoutTracingConfig {
}
}
#[derive(Debug, Deserialize, Serialize, Eq, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct FileTracingConfig {
/// The default minimum log level.
#[serde(with = "level_filter_serde")]
pub level: LevelFilter,
/// The maximum amount of log files to keep, once this number is passed the oldest file
/// will be deleted.
pub max_log_files: usize,
config_struct! {
#[derive(Debug, Deserialize, Serialize, Eq, PartialEq)]
#[serde(deny_unknown_fields, default)]
pub struct FileTracingConfig {
/// The minimum log level for file logs.
///
/// Levels below this one will not be shown.
/// "error" is the highest level, only showing errors;
/// "trace" is the lowest, showing as much as possible.
///
/// Type | Level
/// Valid values | "error", "warn", "info", "debug", "trace"
##[serde(with = "level_filter_serde")]
pub level: LevelFilter,
/// The maximum amount of log files to keep.
///
/// Once this number is passed the oldest file will be deleted.
///
/// Type | Number
/// Valid values | >= 0
/// Examples | 0, 7, 200
pub max_log_files: usize,
}
}
impl Default for FileTracingConfig {


@@ -34,8 +34,6 @@ pub const DEFAULT_CONFIG_WARNING: &str = formatcp!(
pub const DEFAULT_CONFIG_STARTUP_DELAY: Duration = Duration::from_secs(15);
pub const EXAMPLE_CONFIG: &str = include_str!("../config/Cuprated.toml");
#[cfg(test)]
mod test {
use super::*;
@@ -45,15 +43,15 @@ mod test {
fn version() {
let semantic_version = format!("{MAJOR_VERSION}.{MINOR_VERSION}.{PATCH_VERSION}");
assert_eq!(semantic_version, VERSION);
assert_eq!(VERSION, "0.0.1");
assert_eq!(VERSION, "0.0.2");
}
#[test]
fn version_build() {
if cfg!(debug_assertions) {
assert_eq!(VERSION_BUILD, "0.0.1-debug");
assert_eq!(VERSION_BUILD, "0.0.2-debug");
} else {
assert_eq!(VERSION_BUILD, "0.0.1-release");
assert_eq!(VERSION_BUILD, "0.0.2-release");
}
}
}


@@ -32,8 +32,8 @@ const _: () = {
/// The killswitch activates if the current timestamp is ahead of this timestamp.
///
/// Wed Apr 16 12:00:00 AM UTC 2025
pub const KILLSWITCH_ACTIVATION_TIMESTAMP: u64 = 1744761600;
/// Wed May 14 12:00:00 AM UTC 2025
pub const KILLSWITCH_ACTIVATION_TIMESTAMP: u64 = 1747180800;
/// Check if the system clock is past a certain timestamp,
/// if so, exit the entire program.
@@ -44,8 +44,8 @@ fn killswitch() {
/// sanity checking the system's clock to make
/// sure it is not overly behind.
///
/// Tue Mar 11 08:33:20 PM UTC 2025
const SYSTEM_CLOCK_SANITY_TIMESTAMP: u64 = 1741725200;
/// Tue April 8 12:00:00 AM UTC 2025
const SYSTEM_CLOCK_SANITY_TIMESTAMP: u64 = 1744070400;
let current_ts = current_unix_timestamp();
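The two updated constants can be sanity-checked with plain timestamp arithmetic (a standalone sketch; cuprated's own `current_unix_timestamp` helper is not used here):

```rust
fn main() {
    // Constants copied from the diff above.
    const KILLSWITCH_ACTIVATION_TIMESTAMP: u64 = 1_747_180_800; // Wed May 14 2025
    const SYSTEM_CLOCK_SANITY_TIMESTAMP: u64 = 1_744_070_400; // Tue Apr 8 2025
    const SECS_PER_DAY: u64 = 86_400;

    // Both fall exactly on midnight UTC, matching their doc comments.
    assert_eq!(KILLSWITCH_ACTIVATION_TIMESTAMP % SECS_PER_DAY, 0);
    assert_eq!(SYSTEM_CLOCK_SANITY_TIMESTAMP % SECS_PER_DAY, 0);

    // The killswitch fires 36 days after the clock-sanity floor.
    assert_eq!(
        (KILLSWITCH_ACTIVATION_TIMESTAMP - SYSTEM_CLOCK_SANITY_TIMESTAMP) / SECS_PER_DAY,
        36
    );
}
```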


@@ -20,12 +20,13 @@ use std::{mem, sync::Arc};
use tokio::sync::mpsc;
use tower::{Service, ServiceExt};
use tracing::level_filters::LevelFilter;
use tracing::{error, info, level_filters::LevelFilter};
use tracing_subscriber::{layer::SubscriberExt, reload::Handle, util::SubscriberInitExt, Registry};
use cuprate_consensus_context::{
BlockChainContextRequest, BlockChainContextResponse, BlockchainContextService,
};
use cuprate_database::{InitError, DATABASE_CORRUPT_MSG};
use cuprate_helper::time::secs_to_hms;
use cuprate_types::blockchain::BlockchainWriteRequest;
@@ -55,7 +56,7 @@ fn main() {
let config = config::read_config_and_args();
blockchain::set_fast_sync_hashes(!config.no_fast_sync, config.network());
blockchain::set_fast_sync_hashes(config.fast_sync, config.network());
// Initialize logging.
logging::init_logging(&config);
@@ -77,9 +78,13 @@ fn main() {
config.blockchain_config(),
Arc::clone(&db_thread_pool),
)
.unwrap();
.inspect_err(|e| error!("Blockchain database error: {e}"))
.expect(DATABASE_CORRUPT_MSG);
let (txpool_read_handle, txpool_write_handle, _) =
cuprate_txpool::service::init_with_pool(config.txpool_config(), db_thread_pool).unwrap();
cuprate_txpool::service::init_with_pool(config.txpool_config(), db_thread_pool)
.inspect_err(|e| error!("Txpool database error: {e}"))
.expect(DATABASE_CORRUPT_MSG);
// Initialize async tasks.
@ -141,13 +146,19 @@ fn main() {
.await;
// Start the command listener.
let (command_tx, command_rx) = mpsc::channel(1);
std::thread::spawn(|| commands::command_listener(command_tx));
if std::io::IsTerminal::is_terminal(&std::io::stdin()) {
let (command_tx, command_rx) = mpsc::channel(1);
std::thread::spawn(|| commands::command_listener(command_tx));
// Wait on the io_loop, spawned on a separate task as this improves performance.
tokio::spawn(commands::io_loop(command_rx, context_svc))
.await
.unwrap();
// Wait on the io_loop, spawned on a separate task as this improves performance.
tokio::spawn(commands::io_loop(command_rx, context_svc))
.await
.unwrap();
} else {
// If no STDIN, await OS exit signal.
info!("Terminal/TTY not detected, disabling STDIN commands");
tokio::signal::ctrl_c().await.unwrap();
}
});
}


@@ -2,11 +2,9 @@
//!
//! Will contain the code to initiate the RPC and a request handler.
mod bin;
mod constants;
mod handler;
mod json;
mod other;
mod request;
mod handlers;
mod rpc_handler;
mod service;
pub use handler::CupratedRpcHandler;
pub use rpc_handler::CupratedRpcHandler;


@@ -1,85 +0,0 @@
use anyhow::Error;
use cuprate_rpc_types::{
bin::{
BinRequest, BinResponse, GetBlocksByHeightRequest, GetBlocksByHeightResponse,
GetBlocksRequest, GetBlocksResponse, GetHashesRequest, GetHashesResponse,
GetOutputIndexesRequest, GetOutputIndexesResponse, GetOutsRequest, GetOutsResponse,
GetTransactionPoolHashesRequest, GetTransactionPoolHashesResponse,
},
json::{GetOutputDistributionRequest, GetOutputDistributionResponse},
};
use crate::rpc::CupratedRpcHandler;
/// Map a [`BinRequest`] to the function that will lead to a [`BinResponse`].
pub(super) async fn map_request(
state: CupratedRpcHandler,
request: BinRequest,
) -> Result<BinResponse, Error> {
use BinRequest as Req;
use BinResponse as Resp;
Ok(match request {
Req::GetBlocks(r) => Resp::GetBlocks(get_blocks(state, r).await?),
Req::GetBlocksByHeight(r) => Resp::GetBlocksByHeight(get_blocks_by_height(state, r).await?),
Req::GetHashes(r) => Resp::GetHashes(get_hashes(state, r).await?),
Req::GetOutputIndexes(r) => Resp::GetOutputIndexes(get_output_indexes(state, r).await?),
Req::GetOuts(r) => Resp::GetOuts(get_outs(state, r).await?),
Req::GetTransactionPoolHashes(r) => {
Resp::GetTransactionPoolHashes(get_transaction_pool_hashes(state, r).await?)
}
Req::GetOutputDistribution(r) => {
Resp::GetOutputDistribution(get_output_distribution(state, r).await?)
}
})
}
async fn get_blocks(
state: CupratedRpcHandler,
request: GetBlocksRequest,
) -> Result<GetBlocksResponse, Error> {
todo!()
}
async fn get_blocks_by_height(
state: CupratedRpcHandler,
request: GetBlocksByHeightRequest,
) -> Result<GetBlocksByHeightResponse, Error> {
todo!()
}
async fn get_hashes(
state: CupratedRpcHandler,
request: GetHashesRequest,
) -> Result<GetHashesResponse, Error> {
todo!()
}
async fn get_output_indexes(
state: CupratedRpcHandler,
request: GetOutputIndexesRequest,
) -> Result<GetOutputIndexesResponse, Error> {
todo!()
}
async fn get_outs(
state: CupratedRpcHandler,
request: GetOutsRequest,
) -> Result<GetOutsResponse, Error> {
todo!()
}
async fn get_transaction_pool_hashes(
state: CupratedRpcHandler,
request: GetTransactionPoolHashesRequest,
) -> Result<GetTransactionPoolHashesResponse, Error> {
todo!()
}
async fn get_output_distribution(
state: CupratedRpcHandler,
request: GetOutputDistributionRequest,
) -> Result<GetOutputDistributionResponse, Error> {
todo!()
}


@@ -3,3 +3,6 @@
/// The string message used in RPC response fields for when
/// `cuprated` does not support a field that `monerod` has.
pub(super) const FIELD_NOT_SUPPORTED: &str = "`cuprated` does not support this field.";
/// The error message returned when an unsupported RPC call is requested.
pub(super) const UNSUPPORTED_RPC_CALL: &str = "This RPC call is not supported by Cuprate.";


@@ -0,0 +1,18 @@
//! RPC handler functions.
//!
//! These are the glue (async) functions that connect all the
//! internal `cuprated` functions and fulfill the request.
//!
//! - JSON-RPC handlers are in [`json_rpc`]
//! - Other JSON endpoint handlers are in [`other_json`]
//! - Other binary endpoint handlers are in [`bin`]
//!
//! - [`helper`] contains helper functions used by many handlers
//! - [`shared`] contains shared functions used by multiple handlers
pub(super) mod bin;
pub(super) mod json_rpc;
pub(super) mod other_json;
mod helper;
mod shared;


@@ -0,0 +1,253 @@
//! RPC request handler functions (binary endpoints).
//!
//! TODO:
//! Some handlers have `todo!()`s for other Cuprate internals that must be completed, see:
//! <https://github.com/Cuprate/cuprate/pull/355>
use std::num::NonZero;
use anyhow::{anyhow, Error};
use bytes::Bytes;
use cuprate_constants::rpc::{RESTRICTED_BLOCK_COUNT, RESTRICTED_TRANSACTIONS_COUNT};
use cuprate_fixed_bytes::ByteArrayVec;
use cuprate_helper::cast::{u64_to_usize, usize_to_u64};
use cuprate_rpc_interface::RpcHandler;
use cuprate_rpc_types::{
base::{AccessResponseBase, ResponseBase},
bin::{
BinRequest, BinResponse, GetBlocksByHeightRequest, GetBlocksByHeightResponse,
GetBlocksRequest, GetBlocksResponse, GetHashesRequest, GetHashesResponse,
GetOutputIndexesRequest, GetOutputIndexesResponse, GetOutsRequest, GetOutsResponse,
GetTransactionPoolHashesRequest, GetTransactionPoolHashesResponse,
},
json::{GetOutputDistributionRequest, GetOutputDistributionResponse},
misc::RequestedInfo,
};
use cuprate_types::{
rpc::{PoolInfo, PoolInfoExtent},
BlockCompleteEntry,
};
use crate::rpc::{
handlers::{helper, shared},
service::{blockchain, txpool},
CupratedRpcHandler,
};
/// Map a [`BinRequest`] to the function that will lead to a [`BinResponse`].
pub async fn map_request(
state: CupratedRpcHandler,
request: BinRequest,
) -> Result<BinResponse, Error> {
use BinRequest as Req;
use BinResponse as Resp;
Ok(match request {
Req::GetBlocks(r) => Resp::GetBlocks(get_blocks(state, r).await?),
Req::GetBlocksByHeight(r) => Resp::GetBlocksByHeight(get_blocks_by_height(state, r).await?),
Req::GetHashes(r) => Resp::GetHashes(get_hashes(state, r).await?),
Req::GetOutputIndexes(r) => Resp::GetOutputIndexes(get_output_indexes(state, r).await?),
Req::GetOuts(r) => Resp::GetOuts(get_outs(state, r).await?),
Req::GetTransactionPoolHashes(r) => {
Resp::GetTransactionPoolHashes(get_transaction_pool_hashes(state, r).await?)
}
Req::GetOutputDistribution(r) => {
Resp::GetOutputDistribution(get_output_distribution(state, r).await?)
}
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L611-L789>
async fn get_blocks(
mut state: CupratedRpcHandler,
request: GetBlocksRequest,
) -> Result<GetBlocksResponse, Error> {
// Time should be set early:
// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L628-L631>
let daemon_time = cuprate_helper::time::current_unix_timestamp();
let GetBlocksRequest {
requested_info,
block_ids,
start_height,
prune,
no_miner_tx,
pool_info_since,
} = request;
let block_hashes: Vec<[u8; 32]> = (&block_ids).into();
drop(block_ids);
let Some(requested_info) = RequestedInfo::from_u8(requested_info) else {
return Err(anyhow!("Wrong requested info"));
};
let (get_blocks, get_pool) = match requested_info {
RequestedInfo::BlocksOnly => (true, false),
RequestedInfo::BlocksAndPool => (true, true),
RequestedInfo::PoolOnly => (false, true),
};
let pool_info_extent = PoolInfoExtent::None;
let pool_info = if get_pool {
let is_restricted = state.is_restricted();
let include_sensitive_txs = !is_restricted;
let max_tx_count = if is_restricted {
RESTRICTED_TRANSACTIONS_COUNT
} else {
usize::MAX
};
txpool::pool_info(
&mut state.txpool_read,
include_sensitive_txs,
max_tx_count,
NonZero::new(u64_to_usize(pool_info_since)),
)
.await?
} else {
PoolInfo::None
};
let resp = GetBlocksResponse {
base: helper::access_response_base(false),
blocks: vec![],
start_height: 0,
current_height: 0,
output_indices: vec![],
daemon_time,
pool_info,
};
if !get_blocks {
return Ok(resp);
}
if let Some(block_id) = block_hashes.first() {
let (height, hash) = helper::top_height(&mut state).await?;
if hash == *block_id {
return Ok(GetBlocksResponse {
current_height: height + 1,
..resp
});
}
}
let (block_hashes, start_height, _) =
blockchain::next_chain_entry(&mut state.blockchain_read, block_hashes, start_height)
.await?;
if start_height.is_none() {
return Err(anyhow!("Block IDs were not sorted properly"));
}
let (blocks, missing_hashes, height) =
blockchain::block_complete_entries(&mut state.blockchain_read, block_hashes).await?;
if !missing_hashes.is_empty() {
return Err(anyhow!("Missing blocks"));
}
Ok(GetBlocksResponse {
blocks,
current_height: usize_to_u64(height),
..resp
})
}
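The `requested_info` handling in `get_blocks` can be sketched standalone. The numeric values are assumed to mirror `monerod`'s `BLOCKS_ONLY = 0`, `BLOCKS_AND_POOL = 1`, `POOL_ONLY = 2`; the function name is illustrative:

```rust
/// Standalone sketch of `RequestedInfo::from_u8` plus the
/// `(get_blocks, get_pool)` flag mapping used in `get_blocks`.
/// Discriminant values are assumed to mirror `monerod`.
fn requested_info_flags(requested_info: u8) -> Option<(bool, bool)> {
    Some(match requested_info {
        0 => (true, false), // BlocksOnly
        1 => (true, true),  // BlocksAndPool
        2 => (false, true), // PoolOnly
        _ => return None,   // "Wrong requested info"
    })
}
```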
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L817-L857>
async fn get_blocks_by_height(
mut state: CupratedRpcHandler,
request: GetBlocksByHeightRequest,
) -> Result<GetBlocksByHeightResponse, Error> {
if state.is_restricted() && request.heights.len() > RESTRICTED_BLOCK_COUNT {
return Err(anyhow!("Too many blocks requested in restricted mode"));
}
let blocks =
blockchain::block_complete_entries_by_height(&mut state.blockchain_read, request.heights)
.await?;
Ok(GetBlocksByHeightResponse {
base: helper::access_response_base(false),
blocks,
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L859-L880>
async fn get_hashes(
mut state: CupratedRpcHandler,
request: GetHashesRequest,
) -> Result<GetHashesResponse, Error> {
let GetHashesRequest {
start_height,
block_ids,
} = request;
// FIXME: impl `last()`
let last = {
let len = block_ids.len();
if len == 0 {
return Err(anyhow!("block_ids empty"));
}
block_ids[len - 1]
};
let hashes: Vec<[u8; 32]> = (&block_ids).into();
let (m_blocks_ids, _, current_height) =
blockchain::next_chain_entry(&mut state.blockchain_read, hashes, start_height).await?;
Ok(GetHashesResponse {
base: helper::access_response_base(false),
m_blocks_ids: m_blocks_ids.into(),
current_height: usize_to_u64(current_height),
start_height,
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L959-L977>
async fn get_output_indexes(
mut state: CupratedRpcHandler,
request: GetOutputIndexesRequest,
) -> Result<GetOutputIndexesResponse, Error> {
Ok(GetOutputIndexesResponse {
base: helper::access_response_base(false),
o_indexes: blockchain::tx_output_indexes(&mut state.blockchain_read, request.txid).await?,
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L882-L910>
async fn get_outs(
state: CupratedRpcHandler,
request: GetOutsRequest,
) -> Result<GetOutsResponse, Error> {
shared::get_outs(state, request).await
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L1689-L1711>
async fn get_transaction_pool_hashes(
state: CupratedRpcHandler,
_: GetTransactionPoolHashesRequest,
) -> Result<GetTransactionPoolHashesResponse, Error> {
Ok(GetTransactionPoolHashesResponse {
base: helper::access_response_base(false),
tx_hashes: shared::get_transaction_pool_hashes(state)
.await
.map(ByteArrayVec::from)?,
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L3352-L3398>
async fn get_output_distribution(
state: CupratedRpcHandler,
request: GetOutputDistributionRequest,
) -> Result<GetOutputDistributionResponse, Error> {
shared::get_output_distribution(state, request).await
}


@ -0,0 +1,191 @@
//! These are internal helper functions used by the actual RPC handlers.
//!
//! Many handlers have bodies that differ only slightly;
//! the identical code is extracted and reused here.
//!
//! These build on top of [`crate::rpc::service`] functions.
use anyhow::{anyhow, Error};
use cuprate_helper::{
cast::{u64_to_usize, usize_to_u64},
map::split_u128_into_low_high_bits,
};
use cuprate_rpc_types::{
base::{AccessResponseBase, ResponseBase},
misc::BlockHeader,
};
use cuprate_types::HardFork;
use monero_serai::transaction::Timelock;
use crate::rpc::{
service::{blockchain, blockchain_context},
CupratedRpcHandler,
};
/// Map some data into a [`BlockHeader`].
///
/// Sort of equivalent to:
/// <https://github.com/monero-project/monero/blob/893916ad091a92e765ce3241b94e706ad012b62a/src/rpc/core_rpc_server.cpp#L2361>.
pub(super) async fn block_header(
state: &mut CupratedRpcHandler,
height: u64,
fill_pow_hash: bool,
) -> Result<BlockHeader, Error> {
let block = blockchain::block(&mut state.blockchain_read, height).await?;
let header = blockchain::block_extended_header(&mut state.blockchain_read, height).await?;
let hardfork = HardFork::from_vote(header.vote);
let (top_height, _) = top_height(state).await?;
// TODO: if the request block is not on the main chain,
// we must get the alt block and this variable will be `true`.
let orphan_status = false;
// TODO: impl cheaper way to get this.
// <https://github.com/Cuprate/cuprate/pull/355#discussion_r1904508934>
let difficulty = blockchain_context::batch_get_difficulties(
&mut state.blockchain_context,
vec![(height, hardfork)],
)
.await?
.first()
.copied()
.ok_or_else(|| anyhow!("Failed to get block difficulty"))?;
let pow_hash = if fill_pow_hash {
let seed_height =
cuprate_consensus_rules::blocks::randomx_seed_height(u64_to_usize(height));
let seed_hash = blockchain::block_hash(
&mut state.blockchain_read,
height,
todo!("access to `cuprated`'s Chain"),
)
.await?;
Some(
blockchain_context::calculate_pow(
&mut state.blockchain_context,
hardfork,
block,
seed_hash,
)
.await?,
)
} else {
None
};
let block_weight = usize_to_u64(header.block_weight);
let depth = top_height.saturating_sub(height);
let (cumulative_difficulty_top64, cumulative_difficulty) =
split_u128_into_low_high_bits(header.cumulative_difficulty);
let (difficulty_top64, difficulty) = split_u128_into_low_high_bits(difficulty);
let reward = block
.miner_transaction
.prefix()
.outputs
.iter()
.map(|o| o.amount.expect("coinbase is transparent"))
.sum::<u64>();
Ok(cuprate_types::rpc::BlockHeader {
block_weight,
cumulative_difficulty_top64,
cumulative_difficulty,
depth,
difficulty_top64,
difficulty,
hash: block.hash(),
height,
long_term_weight: usize_to_u64(header.long_term_weight),
major_version: header.version,
miner_tx_hash: block.miner_transaction.hash(),
minor_version: header.vote,
nonce: block.header.nonce,
num_txes: usize_to_u64(block.transactions.len()),
orphan_status,
pow_hash,
prev_hash: block.header.previous,
reward,
timestamp: block.header.timestamp,
}
.into())
}
/// Same as [`block_header`] but with the block's hash.
pub(super) async fn block_header_by_hash(
state: &mut CupratedRpcHandler,
hash: [u8; 32],
fill_pow_hash: bool,
) -> Result<BlockHeader, Error> {
let (_, height) = blockchain::find_block(&mut state.blockchain_read, hash)
.await?
.ok_or_else(|| anyhow!("Block did not exist."))?;
let block_header = block_header(state, usize_to_u64(height), fill_pow_hash).await?;
Ok(block_header)
}
/// Check that `height` does not exceed the [`top_height`].
///
/// # Errors
/// Returns the [`top_height`] on [`Ok`] and
/// an [`Error`] if `height` is greater than [`top_height`].
pub(super) async fn check_height(
state: &mut CupratedRpcHandler,
height: u64,
) -> Result<u64, Error> {
let (top_height, _) = top_height(state).await?;
if height > top_height {
return Err(anyhow!(
"Requested block height: {height} greater than top block height: {top_height}",
));
}
Ok(top_height)
}
/// Parse a hexadecimal [`String`] as a 32-byte hash.
#[expect(clippy::needless_pass_by_value)]
pub(super) fn hex_to_hash(hex: String) -> Result<[u8; 32], Error> {
let error = || anyhow!("Failed to parse hex representation of hash. Hex = {hex}.");
let Ok(bytes) = hex::decode(&hex) else {
return Err(error());
};
let Ok(hash) = bytes.try_into() else {
return Err(error());
};
Ok(hash)
}
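A std-only sketch of what `hex_to_hash` does (the real function delegates to the `hex` crate; this re-implementation is illustrative only): a 64-character hex string parses into a 32-byte hash, anything else is rejected with the same error message.

```rust
/// Illustrative std-only re-implementation of `hex_to_hash`
/// (the real function uses the `hex` crate).
fn hex_to_hash(hex: &str) -> Result<[u8; 32], String> {
    let error = || format!("Failed to parse hex representation of hash. Hex = {hex}.");

    // 32 bytes == 64 hex characters; the ASCII check keeps slicing safe.
    if hex.len() != 64 || !hex.is_ascii() {
        return Err(error());
    }

    let mut hash = [0_u8; 32];
    for (i, byte) in hash.iter_mut().enumerate() {
        // Each byte is 2 hex characters.
        *byte = u8::from_str_radix(&hex[i * 2..i * 2 + 2], 16).map_err(|_| error())?;
    }
    Ok(hash)
}
```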
/// [`cuprate_types::blockchain::BlockchainResponse::ChainHeight`] minus 1.
pub(super) async fn top_height(state: &mut CupratedRpcHandler) -> Result<(u64, [u8; 32]), Error> {
let (chain_height, hash) = blockchain::chain_height(&mut state.blockchain_read).await?;
let height = chain_height.checked_sub(1).expect("chain height is never 0; the genesis block always exists");
Ok((height, hash))
}
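`top_height` relies on heights being 0-indexed: a chain containing only the genesis block has a chain height of 1 and a top height of 0. A sketch of just the arithmetic:

```rust
/// The top block height is the chain height minus one (heights are
/// 0-indexed). A chain height of 0 is impossible: the genesis block
/// always exists.
fn top_height(chain_height: u64) -> u64 {
    chain_height
        .checked_sub(1)
        .expect("chain height is never 0; the genesis block always exists")
}
```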
/// TODO: impl bootstrap
pub const fn response_base(is_bootstrap: bool) -> ResponseBase {
if is_bootstrap {
ResponseBase::OK_UNTRUSTED
} else {
ResponseBase::OK
}
}
/// TODO: impl bootstrap
pub const fn access_response_base(is_bootstrap: bool) -> AccessResponseBase {
if is_bootstrap {
AccessResponseBase::OK_UNTRUSTED
} else {
AccessResponseBase::OK
}
}

File diff suppressed because it is too large


@ -0,0 +1,790 @@
//! RPC request handler functions (other JSON endpoints).
//!
//! TODO:
//! Some handlers have `todo!()`s for other Cuprate internals that must be completed, see:
//! <https://github.com/Cuprate/cuprate/pull/355>
use std::{
borrow::Cow,
collections::{BTreeSet, HashMap, HashSet},
};
use anyhow::{anyhow, Error};
use monero_serai::transaction::{Input, Timelock, Transaction};
use cuprate_constants::rpc::{
MAX_RESTRICTED_GLOBAL_FAKE_OUTS_COUNT, RESTRICTED_SPENT_KEY_IMAGES_COUNT,
RESTRICTED_TRANSACTIONS_COUNT,
};
use cuprate_helper::cast::usize_to_u64;
use cuprate_hex::{Hex, HexVec};
use cuprate_p2p_core::{client::handshaker::builder::DummyAddressBook, ClearNet};
use cuprate_rpc_interface::RpcHandler;
use cuprate_rpc_types::{
base::{AccessResponseBase, ResponseBase},
misc::{Status, TxEntry, TxEntryType},
other::{
GetAltBlocksHashesRequest, GetAltBlocksHashesResponse, GetHeightRequest, GetHeightResponse,
GetLimitRequest, GetLimitResponse, GetNetStatsRequest, GetNetStatsResponse, GetOutsRequest,
GetOutsResponse, GetPeerListRequest, GetPeerListResponse, GetPublicNodesRequest,
GetPublicNodesResponse, GetTransactionPoolHashesRequest, GetTransactionPoolHashesResponse,
GetTransactionPoolRequest, GetTransactionPoolResponse, GetTransactionPoolStatsRequest,
GetTransactionPoolStatsResponse, GetTransactionsRequest, GetTransactionsResponse,
InPeersRequest, InPeersResponse, IsKeyImageSpentRequest, IsKeyImageSpentResponse,
MiningStatusRequest, MiningStatusResponse, OtherRequest, OtherResponse, OutPeersRequest,
OutPeersResponse, PopBlocksRequest, PopBlocksResponse, SaveBcRequest, SaveBcResponse,
SendRawTransactionRequest, SendRawTransactionResponse, SetBootstrapDaemonRequest,
SetBootstrapDaemonResponse, SetLimitRequest, SetLimitResponse, SetLogCategoriesRequest,
SetLogCategoriesResponse, SetLogHashRateRequest, SetLogHashRateResponse,
SetLogLevelRequest, SetLogLevelResponse, StartMiningRequest, StartMiningResponse,
StopDaemonRequest, StopDaemonResponse, StopMiningRequest, StopMiningResponse,
UpdateRequest, UpdateResponse,
},
};
use cuprate_types::{
rpc::{KeyImageSpentStatus, PoolInfo, PoolTxInfo, PublicNode},
TxInPool, TxRelayChecks,
};
use crate::{
rpc::{
constants::UNSUPPORTED_RPC_CALL,
handlers::{helper, shared},
service::{address_book, blockchain, blockchain_context, blockchain_manager, txpool},
CupratedRpcHandler,
},
statics::START_INSTANT_UNIX,
};
/// Map a [`OtherRequest`] to the function that will lead to a [`OtherResponse`].
pub async fn map_request(
state: CupratedRpcHandler,
request: OtherRequest,
) -> Result<OtherResponse, Error> {
use OtherRequest as Req;
use OtherResponse as Resp;
Ok(match request {
Req::GetHeight(r) => Resp::GetHeight(get_height(state, r).await?),
Req::GetTransactions(r) => Resp::GetTransactions(get_transactions(state, r).await?),
Req::GetAltBlocksHashes(r) => {
Resp::GetAltBlocksHashes(get_alt_blocks_hashes(state, r).await?)
}
Req::IsKeyImageSpent(r) => Resp::IsKeyImageSpent(is_key_image_spent(state, r).await?),
Req::SendRawTransaction(r) => {
Resp::SendRawTransaction(send_raw_transaction(state, r).await?)
}
Req::SaveBc(r) => Resp::SaveBc(save_bc(state, r).await?),
Req::GetPeerList(r) => Resp::GetPeerList(get_peer_list(state, r).await?),
Req::SetLogLevel(r) => Resp::SetLogLevel(set_log_level(state, r).await?),
Req::SetLogCategories(r) => Resp::SetLogCategories(set_log_categories(state, r).await?),
Req::GetTransactionPool(r) => {
Resp::GetTransactionPool(get_transaction_pool(state, r).await?)
}
Req::GetTransactionPoolStats(r) => {
Resp::GetTransactionPoolStats(get_transaction_pool_stats(state, r).await?)
}
Req::StopDaemon(r) => Resp::StopDaemon(stop_daemon(state, r).await?),
Req::GetLimit(r) => Resp::GetLimit(get_limit(state, r).await?),
Req::SetLimit(r) => Resp::SetLimit(set_limit(state, r).await?),
Req::OutPeers(r) => Resp::OutPeers(out_peers(state, r).await?),
Req::InPeers(r) => Resp::InPeers(in_peers(state, r).await?),
Req::GetNetStats(r) => Resp::GetNetStats(get_net_stats(state, r).await?),
Req::GetOuts(r) => Resp::GetOuts(get_outs(state, r).await?),
Req::PopBlocks(r) => Resp::PopBlocks(pop_blocks(state, r).await?),
Req::GetTransactionPoolHashes(r) => {
Resp::GetTransactionPoolHashes(get_transaction_pool_hashes(state, r).await?)
}
Req::GetPublicNodes(r) => Resp::GetPublicNodes(get_public_nodes(state, r).await?),
// Unsupported requests.
Req::SetBootstrapDaemon(_)
| Req::Update(_)
| Req::StartMining(_)
| Req::StopMining(_)
| Req::MiningStatus(_)
| Req::SetLogHashRate(_) => return Err(anyhow!(UNSUPPORTED_RPC_CALL)),
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L486-L499>
async fn get_height(
mut state: CupratedRpcHandler,
_: GetHeightRequest,
) -> Result<GetHeightResponse, Error> {
let (height, hash) = helper::top_height(&mut state).await?;
let hash = Hex(hash);
Ok(GetHeightResponse {
base: helper::response_base(false),
height,
hash,
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L979-L1227>
async fn get_transactions(
mut state: CupratedRpcHandler,
request: GetTransactionsRequest,
) -> Result<GetTransactionsResponse, Error> {
if state.is_restricted() && request.txs_hashes.len() > RESTRICTED_TRANSACTIONS_COUNT {
return Err(anyhow!(
"Too many transactions requested in restricted mode"
));
}
let (txs_in_blockchain, missed_txs) = {
let requested_txs = request.txs_hashes.into_iter().map(|tx| tx.0).collect();
blockchain::transactions(&mut state.blockchain_read, requested_txs).await?
};
let missed_tx = missed_txs.clone().into_iter().map(Hex).collect();
// Check the txpool for missed transactions.
let txs_in_pool = if missed_txs.is_empty() {
vec![]
} else {
let include_sensitive_txs = !state.is_restricted();
txpool::txs_by_hash(&mut state.txpool_read, missed_txs, include_sensitive_txs).await?
};
let (txs, txs_as_hex, txs_as_json) = {
// Prepare the final JSON output.
let len = txs_in_blockchain.len() + txs_in_pool.len();
let mut txs = Vec::with_capacity(len);
let mut txs_as_hex = Vec::with_capacity(len);
let mut txs_as_json = Vec::with_capacity(if request.decode_as_json { len } else { 0 });
// Map all blockchain transactions.
for tx in txs_in_blockchain {
let tx_hash = Hex(tx.tx_hash);
let prunable_hash = Hex(tx.prunable_hash);
let (pruned_as_hex, prunable_as_hex) = if tx.pruned_blob.is_empty() {
(HexVec::new(), HexVec::new())
} else {
(HexVec(tx.pruned_blob), HexVec(tx.prunable_blob))
};
let as_hex = if pruned_as_hex.is_empty() {
// `monerod` will insert a `""` into the `txs_as_hex` array for pruned transactions.
// curl http://127.0.0.1:18081/get_transactions -d '{"txs_hashes":["4c8b98753d1577d225a497a50f453827cff3aa023a4add60ec4ce4f923f75de8"]}' -H 'Content-Type: application/json'
HexVec::new()
} else {
HexVec(tx.tx_blob)
};
txs_as_hex.push(as_hex.clone());
let as_json = if request.decode_as_json {
let tx = Transaction::read(&mut as_hex.as_slice())?;
let json_type = cuprate_types::json::tx::Transaction::from(tx);
let json = serde_json::to_string(&json_type).unwrap();
txs_as_json.push(json.clone());
json
} else {
String::new()
};
let tx_entry_type = TxEntryType::Blockchain {
block_height: tx.block_height,
block_timestamp: tx.block_timestamp,
confirmations: tx.confirmations,
output_indices: tx.output_indices,
in_pool: false,
};
let tx = TxEntry {
as_hex,
as_json,
double_spend_seen: false,
tx_hash,
prunable_as_hex,
prunable_hash,
pruned_as_hex,
tx_entry_type,
};
txs.push(tx);
}
// Map all txpool transactions.
for tx_in_pool in txs_in_pool {
let TxInPool {
tx_blob,
tx_hash,
double_spend_seen,
received_timestamp,
relayed,
} = tx_in_pool;
let tx_hash = Hex(tx_hash);
let tx = Transaction::read(&mut tx_blob.as_slice())?;
let pruned_as_hex = HexVec::new();
let prunable_as_hex = HexVec::new();
let prunable_hash = Hex([0; 32]);
let as_hex = HexVec(tx_blob);
txs_as_hex.push(as_hex.clone());
let as_json = if request.decode_as_json {
let json_type = cuprate_types::json::tx::Transaction::from(tx);
let json = serde_json::to_string(&json_type).unwrap();
txs_as_json.push(json.clone());
json
} else {
String::new()
};
let tx_entry_type = TxEntryType::Pool {
relayed,
received_timestamp,
in_pool: true,
};
let tx = TxEntry {
as_hex,
as_json,
double_spend_seen,
tx_hash,
prunable_as_hex,
prunable_hash,
pruned_as_hex,
tx_entry_type,
};
txs.push(tx);
}
(txs, txs_as_hex, txs_as_json)
};
Ok(GetTransactionsResponse {
base: helper::access_response_base(false),
txs_as_hex,
txs_as_json,
missed_tx,
txs,
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L790-L815>
async fn get_alt_blocks_hashes(
mut state: CupratedRpcHandler,
_: GetAltBlocksHashesRequest,
) -> Result<GetAltBlocksHashesResponse, Error> {
let blks_hashes = blockchain::alt_chains(&mut state.blockchain_read)
.await?
.into_iter()
.map(|info| Hex(info.block_hash))
.collect();
Ok(GetAltBlocksHashesResponse {
base: helper::access_response_base(false),
blks_hashes,
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L1229-L1305>
async fn is_key_image_spent(
mut state: CupratedRpcHandler,
request: IsKeyImageSpentRequest,
) -> Result<IsKeyImageSpentResponse, Error> {
let restricted = state.is_restricted();
if restricted && request.key_images.len() > RESTRICTED_SPENT_KEY_IMAGES_COUNT {
return Err(anyhow!("Too many key images queried in restricted mode"));
}
let key_images = request
.key_images
.into_iter()
.map(|k| k.0)
.collect::<Vec<[u8; 32]>>();
let mut spent_status = Vec::with_capacity(key_images.len());
// Check the blockchain for key image spend status.
blockchain::key_images_spent_vec(&mut state.blockchain_read, key_images.clone())
.await?
.into_iter()
.for_each(|ki| {
if ki {
spent_status.push(KeyImageSpentStatus::SpentInBlockchain);
} else {
spent_status.push(KeyImageSpentStatus::Unspent);
}
});
assert_eq!(spent_status.len(), key_images.len(), "key_images_spent() should be returning a Vec with an equal length to the input, the below zip() relies on this.");
// Filter the remaining unspent key images out from the vector.
let key_images = key_images
.into_iter()
.zip(&spent_status)
.filter_map(|(ki, status)| match status {
KeyImageSpentStatus::Unspent => Some(ki),
KeyImageSpentStatus::SpentInBlockchain => None,
KeyImageSpentStatus::SpentInPool => unreachable!(),
})
.collect::<Vec<[u8; 32]>>();
// Check if the remaining unspent key images exist in the transaction pool.
if !key_images.is_empty() {
txpool::key_images_spent_vec(&mut state.txpool_read, key_images, !restricted)
.await?
.into_iter()
.for_each(|ki| {
if ki {
spent_status.push(KeyImageSpentStatus::SpentInPool);
} else {
spent_status.push(KeyImageSpentStatus::Unspent);
}
});
}
let spent_status = spent_status
.into_iter()
.map(KeyImageSpentStatus::to_u8)
.collect();
Ok(IsKeyImageSpentResponse {
base: helper::access_response_base(false),
spent_status,
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L1307-L1411>
async fn send_raw_transaction(
mut state: CupratedRpcHandler,
request: SendRawTransactionRequest,
) -> Result<SendRawTransactionResponse, Error> {
let mut resp = SendRawTransactionResponse {
base: helper::access_response_base(false),
double_spend: false,
fee_too_low: false,
invalid_input: false,
invalid_output: false,
low_mixin: false,
nonzero_unlock_time: false,
not_relayed: request.do_not_relay,
overspend: false,
reason: String::new(),
sanity_check_failed: false,
too_big: false,
too_few_outputs: false,
tx_extra_too_big: false,
};
let tx = Transaction::read(&mut request.tx_as_hex.as_slice())?;
if request.do_sanity_checks {
/// FIXME: these checks could be defined elsewhere.
///
/// <https://github.com/monero-project/monero/blob/893916ad091a92e765ce3241b94e706ad012b62a/src/cryptonote_core/tx_sanity_check.cpp#L42>
fn tx_sanity_check(tx: &Transaction, rct_outs_available: u64) -> Result<(), String> {
if tx.prefix().inputs.is_empty() {
return Err("No inputs".to_string());
}
let mut rct_indices = vec![];
let mut n_indices: usize = 0;
for input in &tx.prefix().inputs {
match input {
Input::Gen(_) => return Err("Transaction is coinbase".to_string()),
Input::ToKey {
amount,
key_offsets,
key_image,
} => {
let Some(amount) = amount else {
continue;
};
/// <https://github.com/monero-project/monero/blob/893916ad091a92e765ce3241b94e706ad012b62a/src/cryptonote_basic/cryptonote_format_utils.cpp#L1526>
fn relative_output_offsets_to_absolute(mut offsets: Vec<u64>) -> Vec<u64> {
assert!(!offsets.is_empty());
for i in 1..offsets.len() {
offsets[i] += offsets[i - 1];
}
offsets
}
n_indices += key_offsets.len();
let absolute = relative_output_offsets_to_absolute(key_offsets.clone());
rct_indices.extend(absolute);
}
}
}
if n_indices <= 10 {
return Ok(());
}
if rct_outs_available < 10_000 {
return Ok(());
}
let rct_indices_len = rct_indices.len();
if rct_indices_len < n_indices * 8 / 10 {
return Err(format!("amount of unique indices is too low (amount of rct indices is {rct_indices_len} out of total {n_indices} indices)."));
}
let median = cuprate_helper::num::median(rct_indices);
if median < rct_outs_available * 6 / 10 {
return Err(format!("median offset index is too low (median is {median} out of total {rct_outs_available} offsets). Transactions should contain a higher fraction of recent outputs."));
}
Ok(())
}
let rct_outs_available = blockchain::total_rct_outputs(&mut state.blockchain_read).await?;
if let Err(e) = tx_sanity_check(&tx, rct_outs_available) {
resp.base.response_base.status = Status::Failed;
resp.reason.push_str(&format!("Sanity check failed: {e}"));
resp.sanity_check_failed = true;
return Ok(resp);
}
}
let tx_relay_checks =
txpool::check_maybe_relay_local(&mut state.txpool_manager, tx, !request.do_not_relay)
.await?;
if tx_relay_checks.is_empty() {
return Ok(resp);
}
// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L124>
fn add_reason(reasons: &mut String, reason: &'static str) {
if !reasons.is_empty() {
reasons.push_str(", ");
}
reasons.push_str(reason);
}
let mut reasons = String::new();
#[rustfmt::skip]
let array = [
(&mut resp.double_spend, TxRelayChecks::DOUBLE_SPEND, "double spend"),
(&mut resp.fee_too_low, TxRelayChecks::FEE_TOO_LOW, "fee too low"),
(&mut resp.invalid_input, TxRelayChecks::INVALID_INPUT, "invalid input"),
(&mut resp.invalid_output, TxRelayChecks::INVALID_OUTPUT, "invalid output"),
(&mut resp.low_mixin, TxRelayChecks::LOW_MIXIN, "bad ring size"),
(&mut resp.nonzero_unlock_time, TxRelayChecks::NONZERO_UNLOCK_TIME, "tx unlock time is not zero"),
(&mut resp.overspend, TxRelayChecks::OVERSPEND, "overspend"),
(&mut resp.too_big, TxRelayChecks::TOO_BIG, "too big"),
(&mut resp.too_few_outputs, TxRelayChecks::TOO_FEW_OUTPUTS, "too few outputs"),
(&mut resp.tx_extra_too_big, TxRelayChecks::TX_EXTRA_TOO_BIG, "tx-extra too big"),
];
for (field, flag, reason) in array {
if tx_relay_checks.contains(flag) {
*field = true;
add_reason(&mut reasons, reason);
}
}
Ok(resp)
}
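The `relative_output_offsets_to_absolute` helper nested in the sanity check above is a plain prefix sum: Monero serializes ring member indices as deltas to save space, and the absolute index is the running total.

```rust
/// Prefix-sum conversion of relative ring member offsets to absolute
/// output indices, as used in the sanity check above.
fn relative_output_offsets_to_absolute(mut offsets: Vec<u64>) -> Vec<u64> {
    assert!(!offsets.is_empty());
    for i in 1..offsets.len() {
        // Each offset is relative to the previous absolute index.
        offsets[i] += offsets[i - 1];
    }
    offsets
}
```

For example, relative offsets `[5, 3, 2]` decode to absolute indices `[5, 8, 10]`.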
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L1525-L1535>
async fn save_bc(mut state: CupratedRpcHandler, _: SaveBcRequest) -> Result<SaveBcResponse, Error> {
blockchain_manager::sync(&mut state.blockchain_manager).await?;
Ok(SaveBcResponse {
base: ResponseBase::OK,
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L1537-L1582>
async fn get_peer_list(
mut state: CupratedRpcHandler,
request: GetPeerListRequest,
) -> Result<GetPeerListResponse, Error> {
let (white_list, gray_list) = address_book::peerlist::<ClearNet>(&mut DummyAddressBook).await?;
Ok(GetPeerListResponse {
base: helper::response_base(false),
white_list,
gray_list,
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L1663-L1687>
async fn get_transaction_pool(
mut state: CupratedRpcHandler,
_: GetTransactionPoolRequest,
) -> Result<GetTransactionPoolResponse, Error> {
let include_sensitive_txs = !state.is_restricted();
let (transactions, spent_key_images) =
txpool::pool(&mut state.txpool_read, include_sensitive_txs).await?;
Ok(GetTransactionPoolResponse {
base: helper::access_response_base(false),
transactions,
spent_key_images,
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L1741-L1756>
async fn get_transaction_pool_stats(
mut state: CupratedRpcHandler,
_: GetTransactionPoolStatsRequest,
) -> Result<GetTransactionPoolStatsResponse, Error> {
let include_sensitive_txs = !state.is_restricted();
let pool_stats = txpool::pool_stats(&mut state.txpool_read, include_sensitive_txs).await?;
Ok(GetTransactionPoolStatsResponse {
base: helper::access_response_base(false),
pool_stats,
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L1780-L1788>
async fn stop_daemon(
mut state: CupratedRpcHandler,
_: StopDaemonRequest,
) -> Result<StopDaemonResponse, Error> {
blockchain_manager::stop(&mut state.blockchain_manager).await?;
Ok(StopDaemonResponse { status: Status::Ok })
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L3066-L3077>
async fn get_limit(
mut state: CupratedRpcHandler,
_: GetLimitRequest,
) -> Result<GetLimitResponse, Error> {
todo!("waiting on p2p service");
Ok(GetLimitResponse {
base: helper::response_base(false),
limit_down: todo!(),
limit_up: todo!(),
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L3079-L3117>
async fn set_limit(
mut state: CupratedRpcHandler,
request: SetLimitRequest,
) -> Result<SetLimitResponse, Error> {
todo!("waiting on p2p service");
Ok(SetLimitResponse {
base: helper::response_base(false),
limit_down: todo!(),
limit_up: todo!(),
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L3119-L3127>
async fn out_peers(
mut state: CupratedRpcHandler,
request: OutPeersRequest,
) -> Result<OutPeersResponse, Error> {
todo!("waiting on p2p service");
Ok(OutPeersResponse {
base: helper::response_base(false),
out_peers: todo!(),
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L3129-L3137>
async fn in_peers(
mut state: CupratedRpcHandler,
request: InPeersRequest,
) -> Result<InPeersResponse, Error> {
todo!("waiting on p2p service");
Ok(InPeersResponse {
base: helper::response_base(false),
in_peers: todo!(),
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L584-L599>
async fn get_net_stats(
mut state: CupratedRpcHandler,
_: GetNetStatsRequest,
) -> Result<GetNetStatsResponse, Error> {
todo!("waiting on p2p service");
Ok(GetNetStatsResponse {
base: helper::response_base(false),
start_time: *START_INSTANT_UNIX,
total_packets_in: todo!(),
total_bytes_in: todo!(),
total_packets_out: todo!(),
total_bytes_out: todo!(),
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L912-L957>
async fn get_outs(
state: CupratedRpcHandler,
request: GetOutsRequest,
) -> Result<GetOutsResponse, Error> {
let outs = shared::get_outs(
state,
cuprate_rpc_types::bin::GetOutsRequest {
outputs: request.outputs,
get_txid: request.get_txid,
},
)
.await?
.outs
.into_iter()
.map(Into::into)
.collect();
Ok(GetOutsResponse {
base: helper::response_base(false),
outs,
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L3242-L3252>
async fn pop_blocks(
mut state: CupratedRpcHandler,
request: PopBlocksRequest,
) -> Result<PopBlocksResponse, Error> {
let height =
blockchain_manager::pop_blocks(&mut state.blockchain_manager, request.nblocks).await?;
Ok(PopBlocksResponse {
base: helper::response_base(false),
height,
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L1713-L1739>
async fn get_transaction_pool_hashes(
mut state: CupratedRpcHandler,
_: GetTransactionPoolHashesRequest,
) -> Result<GetTransactionPoolHashesResponse, Error> {
Ok(GetTransactionPoolHashesResponse {
base: helper::response_base(false),
tx_hashes: shared::get_transaction_pool_hashes(state)
.await?
.into_iter()
.map(Hex)
.collect(),
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L193-L225>
async fn get_public_nodes(
mut state: CupratedRpcHandler,
request: GetPublicNodesRequest,
) -> Result<GetPublicNodesResponse, Error> {
let (white, gray) = address_book::peerlist::<ClearNet>(&mut DummyAddressBook).await?;
fn map(peers: Vec<cuprate_types::rpc::Peer>) -> Vec<PublicNode> {
peers
.into_iter()
.map(|peer| {
let cuprate_types::rpc::Peer {
host,
rpc_port,
rpc_credits_per_hash,
last_seen,
..
} = peer;
PublicNode {
host,
rpc_port,
rpc_credits_per_hash,
last_seen,
}
})
.collect()
}
let white = map(white);
let gray = map(gray);
Ok(GetPublicNodesResponse {
base: helper::response_base(false),
white,
gray,
})
}
//---------------------------------------------------------------------------------------------------- Unsupported RPC calls (for now)
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L1758-L1778>
async fn set_bootstrap_daemon(
state: CupratedRpcHandler,
request: SetBootstrapDaemonRequest,
) -> Result<SetBootstrapDaemonResponse, Error> {
todo!();
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L3139-L3240>
async fn update(
state: CupratedRpcHandler,
request: UpdateRequest,
) -> Result<UpdateResponse, Error> {
todo!();
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L1641-L1652>
async fn set_log_level(
state: CupratedRpcHandler,
request: SetLogLevelRequest,
) -> Result<SetLogLevelResponse, Error> {
todo!()
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L1654-L1661>
async fn set_log_categories(
state: CupratedRpcHandler,
request: SetLogCategoriesRequest,
) -> Result<SetLogCategoriesResponse, Error> {
todo!()
}
//---------------------------------------------------------------------------------------------------- Unsupported RPC calls (forever)
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L1413-L1462>
async fn start_mining(
state: CupratedRpcHandler,
request: StartMiningRequest,
) -> Result<StartMiningResponse, Error> {
unreachable!()
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L1464-L1482>
async fn stop_mining(
state: CupratedRpcHandler,
request: StopMiningRequest,
) -> Result<StopMiningResponse, Error> {
unreachable!();
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L1484-L1523>
async fn mining_status(
state: CupratedRpcHandler,
request: MiningStatusRequest,
) -> Result<MiningStatusResponse, Error> {
unreachable!();
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L1626-L1639>
async fn set_log_hash_rate(
state: CupratedRpcHandler,
request: SetLogHashRateRequest,
) -> Result<SetLogHashRateResponse, Error> {
unreachable!();
}


@@ -0,0 +1,130 @@
//! RPC handler functions that are shared between different endpoint/methods.
//!
//! TODO:
//! Some handlers have `todo!()`s for other Cuprate internals that must be completed, see:
//! <https://github.com/Cuprate/cuprate/pull/355>
use std::{
collections::{HashMap, HashSet},
num::NonZero,
};
use anyhow::{anyhow, Error};
use cuprate_types::OutputDistributionInput;
use monero_serai::transaction::Timelock;
use cuprate_constants::rpc::MAX_RESTRICTED_GLOBAL_FAKE_OUTS_COUNT;
use cuprate_helper::cast::usize_to_u64;
use cuprate_hex::Hex;
use cuprate_rpc_interface::RpcHandler;
use cuprate_rpc_types::{
bin::{
GetOutsRequest, GetOutsResponse, GetTransactionPoolHashesRequest,
GetTransactionPoolHashesResponse,
},
json::{GetOutputDistributionRequest, GetOutputDistributionResponse},
misc::{Distribution, OutKeyBin},
};
use crate::rpc::{
handlers::helper,
service::{blockchain, blockchain_context, txpool},
CupratedRpcHandler,
};
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L912-L957>
///
/// Shared between:
/// - Other JSON's `/get_outs`
/// - Binary's `/get_outs.bin`
pub(super) async fn get_outs(
mut state: CupratedRpcHandler,
request: GetOutsRequest,
) -> Result<GetOutsResponse, Error> {
if state.is_restricted() && request.outputs.len() > MAX_RESTRICTED_GLOBAL_FAKE_OUTS_COUNT {
return Err(anyhow!("Too many outs requested"));
}
let outputs = blockchain::outputs_vec(
&mut state.blockchain_read,
request.outputs,
request.get_txid,
)
.await?;
let mut outs = Vec::<OutKeyBin>::with_capacity(outputs.len());
let blockchain_ctx = state.blockchain_context.blockchain_context();
for (_, index_vec) in outputs {
for (_, out) in index_vec {
let out_key = OutKeyBin {
key: out.key.0,
mask: out.commitment.0,
unlocked: cuprate_consensus_rules::transactions::output_unlocked(
&out.time_lock,
blockchain_ctx.chain_height,
blockchain_ctx.current_adjusted_timestamp_for_time_lock(),
blockchain_ctx.current_hf,
),
height: usize_to_u64(out.height),
txid: out.txid.unwrap_or_default(),
};
outs.push(out_key);
}
}
Ok(GetOutsResponse {
base: helper::access_response_base(false),
outs,
})
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L1713-L1739>
///
/// Shared between:
/// - Other JSON's `/get_transaction_pool_hashes`
/// - Binary's `/get_transaction_pool_hashes.bin`
///
/// Returns transaction hashes.
pub(super) async fn get_transaction_pool_hashes(
mut state: CupratedRpcHandler,
) -> Result<Vec<[u8; 32]>, Error> {
let include_sensitive_txs = !state.is_restricted();
txpool::all_hashes(&mut state.txpool_read, include_sensitive_txs).await
}
/// <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.cpp#L3352-L3398>
///
/// Shared between:
/// - Other JSON's `/get_output_distribution`
/// - Binary's `/get_output_distribution.bin`
///
/// Returns the output distribution.
pub(super) async fn get_output_distribution(
mut state: CupratedRpcHandler,
request: GetOutputDistributionRequest,
) -> Result<GetOutputDistributionResponse, Error> {
if state.is_restricted() && request.amounts != [0] {
return Err(anyhow!(
"Restricted RPC can only get output distribution for RCT outputs. Use your own node."
));
}
let input = OutputDistributionInput {
amounts: request.amounts,
cumulative: request.cumulative,
from_height: request.from_height,
// 0 / `None` is a placeholder for the whole chain
to_height: NonZero::new(request.to_height),
};
let distributions = blockchain::output_distribution(&mut state.blockchain_read, input).await?;
Ok(GetOutputDistributionResponse {
base: helper::access_response_base(false),
distributions: todo!(
"This type contains binary strings: <https://github.com/monero-project/monero/issues/9422>"
),
})
}
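The `NonZero::new(request.to_height)` conversion above leans on the sentinel convention noted in the comment: a `to_height` of `0` means "the whole chain". A minimal sketch of that mapping (the `to_height_bound` helper is hypothetical, not a Cuprate function):

```rust
use std::num::NonZero;

// Sketch of the sentinel conversion used above: a `to_height` of 0
// means "up to the top of the chain", which maps naturally onto
// `Option<NonZero<u64>>` (`None` = whole chain).
fn to_height_bound(to_height: u64) -> Option<NonZero<u64>> {
    NonZero::new(to_height)
}

fn main() {
    // 0 is the "whole chain" placeholder...
    assert!(to_height_bound(0).is_none());
    // ...while any real height survives as an upper bound.
    assert_eq!(to_height_bound(3_000_000).unwrap().get(), 3_000_000);
}
```

Encoding the sentinel as `Option<NonZero<u64>>` makes the "no upper bound" case impossible to confuse with a real height of zero further down the call chain.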


@@ -1,294 +0,0 @@
use std::sync::Arc;
use anyhow::Error;
use tower::ServiceExt;
use cuprate_rpc_types::json::{
AddAuxPowRequest, AddAuxPowResponse, BannedRequest, BannedResponse, CalcPowRequest,
CalcPowResponse, FlushCacheRequest, FlushCacheResponse, FlushTransactionPoolRequest,
FlushTransactionPoolResponse, GenerateBlocksRequest, GenerateBlocksResponse,
GetAlternateChainsRequest, GetAlternateChainsResponse, GetBansRequest, GetBansResponse,
GetBlockCountRequest, GetBlockCountResponse, GetBlockHeaderByHashRequest,
GetBlockHeaderByHashResponse, GetBlockHeaderByHeightRequest, GetBlockHeaderByHeightResponse,
GetBlockHeadersRangeRequest, GetBlockHeadersRangeResponse, GetBlockRequest, GetBlockResponse,
GetCoinbaseTxSumRequest, GetCoinbaseTxSumResponse, GetConnectionsRequest,
GetConnectionsResponse, GetFeeEstimateRequest, GetFeeEstimateResponse, GetInfoRequest,
GetInfoResponse, GetLastBlockHeaderRequest, GetLastBlockHeaderResponse, GetMinerDataRequest,
GetMinerDataResponse, GetOutputHistogramRequest, GetOutputHistogramResponse,
GetTransactionPoolBacklogRequest, GetTransactionPoolBacklogResponse, GetTxIdsLooseRequest,
GetTxIdsLooseResponse, GetVersionRequest, GetVersionResponse, HardForkInfoRequest,
HardForkInfoResponse, JsonRpcRequest, JsonRpcResponse, OnGetBlockHashRequest,
OnGetBlockHashResponse, PruneBlockchainRequest, PruneBlockchainResponse, RelayTxRequest,
RelayTxResponse, SetBansRequest, SetBansResponse, SubmitBlockRequest, SubmitBlockResponse,
SyncInfoRequest, SyncInfoResponse,
};
use crate::rpc::CupratedRpcHandler;
/// Map a [`JsonRpcRequest`] to the function that will lead to a [`JsonRpcResponse`].
pub(super) async fn map_request(
state: CupratedRpcHandler,
request: JsonRpcRequest,
) -> Result<JsonRpcResponse, Error> {
use JsonRpcRequest as Req;
use JsonRpcResponse as Resp;
Ok(match request {
Req::GetBlockCount(r) => Resp::GetBlockCount(get_block_count(state, r).await?),
Req::OnGetBlockHash(r) => Resp::OnGetBlockHash(on_get_block_hash(state, r).await?),
Req::SubmitBlock(r) => Resp::SubmitBlock(submit_block(state, r).await?),
Req::GenerateBlocks(r) => Resp::GenerateBlocks(generate_blocks(state, r).await?),
Req::GetLastBlockHeader(r) => {
Resp::GetLastBlockHeader(get_last_block_header(state, r).await?)
}
Req::GetBlockHeaderByHash(r) => {
Resp::GetBlockHeaderByHash(get_block_header_by_hash(state, r).await?)
}
Req::GetBlockHeaderByHeight(r) => {
Resp::GetBlockHeaderByHeight(get_block_header_by_height(state, r).await?)
}
Req::GetBlockHeadersRange(r) => {
Resp::GetBlockHeadersRange(get_block_headers_range(state, r).await?)
}
Req::GetBlock(r) => Resp::GetBlock(get_block(state, r).await?),
Req::GetConnections(r) => Resp::GetConnections(get_connections(state, r).await?),
Req::GetInfo(r) => Resp::GetInfo(get_info(state, r).await?),
Req::HardForkInfo(r) => Resp::HardForkInfo(hard_fork_info(state, r).await?),
Req::SetBans(r) => Resp::SetBans(set_bans(state, r).await?),
Req::GetBans(r) => Resp::GetBans(get_bans(state, r).await?),
Req::Banned(r) => Resp::Banned(banned(state, r).await?),
Req::FlushTransactionPool(r) => {
Resp::FlushTransactionPool(flush_transaction_pool(state, r).await?)
}
Req::GetOutputHistogram(r) => {
Resp::GetOutputHistogram(get_output_histogram(state, r).await?)
}
Req::GetCoinbaseTxSum(r) => Resp::GetCoinbaseTxSum(get_coinbase_tx_sum(state, r).await?),
Req::GetVersion(r) => Resp::GetVersion(get_version(state, r).await?),
Req::GetFeeEstimate(r) => Resp::GetFeeEstimate(get_fee_estimate(state, r).await?),
Req::GetAlternateChains(r) => {
Resp::GetAlternateChains(get_alternate_chains(state, r).await?)
}
Req::RelayTx(r) => Resp::RelayTx(relay_tx(state, r).await?),
Req::SyncInfo(r) => Resp::SyncInfo(sync_info(state, r).await?),
Req::GetTransactionPoolBacklog(r) => {
Resp::GetTransactionPoolBacklog(get_transaction_pool_backlog(state, r).await?)
}
Req::GetMinerData(r) => Resp::GetMinerData(get_miner_data(state, r).await?),
Req::PruneBlockchain(r) => Resp::PruneBlockchain(prune_blockchain(state, r).await?),
Req::CalcPow(r) => Resp::CalcPow(calc_pow(state, r).await?),
Req::FlushCache(r) => Resp::FlushCache(flush_cache(state, r).await?),
Req::AddAuxPow(r) => Resp::AddAuxPow(add_aux_pow(state, r).await?),
Req::GetTxIdsLoose(r) => Resp::GetTxIdsLoose(get_tx_ids_loose(state, r).await?),
})
}
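The `map_request` function above is a plain enum-to-handler dispatch: each request variant calls exactly one handler and wraps its output back into the matching response variant. A self-contained sketch of the pattern (the `Req`/`Resp` enums and handlers here are hypothetical stand-ins for the `cuprate_rpc_types` types, not the real API):

```rust
// Hypothetical stand-ins for `JsonRpcRequest` / `JsonRpcResponse`.
#[derive(Debug, PartialEq)]
enum Req {
    GetBlockCount,
    GetVersion,
}

#[derive(Debug, PartialEq)]
enum Resp {
    GetBlockCount(u64),
    GetVersion(String),
}

fn get_block_count() -> Result<u64, String> {
    Ok(3_000_000)
}

fn get_version() -> Result<String, String> {
    Ok("0.0.2".to_string())
}

// Each request variant maps 1:1 onto a handler; `?` propagates any
// handler error, and the success value is re-wrapped in the matching
// response variant.
fn map_request(request: Req) -> Result<Resp, String> {
    Ok(match request {
        Req::GetBlockCount => Resp::GetBlockCount(get_block_count()?),
        Req::GetVersion => Resp::GetVersion(get_version()?),
    })
}

fn main() {
    assert_eq!(
        map_request(Req::GetBlockCount).unwrap(),
        Resp::GetBlockCount(3_000_000)
    );
}
```

Because the `match` is exhaustive, adding a new request variant without a corresponding arm is a compile error, which keeps the request and response enums in lockstep.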
async fn get_block_count(
state: CupratedRpcHandler,
request: GetBlockCountRequest,
) -> Result<GetBlockCountResponse, Error> {
todo!()
}
async fn on_get_block_hash(
state: CupratedRpcHandler,
request: OnGetBlockHashRequest,
) -> Result<OnGetBlockHashResponse, Error> {
todo!()
}
async fn submit_block(
state: CupratedRpcHandler,
request: SubmitBlockRequest,
) -> Result<SubmitBlockResponse, Error> {
todo!()
}
async fn generate_blocks(
state: CupratedRpcHandler,
request: GenerateBlocksRequest,
) -> Result<GenerateBlocksResponse, Error> {
todo!()
}
async fn get_last_block_header(
state: CupratedRpcHandler,
request: GetLastBlockHeaderRequest,
) -> Result<GetLastBlockHeaderResponse, Error> {
todo!()
}
async fn get_block_header_by_hash(
state: CupratedRpcHandler,
request: GetBlockHeaderByHashRequest,
) -> Result<GetBlockHeaderByHashResponse, Error> {
todo!()
}
async fn get_block_header_by_height(
state: CupratedRpcHandler,
request: GetBlockHeaderByHeightRequest,
) -> Result<GetBlockHeaderByHeightResponse, Error> {
todo!()
}
async fn get_block_headers_range(
state: CupratedRpcHandler,
request: GetBlockHeadersRangeRequest,
) -> Result<GetBlockHeadersRangeResponse, Error> {
todo!()
}
async fn get_block(
state: CupratedRpcHandler,
request: GetBlockRequest,
) -> Result<GetBlockResponse, Error> {
todo!()
}
async fn get_connections(
state: CupratedRpcHandler,
request: GetConnectionsRequest,
) -> Result<GetConnectionsResponse, Error> {
todo!()
}
async fn get_info(
state: CupratedRpcHandler,
request: GetInfoRequest,
) -> Result<GetInfoResponse, Error> {
todo!()
}
async fn hard_fork_info(
state: CupratedRpcHandler,
request: HardForkInfoRequest,
) -> Result<HardForkInfoResponse, Error> {
todo!()
}
async fn set_bans(
state: CupratedRpcHandler,
request: SetBansRequest,
) -> Result<SetBansResponse, Error> {
todo!()
}
async fn get_bans(
state: CupratedRpcHandler,
request: GetBansRequest,
) -> Result<GetBansResponse, Error> {
todo!()
}
async fn banned(
state: CupratedRpcHandler,
request: BannedRequest,
) -> Result<BannedResponse, Error> {
todo!()
}
async fn flush_transaction_pool(
state: CupratedRpcHandler,
request: FlushTransactionPoolRequest,
) -> Result<FlushTransactionPoolResponse, Error> {
todo!()
}
async fn get_output_histogram(
state: CupratedRpcHandler,
request: GetOutputHistogramRequest,
) -> Result<GetOutputHistogramResponse, Error> {
todo!()
}
async fn get_coinbase_tx_sum(
state: CupratedRpcHandler,
request: GetCoinbaseTxSumRequest,
) -> Result<GetCoinbaseTxSumResponse, Error> {
todo!()
}
async fn get_version(
state: CupratedRpcHandler,
request: GetVersionRequest,
) -> Result<GetVersionResponse, Error> {
todo!()
}
async fn get_fee_estimate(
state: CupratedRpcHandler,
request: GetFeeEstimateRequest,
) -> Result<GetFeeEstimateResponse, Error> {
todo!()
}
async fn get_alternate_chains(
state: CupratedRpcHandler,
request: GetAlternateChainsRequest,
) -> Result<GetAlternateChainsResponse, Error> {
todo!()
}
async fn relay_tx(
state: CupratedRpcHandler,
request: RelayTxRequest,
) -> Result<RelayTxResponse, Error> {
todo!()
}
async fn sync_info(
state: CupratedRpcHandler,
request: SyncInfoRequest,
) -> Result<SyncInfoResponse, Error> {
todo!()
}
async fn get_transaction_pool_backlog(
state: CupratedRpcHandler,
request: GetTransactionPoolBacklogRequest,
) -> Result<GetTransactionPoolBacklogResponse, Error> {
todo!()
}
async fn get_miner_data(
state: CupratedRpcHandler,
request: GetMinerDataRequest,
) -> Result<GetMinerDataResponse, Error> {
todo!()
}
async fn prune_blockchain(
state: CupratedRpcHandler,
request: PruneBlockchainRequest,
) -> Result<PruneBlockchainResponse, Error> {
todo!()
}
async fn calc_pow(
state: CupratedRpcHandler,
request: CalcPowRequest,
) -> Result<CalcPowResponse, Error> {
todo!()
}
async fn flush_cache(
state: CupratedRpcHandler,
request: FlushCacheRequest,
) -> Result<FlushCacheResponse, Error> {
todo!()
}
async fn add_aux_pow(
state: CupratedRpcHandler,
request: AddAuxPowRequest,
) -> Result<AddAuxPowResponse, Error> {
todo!()
}
async fn get_tx_ids_loose(
state: CupratedRpcHandler,
request: GetTxIdsLooseRequest,
) -> Result<GetTxIdsLooseResponse, Error> {
todo!()
}


@@ -1,260 +0,0 @@
use anyhow::Error;
use cuprate_rpc_types::other::{
GetAltBlocksHashesRequest, GetAltBlocksHashesResponse, GetHeightRequest, GetHeightResponse,
GetLimitRequest, GetLimitResponse, GetNetStatsRequest, GetNetStatsResponse, GetOutsRequest,
GetOutsResponse, GetPeerListRequest, GetPeerListResponse, GetPublicNodesRequest,
GetPublicNodesResponse, GetTransactionPoolHashesRequest, GetTransactionPoolHashesResponse,
GetTransactionPoolRequest, GetTransactionPoolResponse, GetTransactionPoolStatsRequest,
GetTransactionPoolStatsResponse, GetTransactionsRequest, GetTransactionsResponse,
InPeersRequest, InPeersResponse, IsKeyImageSpentRequest, IsKeyImageSpentResponse,
MiningStatusRequest, MiningStatusResponse, OtherRequest, OtherResponse, OutPeersRequest,
OutPeersResponse, PopBlocksRequest, PopBlocksResponse, SaveBcRequest, SaveBcResponse,
SendRawTransactionRequest, SendRawTransactionResponse, SetBootstrapDaemonRequest,
SetBootstrapDaemonResponse, SetLimitRequest, SetLimitResponse, SetLogCategoriesRequest,
SetLogCategoriesResponse, SetLogHashRateRequest, SetLogHashRateResponse, SetLogLevelRequest,
SetLogLevelResponse, StartMiningRequest, StartMiningResponse, StopDaemonRequest,
StopDaemonResponse, StopMiningRequest, StopMiningResponse, UpdateRequest, UpdateResponse,
};
use crate::rpc::CupratedRpcHandler;
/// Map a [`OtherRequest`] to the function that will lead to a [`OtherResponse`].
pub(super) async fn map_request(
state: CupratedRpcHandler,
request: OtherRequest,
) -> Result<OtherResponse, Error> {
use OtherRequest as Req;
use OtherResponse as Resp;
Ok(match request {
Req::GetHeight(r) => Resp::GetHeight(get_height(state, r).await?),
Req::GetTransactions(r) => Resp::GetTransactions(get_transactions(state, r).await?),
Req::GetAltBlocksHashes(r) => {
Resp::GetAltBlocksHashes(get_alt_blocks_hashes(state, r).await?)
}
Req::IsKeyImageSpent(r) => Resp::IsKeyImageSpent(is_key_image_spent(state, r).await?),
Req::SendRawTransaction(r) => {
Resp::SendRawTransaction(send_raw_transaction(state, r).await?)
}
Req::StartMining(r) => Resp::StartMining(start_mining(state, r).await?),
Req::StopMining(r) => Resp::StopMining(stop_mining(state, r).await?),
Req::MiningStatus(r) => Resp::MiningStatus(mining_status(state, r).await?),
Req::SaveBc(r) => Resp::SaveBc(save_bc(state, r).await?),
Req::GetPeerList(r) => Resp::GetPeerList(get_peer_list(state, r).await?),
Req::SetLogHashRate(r) => Resp::SetLogHashRate(set_log_hash_rate(state, r).await?),
Req::SetLogLevel(r) => Resp::SetLogLevel(set_log_level(state, r).await?),
Req::SetLogCategories(r) => Resp::SetLogCategories(set_log_categories(state, r).await?),
Req::SetBootstrapDaemon(r) => {
Resp::SetBootstrapDaemon(set_bootstrap_daemon(state, r).await?)
}
Req::GetTransactionPool(r) => {
Resp::GetTransactionPool(get_transaction_pool(state, r).await?)
}
Req::GetTransactionPoolStats(r) => {
Resp::GetTransactionPoolStats(get_transaction_pool_stats(state, r).await?)
}
Req::StopDaemon(r) => Resp::StopDaemon(stop_daemon(state, r).await?),
Req::GetLimit(r) => Resp::GetLimit(get_limit(state, r).await?),
Req::SetLimit(r) => Resp::SetLimit(set_limit(state, r).await?),
Req::OutPeers(r) => Resp::OutPeers(out_peers(state, r).await?),
Req::InPeers(r) => Resp::InPeers(in_peers(state, r).await?),
Req::GetNetStats(r) => Resp::GetNetStats(get_net_stats(state, r).await?),
Req::GetOuts(r) => Resp::GetOuts(get_outs(state, r).await?),
Req::Update(r) => Resp::Update(update(state, r).await?),
Req::PopBlocks(r) => Resp::PopBlocks(pop_blocks(state, r).await?),
Req::GetTransactionPoolHashes(r) => {
Resp::GetTransactionPoolHashes(get_transaction_pool_hashes(state, r).await?)
}
Req::GetPublicNodes(r) => Resp::GetPublicNodes(get_public_nodes(state, r).await?),
})
}
async fn get_height(
state: CupratedRpcHandler,
request: GetHeightRequest,
) -> Result<GetHeightResponse, Error> {
todo!()
}
async fn get_transactions(
state: CupratedRpcHandler,
request: GetTransactionsRequest,
) -> Result<GetTransactionsResponse, Error> {
todo!()
}
async fn get_alt_blocks_hashes(
state: CupratedRpcHandler,
request: GetAltBlocksHashesRequest,
) -> Result<GetAltBlocksHashesResponse, Error> {
todo!()
}
async fn is_key_image_spent(
state: CupratedRpcHandler,
request: IsKeyImageSpentRequest,
) -> Result<IsKeyImageSpentResponse, Error> {
todo!()
}
async fn send_raw_transaction(
state: CupratedRpcHandler,
request: SendRawTransactionRequest,
) -> Result<SendRawTransactionResponse, Error> {
todo!()
}
async fn start_mining(
state: CupratedRpcHandler,
request: StartMiningRequest,
) -> Result<StartMiningResponse, Error> {
todo!()
}
async fn stop_mining(
state: CupratedRpcHandler,
request: StopMiningRequest,
) -> Result<StopMiningResponse, Error> {
todo!()
}
async fn mining_status(
state: CupratedRpcHandler,
request: MiningStatusRequest,
) -> Result<MiningStatusResponse, Error> {
todo!()
}
async fn save_bc(
state: CupratedRpcHandler,
request: SaveBcRequest,
) -> Result<SaveBcResponse, Error> {
todo!()
}
async fn get_peer_list(
state: CupratedRpcHandler,
request: GetPeerListRequest,
) -> Result<GetPeerListResponse, Error> {
todo!()
}
async fn set_log_hash_rate(
state: CupratedRpcHandler,
request: SetLogHashRateRequest,
) -> Result<SetLogHashRateResponse, Error> {
todo!()
}
async fn set_log_level(
state: CupratedRpcHandler,
request: SetLogLevelRequest,
) -> Result<SetLogLevelResponse, Error> {
todo!()
}
async fn set_log_categories(
state: CupratedRpcHandler,
request: SetLogCategoriesRequest,
) -> Result<SetLogCategoriesResponse, Error> {
todo!()
}
async fn set_bootstrap_daemon(
state: CupratedRpcHandler,
request: SetBootstrapDaemonRequest,
) -> Result<SetBootstrapDaemonResponse, Error> {
todo!()
}
async fn get_transaction_pool(
state: CupratedRpcHandler,
request: GetTransactionPoolRequest,
) -> Result<GetTransactionPoolResponse, Error> {
todo!()
}
async fn get_transaction_pool_stats(
state: CupratedRpcHandler,
request: GetTransactionPoolStatsRequest,
) -> Result<GetTransactionPoolStatsResponse, Error> {
todo!()
}
async fn stop_daemon(
state: CupratedRpcHandler,
request: StopDaemonRequest,
) -> Result<StopDaemonResponse, Error> {
todo!()
}
async fn get_limit(
state: CupratedRpcHandler,
request: GetLimitRequest,
) -> Result<GetLimitResponse, Error> {
todo!()
}
async fn set_limit(
state: CupratedRpcHandler,
request: SetLimitRequest,
) -> Result<SetLimitResponse, Error> {
todo!()
}
async fn out_peers(
state: CupratedRpcHandler,
request: OutPeersRequest,
) -> Result<OutPeersResponse, Error> {
todo!()
}
async fn in_peers(
state: CupratedRpcHandler,
request: InPeersRequest,
) -> Result<InPeersResponse, Error> {
todo!()
}
async fn get_net_stats(
state: CupratedRpcHandler,
request: GetNetStatsRequest,
) -> Result<GetNetStatsResponse, Error> {
todo!()
}
async fn get_outs(
state: CupratedRpcHandler,
request: GetOutsRequest,
) -> Result<GetOutsResponse, Error> {
todo!()
}
async fn update(
state: CupratedRpcHandler,
request: UpdateRequest,
) -> Result<UpdateResponse, Error> {
todo!()
}
async fn pop_blocks(
state: CupratedRpcHandler,
request: PopBlocksRequest,
) -> Result<PopBlocksResponse, Error> {
todo!()
}
async fn get_transaction_pool_hashes(
state: CupratedRpcHandler,
request: GetTransactionPoolHashesRequest,
) -> Result<GetTransactionPoolHashesResponse, Error> {
todo!()
}
async fn get_public_nodes(
state: CupratedRpcHandler,
request: GetPublicNodesRequest,
) -> Result<GetPublicNodesResponse, Error> {
todo!()
}


@@ -1,72 +0,0 @@
//! Functions for [`TxpoolReadRequest`].
use std::convert::Infallible;
use anyhow::{anyhow, Error};
use tower::{Service, ServiceExt};
use cuprate_helper::cast::usize_to_u64;
use cuprate_txpool::{
service::{
interface::{TxpoolReadRequest, TxpoolReadResponse},
TxpoolReadHandle,
},
TxEntry,
};
// FIXME: use `anyhow::Error` over `tower::BoxError` in txpool.
/// [`TxpoolReadRequest::Backlog`]
pub(crate) async fn backlog(txpool_read: &mut TxpoolReadHandle) -> Result<Vec<TxEntry>, Error> {
let TxpoolReadResponse::Backlog(tx_entries) = txpool_read
.ready()
.await
.map_err(|e| anyhow!(e))?
.call(TxpoolReadRequest::Backlog)
.await
.map_err(|e| anyhow!(e))?
else {
unreachable!();
};
Ok(tx_entries)
}
/// [`TxpoolReadRequest::Size`]
pub(crate) async fn size(
txpool_read: &mut TxpoolReadHandle,
include_sensitive_txs: bool,
) -> Result<u64, Error> {
let TxpoolReadResponse::Size(size) = txpool_read
.ready()
.await
.map_err(|e| anyhow!(e))?
.call(TxpoolReadRequest::Size {
include_sensitive_txs,
})
.await
.map_err(|e| anyhow!(e))?
else {
unreachable!();
};
Ok(usize_to_u64(size))
}
/// TODO
pub(crate) async fn flush(
txpool_manager: &mut Infallible,
tx_hashes: Vec<[u8; 32]>,
) -> Result<(), Error> {
todo!();
Ok(())
}
/// TODO
pub(crate) async fn relay(
txpool_manager: &mut Infallible,
tx_hashes: Vec<[u8; 32]>,
) -> Result<(), Error> {
todo!();
Ok(())
}
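The `backlog` and `size` helpers above share one shape: call a service, then destructure the single response variant that request can produce, marking everything else `unreachable!()` via `let`-`else`. A dependency-free sketch of that pattern (`Response`, `call_backlog`, and `backlog` here are hypothetical stand-ins for `TxpoolReadResponse` and the tower service calls):

```rust
// Hypothetical stand-in for a service response enum such as
// `TxpoolReadResponse`.
#[allow(dead_code)]
#[derive(Debug)]
enum Response {
    Backlog(Vec<u64>),
    Size(usize),
}

// Stand-in for the `ready().await?.call(..).await?` service round-trip.
fn call_backlog() -> Result<Response, String> {
    Ok(Response::Backlog(vec![1, 2, 3]))
}

fn backlog() -> Result<Vec<u64>, String> {
    // A `Backlog` request is contractually answered with
    // `Response::Backlog`; any other variant is a logic bug, so the
    // `else` arm is `unreachable!()`.
    let Response::Backlog(entries) = call_backlog()? else {
        unreachable!();
    };
    Ok(entries)
}

fn main() {
    assert_eq!(backlog().unwrap(), vec![1, 2, 3]);
}
```

The `let`-`else` keeps the happy path unindented while still documenting, at the call site, which response variant the request is expected to produce.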


@@ -1,4 +1,4 @@
//! Dummy implementation of [`RpcHandler`].
//! `cuprated`'s implementation of [`RpcHandler`].
use std::task::{Context, Poll};
@@ -16,14 +16,13 @@ use cuprate_rpc_types::{
json::{JsonRpcRequest, JsonRpcResponse},
other::{OtherRequest, OtherResponse},
};
use cuprate_txpool::service::{TxpoolReadHandle, TxpoolWriteHandle};
use cuprate_types::{AddAuxPow, AuxPow, HardFork};
use cuprate_txpool::service::TxpoolReadHandle;
use cuprate_types::BlockTemplate;
use crate::rpc::{bin, json, other};
use crate::rpc::handlers;
/// TODO: use real type when public.
#[derive(Clone)]
#[expect(clippy::large_enum_variant)]
pub enum BlockchainManagerRequest {
/// Pop blocks off the top of the blockchain.
///
@@ -37,7 +36,13 @@ pub enum BlockchainManagerRequest {
Pruned,
/// Relay a block to the network.
RelayBlock(Block),
RelayBlock(
/// This is [`Box`]ed due to `clippy::large_enum_variant`.
Box<Block>,
),
/// Sync/flush the blockchain database to disk.
Sync,
/// Is the blockchain in the middle of syncing?
///
@@ -65,7 +70,7 @@ pub enum BlockchainManagerRequest {
/// Number of the blocks to be generated.
amount_of_blocks: u64,
/// The previous block's hash.
prev_block: [u8; 32],
prev_block: Option<[u8; 32]>,
/// The starting value for the nonce.
starting_nonce: u32,
/// The address that will receive the coinbase reward.
@@ -83,6 +88,16 @@ pub enum BlockchainManagerRequest {
//
/// Get the next [`PruningSeed`] needed for a pruned sync.
NextNeededPruningSeed,
/// Create a block template.
CreateBlockTemplate {
prev_block: [u8; 32],
account_public_address: String,
extra_nonce: Vec<u8>,
},
/// Safely shutdown `cuprated`.
Stop,
}
/// TODO: use real type when public.
@@ -93,6 +108,7 @@ pub enum BlockchainManagerResponse {
/// Response to:
/// - [`BlockchainManagerRequest::Prune`]
/// - [`BlockchainManagerRequest::RelayBlock`]
/// - [`BlockchainManagerRequest::Sync`]
Ok,
/// Response to [`BlockchainManagerRequest::PopBlocks`]
@@ -124,10 +140,11 @@ pub enum BlockchainManagerResponse {
height: usize,
},
// /// Response to [`BlockchainManagerRequest::Spans`].
// Spans(Vec<Span<Z::Addr>>),
/// Response to [`BlockchainManagerRequest::NextNeededPruningSeed`].
NextNeededPruningSeed(PruningSeed),
/// Response to [`BlockchainManagerRequest::CreateBlockTemplate`].
CreateBlockTemplate(Box<BlockTemplate>),
}
/// TODO: use real type when public.
@@ -139,7 +156,7 @@ pub type BlockchainManagerHandle = cuprate_database_service::DatabaseReadService
/// TODO
#[derive(Clone)]
pub struct CupratedRpcHandler {
/// Should this RPC server be [restricted](RpcHandler::restricted)?
/// Should this RPC server be [restricted](RpcHandler::is_restricted)?
///
/// This is not `pub` on purpose, as it should not be mutated after [`Self::new`].
restricted: bool,
@@ -182,7 +199,7 @@ impl CupratedRpcHandler {
}
impl RpcHandler for CupratedRpcHandler {
fn restricted(&self) -> bool {
fn is_restricted(&self) -> bool {
self.restricted
}
}
@@ -198,7 +215,7 @@ impl Service<JsonRpcRequest> for CupratedRpcHandler {
fn call(&mut self, request: JsonRpcRequest) -> Self::Future {
let state = self.clone();
Box::pin(json::map_request(state, request))
Box::pin(handlers::json_rpc::map_request(state, request))
}
}
@@ -213,7 +230,7 @@ impl Service<BinRequest> for CupratedRpcHandler {
fn call(&mut self, request: BinRequest) -> Self::Future {
let state = self.clone();
Box::pin(bin::map_request(state, request))
Box::pin(handlers::bin::map_request(state, request))
}
}
@@ -228,6 +245,6 @@ impl Service<OtherRequest> for CupratedRpcHandler {
fn call(&mut self, request: OtherRequest) -> Self::Future {
let state = self.clone();
Box::pin(other::map_request(state, request))
Box::pin(handlers::other_json::map_request(state, request))
}
}


@@ -1,4 +1,4 @@
//! Convenience functions for requests/responses.
//! Convenience functions for Cuprate's various [`tower::Service`] requests/responses.
//!
//! This module implements many methods for
//! [`CupratedRpcHandler`](crate::rpc::CupratedRpcHandler)
@@ -12,8 +12,8 @@
//! the [`blockchain`] modules contains methods for the
//! blockchain database [`tower::Service`] API.
mod address_book;
mod blockchain;
mod blockchain_context;
mod blockchain_manager;
mod txpool;
pub(super) mod address_book;
pub(super) mod blockchain;
pub(super) mod blockchain_context;
pub(super) mod blockchain_manager;
pub(super) mod txpool;


@@ -1,25 +1,23 @@
//! Functions for TODO: doc enum message.
//! Functions to send [`AddressBookRequest`]s.
use std::convert::Infallible;
use std::net::SocketAddrV4;
use anyhow::{anyhow, Error};
use tower::ServiceExt;
use cuprate_helper::cast::usize_to_u64;
use cuprate_helper::{cast::usize_to_u64, map::u32_from_ipv4};
use cuprate_p2p_core::{
services::{AddressBookRequest, AddressBookResponse},
services::{AddressBookRequest, AddressBookResponse, ZoneSpecificPeerListEntryBase},
types::{BanState, ConnectionId},
AddressBook, NetworkZone,
};
use cuprate_pruning::PruningSeed;
use cuprate_rpc_types::misc::{ConnectionInfo, Span};
use crate::rpc::constants::FIELD_NOT_SUPPORTED;
use cuprate_rpc_types::misc::ConnectionInfo;
use cuprate_types::rpc::Peer;
// FIXME: use `anyhow::Error` over `tower::BoxError` in address book.
/// [`AddressBookRequest::PeerlistSize`]
pub(crate) async fn peerlist_size<Z: NetworkZone>(
pub async fn peerlist_size<Z: NetworkZone>(
address_book: &mut impl AddressBook<Z>,
) -> Result<(u64, u64), Error> {
let AddressBookResponse::PeerlistSize { white, grey } = address_book
@@ -37,7 +35,7 @@ pub(crate) async fn peerlist_size<Z: NetworkZone>(
}
/// [`AddressBookRequest::ConnectionInfo`]
pub(crate) async fn connection_info<Z: NetworkZone>(
pub async fn connection_info<Z: NetworkZone>(
address_book: &mut impl AddressBook<Z>,
) -> Result<Vec<ConnectionInfo>, Error> {
let AddressBookResponse::ConnectionInfo(vec) = address_book
@@ -94,7 +92,7 @@ pub(crate) async fn connection_info<Z: NetworkZone>(
}
/// [`AddressBookRequest::ConnectionCount`]
pub(crate) async fn connection_count<Z: NetworkZone>(
pub async fn connection_count<Z: NetworkZone>(
address_book: &mut impl AddressBook<Z>,
) -> Result<(u64, u64), Error> {
let AddressBookResponse::ConnectionCount { incoming, outgoing } = address_book
@@ -112,7 +110,7 @@ pub(crate) async fn connection_count<Z: NetworkZone>(
}
/// [`AddressBookRequest::SetBan`]
pub(crate) async fn set_ban<Z: NetworkZone>(
pub async fn set_ban<Z: NetworkZone>(
address_book: &mut impl AddressBook<Z>,
set_ban: cuprate_p2p_core::types::SetBan<Z::Addr>,
) -> Result<(), Error> {
@@ -131,7 +129,7 @@ pub(crate) async fn set_ban<Z: NetworkZone>(
}
/// [`AddressBookRequest::GetBan`]
pub(crate) async fn get_ban<Z: NetworkZone>(
pub async fn get_ban<Z: NetworkZone>(
address_book: &mut impl AddressBook<Z>,
peer: Z::Addr,
) -> Result<Option<std::time::Instant>, Error> {
@ -150,7 +148,7 @@ pub(crate) async fn get_ban<Z: NetworkZone>(
}
/// [`AddressBookRequest::GetBans`]
pub(crate) async fn get_bans<Z: NetworkZone>(
pub async fn get_bans<Z: NetworkZone>(
address_book: &mut impl AddressBook<Z>,
) -> Result<Vec<BanState<Z::Addr>>, Error> {
let AddressBookResponse::GetBans(bans) = address_book
@ -166,3 +164,62 @@ pub(crate) async fn get_bans<Z: NetworkZone>(
Ok(bans)
}
/// [`AddressBookRequest::Peerlist`]
pub async fn peerlist<Z: NetworkZone>(
address_book: &mut impl AddressBook<Z>,
) -> Result<(Vec<Peer>, Vec<Peer>), Error> {
let AddressBookResponse::Peerlist(peerlist) = address_book
.ready()
.await
.map_err(|e| anyhow!(e))?
.call(AddressBookRequest::Peerlist)
.await
.map_err(|e| anyhow!(e))?
else {
unreachable!();
};
fn map<Z: NetworkZone>(peers: Vec<ZoneSpecificPeerListEntryBase<Z::Addr>>) -> Vec<Peer> {
peers
.into_iter()
.map(|peer| {
let ZoneSpecificPeerListEntryBase {
adr,
id,
last_seen,
pruning_seed,
rpc_port,
rpc_credits_per_hash,
} = peer;
let host = adr.to_string();
let (ip, port) = if let Ok(socket_addr) = host.parse::<SocketAddrV4>() {
(u32_from_ipv4(*socket_addr.ip()), socket_addr.port())
} else {
(0, 0)
};
let last_seen = last_seen.try_into().unwrap_or(0);
let pruning_seed = pruning_seed.compress();
Peer {
id,
host,
ip,
port,
rpc_port,
rpc_credits_per_hash,
last_seen,
pruning_seed,
}
})
.collect()
}
let white = map::<Z>(peerlist.white);
let grey = map::<Z>(peerlist.grey);
Ok((white, grey))
}
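The `ip`/`port` pair above is only populated for IPv4 addresses; anything else (e.g. onion addresses) falls back to `(0, 0)`. A minimal sketch of that mapping, using a hypothetical stand-in for `cuprate_helper`'s IPv4-to-`u32` conversion (the real helper's byte order is not shown in this diff; big-endian is assumed here):

```rust
use std::net::{Ipv4Addr, SocketAddrV4};

/// Hypothetical stand-in for the IPv4 -> u32 conversion;
/// this sketch assumes big-endian (network) byte order.
fn u32_from_ipv4_be(ip: Ipv4Addr) -> u32 {
    u32::from_be_bytes(ip.octets())
}

/// Mirrors the fallback in `peerlist`: non-IPv4 hosts map to `(0, 0)`.
fn ip_port(host: &str) -> (u32, u16) {
    if let Ok(socket_addr) = host.parse::<SocketAddrV4>() {
        (u32_from_ipv4_be(*socket_addr.ip()), socket_addr.port())
    } else {
        (0, 0)
    }
}
```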


@ -1,7 +1,7 @@
//! Functions for [`BlockchainReadRequest`].
//! Functions to send [`BlockchainReadRequest`]s.
use std::{
collections::{BTreeMap, HashMap, HashSet},
collections::{BTreeSet, HashMap, HashSet},
ops::Range,
};
@ -10,17 +10,22 @@ use indexmap::{IndexMap, IndexSet};
use monero_serai::block::Block;
use tower::{Service, ServiceExt};
use cuprate_blockchain::{service::BlockchainReadHandle, types::AltChainInfo};
use cuprate_blockchain::service::BlockchainReadHandle;
use cuprate_helper::cast::{u64_to_usize, usize_to_u64};
use cuprate_rpc_types::misc::GetOutputsOut;
use cuprate_types::{
blockchain::{BlockchainReadRequest, BlockchainResponse},
output_cache::OutputCache,
Chain, ChainInfo, CoinbaseTxSum, ExtendedBlockHeader, HardFork, MinerData,
OutputHistogramEntry, OutputHistogramInput, OutputOnChain,
rpc::{
ChainInfo, CoinbaseTxSum, KeyImageSpentStatus, OutputDistributionData,
OutputHistogramEntry, OutputHistogramInput,
},
BlockCompleteEntry, Chain, ExtendedBlockHeader, OutputDistributionInput, OutputOnChain,
TxInBlockchain,
};
/// [`BlockchainReadRequest::Block`].
pub(crate) async fn block(
pub async fn block(
blockchain_read: &mut BlockchainReadHandle,
height: u64,
) -> Result<Block, Error> {
@ -39,7 +44,7 @@ pub(crate) async fn block(
}
/// [`BlockchainReadRequest::BlockByHash`].
pub(crate) async fn block_by_hash(
pub async fn block_by_hash(
blockchain_read: &mut BlockchainReadHandle,
hash: [u8; 32],
) -> Result<Block, Error> {
@ -56,7 +61,7 @@ pub(crate) async fn block_by_hash(
}
/// [`BlockchainReadRequest::BlockExtendedHeader`].
pub(crate) async fn block_extended_header(
pub async fn block_extended_header(
blockchain_read: &mut BlockchainReadHandle,
height: u64,
) -> Result<ExtendedBlockHeader, Error> {
@ -75,7 +80,7 @@ pub(crate) async fn block_extended_header(
}
/// [`BlockchainReadRequest::BlockHash`].
pub(crate) async fn block_hash(
pub async fn block_hash(
blockchain_read: &mut BlockchainReadHandle,
height: u64,
chain: Chain,
@ -96,7 +101,7 @@ pub(crate) async fn block_hash(
}
/// [`BlockchainReadRequest::FindBlock`].
pub(crate) async fn find_block(
pub async fn find_block(
blockchain_read: &mut BlockchainReadHandle,
block_hash: [u8; 32],
) -> Result<Option<(Chain, usize)>, Error> {
@ -112,8 +117,39 @@ pub(crate) async fn find_block(
Ok(option)
}
/// [`BlockchainReadRequest::NextChainEntry`].
///
/// Returns only the:
/// - block IDs
/// - start height
/// - current chain height
pub async fn next_chain_entry(
blockchain_read: &mut BlockchainReadHandle,
block_hashes: Vec<[u8; 32]>,
start_height: u64,
) -> Result<(Vec<[u8; 32]>, Option<usize>, usize), Error> {
let BlockchainResponse::NextChainEntry {
block_ids,
start_height,
chain_height,
..
} = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::NextChainEntry(
block_hashes,
u64_to_usize(start_height),
))
.await?
else {
unreachable!();
};
Ok((block_ids, start_height, chain_height))
}
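`next_chain_entry` converts the RPC-facing `u64` height into the database layer's `usize` (and other handlers convert back with `usize_to_u64`). Hypothetical stand-ins for those `cuprate_helper::cast` functions, which are lossless in both directions on 64-bit targets:

```rust
/// Sketch of a u64 -> usize cast; panics only on targets where
/// `usize` is narrower than 64 bits and the value does not fit.
fn u64_to_usize(x: u64) -> usize {
    usize::try_from(x).expect("u64 does not fit in usize on this target")
}

/// Sketch of the reverse cast; always lossless since `usize`
/// is at most 64 bits on supported targets.
fn usize_to_u64(x: usize) -> u64 {
    x as u64
}
```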
/// [`BlockchainReadRequest::FilterUnknownHashes`].
pub(crate) async fn filter_unknown_hashes(
pub async fn filter_unknown_hashes(
blockchain_read: &mut BlockchainReadHandle,
block_hashes: HashSet<[u8; 32]>,
) -> Result<HashSet<[u8; 32]>, Error> {
@ -130,7 +166,7 @@ pub(crate) async fn filter_unknown_hashes(
}
/// [`BlockchainReadRequest::BlockExtendedHeaderInRange`]
pub(crate) async fn block_extended_header_in_range(
pub async fn block_extended_header_in_range(
blockchain_read: &mut BlockchainReadHandle,
range: Range<usize>,
chain: Chain,
@ -150,7 +186,7 @@ pub(crate) async fn block_extended_header_in_range(
}
/// [`BlockchainReadRequest::ChainHeight`].
pub(crate) async fn chain_height(
pub async fn chain_height(
blockchain_read: &mut BlockchainReadHandle,
) -> Result<(u64, [u8; 32]), Error> {
let BlockchainResponse::ChainHeight(height, hash) = blockchain_read
@ -166,7 +202,7 @@ pub(crate) async fn chain_height(
}
/// [`BlockchainReadRequest::GeneratedCoins`].
pub(crate) async fn generated_coins(
pub async fn generated_coins(
blockchain_read: &mut BlockchainReadHandle,
block_height: u64,
) -> Result<u64, Error> {
@ -185,14 +221,38 @@ pub(crate) async fn generated_coins(
}
/// [`BlockchainReadRequest::Outputs`]
pub(crate) async fn outputs(
pub async fn outputs(
blockchain_read: &mut BlockchainReadHandle,
outputs: IndexMap<u64, IndexSet<u64>>,
get_txid: bool,
) -> Result<OutputCache, Error> {
let BlockchainResponse::Outputs(outputs) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::Outputs(outputs))
.call(BlockchainReadRequest::Outputs { outputs, get_txid })
.await?
else {
unreachable!();
};
Ok(outputs)
}
/// [`BlockchainReadRequest::OutputsVec`]
pub async fn outputs_vec(
blockchain_read: &mut BlockchainReadHandle,
outputs: Vec<GetOutputsOut>,
get_txid: bool,
) -> Result<Vec<(u64, Vec<(u64, OutputOnChain)>)>, Error> {
let outputs = outputs
.into_iter()
.map(|output| (output.amount, output.index))
.collect();
let BlockchainResponse::OutputsVec(outputs) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::OutputsVec { outputs, get_txid })
.await?
else {
unreachable!();
@ -202,7 +262,7 @@ pub(crate) async fn outputs(
}
/// [`BlockchainReadRequest::NumberOutputsWithAmount`]
pub(crate) async fn number_outputs_with_amount(
pub async fn number_outputs_with_amount(
blockchain_read: &mut BlockchainReadHandle,
output_amounts: Vec<u64>,
) -> Result<HashMap<u64, usize>, Error> {
@ -221,11 +281,11 @@ pub(crate) async fn number_outputs_with_amount(
}
/// [`BlockchainReadRequest::KeyImagesSpent`]
pub(crate) async fn key_images_spent(
pub async fn key_images_spent(
blockchain_read: &mut BlockchainReadHandle,
key_images: HashSet<[u8; 32]>,
) -> Result<bool, Error> {
let BlockchainResponse::KeyImagesSpent(is_spent) = blockchain_read
let BlockchainResponse::KeyImagesSpent(status) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::KeyImagesSpent(key_images))
@ -234,11 +294,28 @@ pub(crate) async fn key_images_spent(
unreachable!();
};
Ok(is_spent)
Ok(status)
}
/// [`BlockchainReadRequest::KeyImagesSpentVec`]
pub async fn key_images_spent_vec(
blockchain_read: &mut BlockchainReadHandle,
key_images: Vec<[u8; 32]>,
) -> Result<Vec<bool>, Error> {
let BlockchainResponse::KeyImagesSpentVec(status) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::KeyImagesSpentVec(key_images))
.await?
else {
unreachable!();
};
Ok(status)
}
/// [`BlockchainReadRequest::CompactChainHistory`]
pub(crate) async fn compact_chain_history(
pub async fn compact_chain_history(
blockchain_read: &mut BlockchainReadHandle,
) -> Result<(Vec<[u8; 32]>, u128), Error> {
let BlockchainResponse::CompactChainHistory {
@ -257,7 +334,7 @@ pub(crate) async fn compact_chain_history(
}
/// [`BlockchainReadRequest::FindFirstUnknown`]
pub(crate) async fn find_first_unknown(
pub async fn find_first_unknown(
blockchain_read: &mut BlockchainReadHandle,
hashes: Vec<[u8; 32]>,
) -> Result<Option<(usize, u64)>, Error> {
@ -274,9 +351,7 @@ pub(crate) async fn find_first_unknown(
}
/// [`BlockchainReadRequest::TotalTxCount`]
pub(crate) async fn total_tx_count(
blockchain_read: &mut BlockchainReadHandle,
) -> Result<u64, Error> {
pub async fn total_tx_count(blockchain_read: &mut BlockchainReadHandle) -> Result<u64, Error> {
let BlockchainResponse::TotalTxCount(tx_count) = blockchain_read
.ready()
.await?
@ -290,7 +365,7 @@ pub(crate) async fn total_tx_count(
}
/// [`BlockchainReadRequest::DatabaseSize`]
pub(crate) async fn database_size(
pub async fn database_size(
blockchain_read: &mut BlockchainReadHandle,
) -> Result<(u64, u64), Error> {
let BlockchainResponse::DatabaseSize {
@ -308,8 +383,25 @@ pub(crate) async fn database_size(
Ok((database_size, free_space))
}
/// [`BlockchainReadRequest::OutputDistribution`]
pub async fn output_distribution(
blockchain_read: &mut BlockchainReadHandle,
input: OutputDistributionInput,
) -> Result<Vec<OutputDistributionData>, Error> {
let BlockchainResponse::OutputDistribution(data) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::OutputDistribution(input))
.await?
else {
unreachable!();
};
Ok(data)
}
/// [`BlockchainReadRequest::OutputHistogram`]
pub(crate) async fn output_histogram(
pub async fn output_histogram(
blockchain_read: &mut BlockchainReadHandle,
input: OutputHistogramInput,
) -> Result<Vec<OutputHistogramEntry>, Error> {
@ -326,7 +418,7 @@ pub(crate) async fn output_histogram(
}
/// [`BlockchainReadRequest::CoinbaseTxSum`]
pub(crate) async fn coinbase_tx_sum(
pub async fn coinbase_tx_sum(
blockchain_read: &mut BlockchainReadHandle,
height: u64,
count: u64,
@ -347,7 +439,7 @@ pub(crate) async fn coinbase_tx_sum(
}
/// [`BlockchainReadRequest::AltChains`]
pub(crate) async fn alt_chains(
pub async fn alt_chains(
blockchain_read: &mut BlockchainReadHandle,
) -> Result<Vec<ChainInfo>, Error> {
let BlockchainResponse::AltChains(vec) = blockchain_read
@ -363,9 +455,7 @@ pub(crate) async fn alt_chains(
}
/// [`BlockchainReadRequest::AltChainCount`]
pub(crate) async fn alt_chain_count(
blockchain_read: &mut BlockchainReadHandle,
) -> Result<u64, Error> {
pub async fn alt_chain_count(blockchain_read: &mut BlockchainReadHandle) -> Result<u64, Error> {
let BlockchainResponse::AltChainCount(count) = blockchain_read
.ready()
.await?
@ -377,3 +467,91 @@ pub(crate) async fn alt_chain_count(
Ok(usize_to_u64(count))
}
/// [`BlockchainReadRequest::Transactions`].
pub async fn transactions(
blockchain_read: &mut BlockchainReadHandle,
tx_hashes: HashSet<[u8; 32]>,
) -> Result<(Vec<TxInBlockchain>, Vec<[u8; 32]>), Error> {
let BlockchainResponse::Transactions { txs, missed_txs } = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::Transactions { tx_hashes })
.await?
else {
unreachable!();
};
Ok((txs, missed_txs))
}
/// [`BlockchainReadRequest::TotalRctOutputs`].
pub async fn total_rct_outputs(blockchain_read: &mut BlockchainReadHandle) -> Result<u64, Error> {
let BlockchainResponse::TotalRctOutputs(n) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::TotalRctOutputs)
.await?
else {
unreachable!();
};
Ok(n)
}
/// [`BlockchainReadRequest::BlockCompleteEntries`].
pub async fn block_complete_entries(
blockchain_read: &mut BlockchainReadHandle,
block_hashes: Vec<[u8; 32]>,
) -> Result<(Vec<BlockCompleteEntry>, Vec<[u8; 32]>, usize), Error> {
let BlockchainResponse::BlockCompleteEntries {
blocks,
missing_hashes,
blockchain_height,
} = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::BlockCompleteEntries(block_hashes))
.await?
else {
unreachable!();
};
Ok((blocks, missing_hashes, blockchain_height))
}
/// [`BlockchainReadRequest::BlockCompleteEntriesByHeight`].
pub async fn block_complete_entries_by_height(
blockchain_read: &mut BlockchainReadHandle,
block_heights: Vec<u64>,
) -> Result<Vec<BlockCompleteEntry>, Error> {
let BlockchainResponse::BlockCompleteEntriesByHeight(blocks) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::BlockCompleteEntriesByHeight(
block_heights.into_iter().map(u64_to_usize).collect(),
))
.await?
else {
unreachable!();
};
Ok(blocks)
}
/// [`BlockchainReadRequest::TxOutputIndexes`].
pub async fn tx_output_indexes(
blockchain_read: &mut BlockchainReadHandle,
tx_hash: [u8; 32],
) -> Result<Vec<u64>, Error> {
let BlockchainResponse::TxOutputIndexes(o_indexes) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::TxOutputIndexes { tx_hash })
.await?
else {
unreachable!();
};
Ok(o_indexes)
}


@ -1,6 +1,4 @@
//! Functions for [`BlockChainContextRequest`] and [`BlockChainContextResponse`].
use std::convert::Infallible;
//! Functions to send [`BlockChainContextRequest`]s.
use anyhow::{anyhow, Error};
use monero_serai::block::Block;
@ -10,20 +8,13 @@ use cuprate_consensus_context::{
BlockChainContextRequest, BlockChainContextResponse, BlockchainContext,
BlockchainContextService,
};
use cuprate_helper::cast::u64_to_usize;
use cuprate_types::{FeeEstimate, HardFork, HardForkInfo};
use cuprate_types::{
rpc::{FeeEstimate, HardForkInfo},
HardFork,
};
// FIXME: use `anyhow::Error` over `tower::BoxError` in blockchain context.
pub(crate) async fn context(
blockchain_context: &mut BlockchainContextService,
) -> Result<BlockchainContext, Error> {
// TODO: Remove this whole function just call directly in all usages.
let context = blockchain_context.blockchain_context().clone();
Ok(context)
}
/// [`BlockChainContextRequest::HardForkInfo`].
pub(crate) async fn hard_fork_info(
blockchain_context: &mut BlockchainContextService,
@ -66,17 +57,22 @@ pub(crate) async fn fee_estimate(
pub(crate) async fn calculate_pow(
blockchain_context: &mut BlockchainContextService,
hardfork: HardFork,
height: u64,
block: Box<Block>,
block: Block,
seed_hash: [u8; 32],
) -> Result<[u8; 32], Error> {
let Some(height) = block.number() else {
return Err(anyhow!("Block is missing height"));
};
let block = Box::new(block);
let BlockChainContextResponse::CalculatePow(hash) = blockchain_context
.ready()
.await
.map_err(|e| anyhow!(e))?
.call(BlockChainContextRequest::CalculatePow {
hardfork,
height: u64_to_usize(height),
height,
block,
seed_hash,
})
@ -88,3 +84,22 @@ pub(crate) async fn calculate_pow(
Ok(hash)
}
/// [`BlockChainContextRequest::BatchGetDifficulties`]
pub async fn batch_get_difficulties(
blockchain_context: &mut BlockchainContextService,
difficulties: Vec<(u64, HardFork)>,
) -> Result<Vec<u128>, Error> {
let BlockChainContextResponse::BatchDifficulties(resp) = blockchain_context
.ready()
.await
.map_err(|e| anyhow!(e))?
.call(BlockChainContextRequest::BatchGetDifficulties(difficulties))
.await
.map_err(|e| anyhow!(e))?
else {
unreachable!();
};
Ok(resp)
}


@ -1,4 +1,4 @@
//! Functions for [`BlockchainManagerRequest`] & [`BlockchainManagerResponse`].
//! Functions to send [`BlockchainManagerRequest`]s.
use anyhow::Error;
use monero_serai::block::Block;
@ -8,15 +8,14 @@ use cuprate_helper::cast::{u64_to_usize, usize_to_u64};
use cuprate_p2p_core::{types::ConnectionId, NetworkZone};
use cuprate_pruning::PruningSeed;
use cuprate_rpc_types::misc::Span;
use cuprate_types::{AddAuxPow, AuxPow, HardFork};
use cuprate_types::BlockTemplate;
use crate::rpc::{
constants::FIELD_NOT_SUPPORTED,
handler::{BlockchainManagerHandle, BlockchainManagerRequest, BlockchainManagerResponse},
use crate::rpc::rpc_handler::{
BlockchainManagerHandle, BlockchainManagerRequest, BlockchainManagerResponse,
};
/// [`BlockchainManagerRequest::PopBlocks`]
pub(crate) async fn pop_blocks(
pub async fn pop_blocks(
blockchain_manager: &mut BlockchainManagerHandle,
amount: u64,
) -> Result<u64, Error> {
@ -35,9 +34,7 @@ pub(crate) async fn pop_blocks(
}
/// [`BlockchainManagerRequest::Prune`]
pub(crate) async fn prune(
blockchain_manager: &mut BlockchainManagerHandle,
) -> Result<PruningSeed, Error> {
pub async fn prune(blockchain_manager: &mut BlockchainManagerHandle) -> Result<PruningSeed, Error> {
let BlockchainManagerResponse::Prune(seed) = blockchain_manager
.ready()
.await?
@ -51,9 +48,7 @@ pub(crate) async fn prune(
}
/// [`BlockchainManagerRequest::Pruned`]
pub(crate) async fn pruned(
blockchain_manager: &mut BlockchainManagerHandle,
) -> Result<bool, Error> {
pub async fn pruned(blockchain_manager: &mut BlockchainManagerHandle) -> Result<bool, Error> {
let BlockchainManagerResponse::Pruned(pruned) = blockchain_manager
.ready()
.await?
@ -67,9 +62,9 @@ pub(crate) async fn pruned(
}
/// [`BlockchainManagerRequest::RelayBlock`]
pub(crate) async fn relay_block(
pub async fn relay_block(
blockchain_manager: &mut BlockchainManagerHandle,
block: Block,
block: Box<Block>,
) -> Result<(), Error> {
let BlockchainManagerResponse::Ok = blockchain_manager
.ready()
@ -84,9 +79,7 @@ pub(crate) async fn relay_block(
}
/// [`BlockchainManagerRequest::Syncing`]
pub(crate) async fn syncing(
blockchain_manager: &mut BlockchainManagerHandle,
) -> Result<bool, Error> {
pub async fn syncing(blockchain_manager: &mut BlockchainManagerHandle) -> Result<bool, Error> {
let BlockchainManagerResponse::Syncing(syncing) = blockchain_manager
.ready()
.await?
@ -100,9 +93,7 @@ pub(crate) async fn syncing(
}
/// [`BlockchainManagerRequest::Synced`]
pub(crate) async fn synced(
blockchain_manager: &mut BlockchainManagerHandle,
) -> Result<bool, Error> {
pub async fn synced(blockchain_manager: &mut BlockchainManagerHandle) -> Result<bool, Error> {
let BlockchainManagerResponse::Synced(syncing) = blockchain_manager
.ready()
.await?
@ -116,7 +107,7 @@ pub(crate) async fn synced(
}
/// [`BlockchainManagerRequest::Target`]
pub(crate) async fn target(
pub async fn target(
blockchain_manager: &mut BlockchainManagerHandle,
) -> Result<std::time::Duration, Error> {
let BlockchainManagerResponse::Target(target) = blockchain_manager
@ -132,9 +123,7 @@ pub(crate) async fn target(
}
/// [`BlockchainManagerRequest::TargetHeight`]
pub(crate) async fn target_height(
blockchain_manager: &mut BlockchainManagerHandle,
) -> Result<u64, Error> {
pub async fn target_height(blockchain_manager: &mut BlockchainManagerHandle) -> Result<u64, Error> {
let BlockchainManagerResponse::TargetHeight { height } = blockchain_manager
.ready()
.await?
@ -148,10 +137,10 @@ pub(crate) async fn target_height(
}
/// [`BlockchainManagerRequest::GenerateBlocks`]
pub(crate) async fn generate_blocks(
pub async fn generate_blocks(
blockchain_manager: &mut BlockchainManagerHandle,
amount_of_blocks: u64,
prev_block: [u8; 32],
prev_block: Option<[u8; 32]>,
starting_nonce: u32,
wallet_address: String,
) -> Result<(Vec<[u8; 32]>, u64), Error> {
@ -173,7 +162,7 @@ pub(crate) async fn generate_blocks(
}
// [`BlockchainManagerRequest::Spans`]
pub(crate) async fn spans<Z: NetworkZone>(
pub async fn spans<Z: NetworkZone>(
blockchain_manager: &mut BlockchainManagerHandle,
) -> Result<Vec<Span>, Error> {
// let BlockchainManagerResponse::Spans(vec) = blockchain_manager
@ -185,7 +174,8 @@ pub(crate) async fn spans<Z: NetworkZone>(
// unreachable!();
// };
let vec: Vec<cuprate_p2p_core::types::Span<Z::Addr>> = todo!();
let vec: Vec<cuprate_p2p_core::types::Span<Z::Addr>> =
todo!("waiting on blockchain downloader/syncer: <https://github.com/Cuprate/cuprate/pull/320#discussion_r1811089758>");
// FIXME: impl this map somewhere instead of inline.
let vec = vec
@ -205,7 +195,7 @@ pub(crate) async fn spans<Z: NetworkZone>(
}
/// [`BlockchainManagerRequest::NextNeededPruningSeed`]
pub(crate) async fn next_needed_pruning_seed(
pub async fn next_needed_pruning_seed(
blockchain_manager: &mut BlockchainManagerHandle,
) -> Result<PruningSeed, Error> {
let BlockchainManagerResponse::NextNeededPruningSeed(seed) = blockchain_manager
@ -219,3 +209,54 @@ pub(crate) async fn next_needed_pruning_seed(
Ok(seed)
}
/// [`BlockchainManagerRequest::CreateBlockTemplate`]
pub async fn create_block_template(
blockchain_manager: &mut BlockchainManagerHandle,
prev_block: [u8; 32],
account_public_address: String,
extra_nonce: Vec<u8>,
) -> Result<Box<BlockTemplate>, Error> {
let BlockchainManagerResponse::CreateBlockTemplate(block_template) = blockchain_manager
.ready()
.await?
.call(BlockchainManagerRequest::CreateBlockTemplate {
prev_block,
account_public_address,
extra_nonce,
})
.await?
else {
unreachable!();
};
Ok(block_template)
}
/// [`BlockchainManagerRequest::Sync`]
pub async fn sync(blockchain_manager: &mut BlockchainManagerHandle) -> Result<(), Error> {
let BlockchainManagerResponse::Ok = blockchain_manager
.ready()
.await?
.call(BlockchainManagerRequest::Sync)
.await?
else {
unreachable!();
};
Ok(())
}
/// [`BlockchainManagerRequest::Stop`]
pub async fn stop(blockchain_manager: &mut BlockchainManagerHandle) -> Result<(), Error> {
let BlockchainManagerResponse::Ok = blockchain_manager
.ready()
.await?
.call(BlockchainManagerRequest::Stop)
.await?
else {
unreachable!();
};
Ok(())
}
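All of these handlers share one shape: wait for the tower service with `ready()`, send a single request variant, then destructure the matching response variant with `let`-`else`, marking every other arm `unreachable!()` since each request maps to exactly one response. A toy sketch of that destructuring with hypothetical enums (no tower involved):

```rust
/// Toy response enum standing in for e.g. `BlockchainManagerResponse`.
#[allow(dead_code)]
enum Response {
    Synced(bool),
    TargetHeight { height: u64 },
}

/// Hypothetical service call: each request maps to exactly one
/// response variant, which is what justifies `unreachable!()`.
fn call_synced() -> Response {
    Response::Synced(true)
}

fn synced() -> bool {
    let Response::Synced(synced) = call_synced() else {
        // A mismatched variant would be a logic bug in the service,
        // not a recoverable error.
        unreachable!();
    };
    synced
}
```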


@ -0,0 +1,244 @@
//! Functions to send [`TxpoolReadRequest`]s.
use std::{collections::HashSet, convert::Infallible, num::NonZero};
use anyhow::{anyhow, Error};
use monero_serai::transaction::Transaction;
use tower::{Service, ServiceExt};
use cuprate_helper::cast::usize_to_u64;
use cuprate_rpc_types::misc::{SpentKeyImageInfo, TxInfo};
use cuprate_txpool::{
service::{
interface::{TxpoolReadRequest, TxpoolReadResponse},
TxpoolReadHandle,
},
TxEntry,
};
use cuprate_types::{
rpc::{PoolInfo, PoolInfoFull, PoolInfoIncremental, PoolTxInfo, TxpoolStats},
TxInPool, TxRelayChecks,
};
// FIXME: use `anyhow::Error` over `tower::BoxError` in txpool.
/// [`TxpoolReadRequest::Backlog`]
pub async fn backlog(txpool_read: &mut TxpoolReadHandle) -> Result<Vec<TxEntry>, Error> {
let TxpoolReadResponse::Backlog(tx_entries) = txpool_read
.ready()
.await
.map_err(|e| anyhow!(e))?
.call(TxpoolReadRequest::Backlog)
.await
.map_err(|e| anyhow!(e))?
else {
unreachable!();
};
Ok(tx_entries)
}
/// [`TxpoolReadRequest::Size`]
pub async fn size(
txpool_read: &mut TxpoolReadHandle,
include_sensitive_txs: bool,
) -> Result<u64, Error> {
let TxpoolReadResponse::Size(size) = txpool_read
.ready()
.await
.map_err(|e| anyhow!(e))?
.call(TxpoolReadRequest::Size {
include_sensitive_txs,
})
.await
.map_err(|e| anyhow!(e))?
else {
unreachable!();
};
Ok(usize_to_u64(size))
}
/// [`TxpoolReadRequest::PoolInfo`]
pub async fn pool_info(
txpool_read: &mut TxpoolReadHandle,
include_sensitive_txs: bool,
max_tx_count: usize,
start_time: Option<NonZero<usize>>,
) -> Result<PoolInfo, Error> {
let TxpoolReadResponse::PoolInfo(pool_info) = txpool_read
.ready()
.await
.map_err(|e| anyhow!(e))?
.call(TxpoolReadRequest::PoolInfo {
include_sensitive_txs,
max_tx_count,
start_time,
})
.await
.map_err(|e| anyhow!(e))?
else {
unreachable!();
};
Ok(pool_info)
}
/// [`TxpoolReadRequest::TxsByHash`]
pub async fn txs_by_hash(
txpool_read: &mut TxpoolReadHandle,
tx_hashes: Vec<[u8; 32]>,
include_sensitive_txs: bool,
) -> Result<Vec<TxInPool>, Error> {
let TxpoolReadResponse::TxsByHash(txs_in_pool) = txpool_read
.ready()
.await
.map_err(|e| anyhow!(e))?
.call(TxpoolReadRequest::TxsByHash {
tx_hashes,
include_sensitive_txs,
})
.await
.map_err(|e| anyhow!(e))?
else {
unreachable!();
};
Ok(txs_in_pool)
}
/// [`TxpoolReadRequest::KeyImagesSpent`]
pub async fn key_images_spent(
txpool_read: &mut TxpoolReadHandle,
key_images: HashSet<[u8; 32]>,
include_sensitive_txs: bool,
) -> Result<bool, Error> {
let TxpoolReadResponse::KeyImagesSpent(status) = txpool_read
.ready()
.await
.map_err(|e| anyhow!(e))?
.call(TxpoolReadRequest::KeyImagesSpent {
key_images,
include_sensitive_txs,
})
.await
.map_err(|e| anyhow!(e))?
else {
unreachable!();
};
Ok(status)
}
/// [`TxpoolReadRequest::KeyImagesSpentVec`]
pub async fn key_images_spent_vec(
txpool_read: &mut TxpoolReadHandle,
key_images: Vec<[u8; 32]>,
include_sensitive_txs: bool,
) -> Result<Vec<bool>, Error> {
let TxpoolReadResponse::KeyImagesSpentVec(status) = txpool_read
.ready()
.await
.map_err(|e| anyhow!(e))?
.call(TxpoolReadRequest::KeyImagesSpentVec {
key_images,
include_sensitive_txs,
})
.await
.map_err(|e| anyhow!(e))?
else {
unreachable!();
};
Ok(status)
}
/// [`TxpoolReadRequest::Pool`]
pub async fn pool(
txpool_read: &mut TxpoolReadHandle,
include_sensitive_txs: bool,
) -> Result<(Vec<TxInfo>, Vec<SpentKeyImageInfo>), Error> {
let TxpoolReadResponse::Pool {
txs,
spent_key_images,
} = txpool_read
.ready()
.await
.map_err(|e| anyhow!(e))?
.call(TxpoolReadRequest::Pool {
include_sensitive_txs,
})
.await
.map_err(|e| anyhow!(e))?
else {
unreachable!();
};
let txs = txs.into_iter().map(Into::into).collect();
let spent_key_images = spent_key_images.into_iter().map(Into::into).collect();
Ok((txs, spent_key_images))
}
/// [`TxpoolReadRequest::PoolStats`]
pub async fn pool_stats(
txpool_read: &mut TxpoolReadHandle,
include_sensitive_txs: bool,
) -> Result<TxpoolStats, Error> {
let TxpoolReadResponse::PoolStats(txpool_stats) = txpool_read
.ready()
.await
.map_err(|e| anyhow!(e))?
.call(TxpoolReadRequest::PoolStats {
include_sensitive_txs,
})
.await
.map_err(|e| anyhow!(e))?
else {
unreachable!();
};
Ok(txpool_stats)
}
/// [`TxpoolReadRequest::AllHashes`]
pub async fn all_hashes(
txpool_read: &mut TxpoolReadHandle,
include_sensitive_txs: bool,
) -> Result<Vec<[u8; 32]>, Error> {
let TxpoolReadResponse::AllHashes(hashes) = txpool_read
.ready()
.await
.map_err(|e| anyhow!(e))?
.call(TxpoolReadRequest::AllHashes {
include_sensitive_txs,
})
.await
.map_err(|e| anyhow!(e))?
else {
unreachable!();
};
Ok(hashes)
}
/// TODO: impl txpool manager.
pub async fn flush(txpool_manager: &mut Infallible, tx_hashes: Vec<[u8; 32]>) -> Result<(), Error> {
todo!();
Ok(())
}
/// TODO: impl txpool manager.
pub async fn relay(txpool_manager: &mut Infallible, tx_hashes: Vec<[u8; 32]>) -> Result<(), Error> {
todo!();
Ok(())
}
/// TODO: impl txpool manager.
pub async fn check_maybe_relay_local(
txpool_manager: &mut Infallible,
tx: Transaction,
relay: bool,
) -> Result<TxRelayChecks, Error> {
Ok(todo!())
}


@ -8,6 +8,7 @@ use cuprate_txpool::service::{TxpoolReadHandle, TxpoolWriteHandle};
mod dandelion;
mod incoming_tx;
mod relay_rules;
mod txs_being_handled;
pub use incoming_tx::{IncomingTxError, IncomingTxHandler, IncomingTxs};


@ -40,6 +40,7 @@ use crate::{
signals::REORG_LOCK,
txpool::{
dandelion,
relay_rules::check_tx_relay_rules,
txs_being_handled::{TxsBeingHandled, TxsBeingHandledLocally},
},
};
@ -179,6 +180,14 @@ async fn handle_incoming_txs(
.map_err(IncomingTxError::Consensus)?;
for tx in txs {
// TODO: this could be a DoS vector if someone spams us with txs that violate these rules.
// Maybe we should remember these invalid txs for some time to prevent them from being repeatedly sent.
if let Err(e) = check_tx_relay_rules(&tx, context) {
tracing::debug!(err = %e, tx = hex::encode(tx.tx_hash), "Tx failed relay check, skipping.");
continue;
}
handle_valid_tx(
tx,
state.clone(),


@ -0,0 +1,87 @@
use std::cmp::max;
use monero_serai::transaction::Timelock;
use thiserror::Error;
use cuprate_consensus_context::BlockchainContext;
use cuprate_consensus_rules::miner_tx::calculate_block_reward;
use cuprate_helper::cast::usize_to_u64;
use cuprate_types::TransactionVerificationData;
/// The maximum size of the tx extra field.
///
/// <https://github.com/monero-project/monero/blob/3b01c490953fe92f3c6628fa31d280a4f0490d28/src/cryptonote_config.h#L217>
const MAX_TX_EXTRA_SIZE: usize = 1060;
/// <https://github.com/monero-project/monero/blob/3b01c490953fe92f3c6628fa31d280a4f0490d28/src/cryptonote_config.h#L75>
const DYNAMIC_FEE_REFERENCE_TRANSACTION_WEIGHT: u128 = 3_000;
/// <https://github.com/monero-project/monero/blob/3b01c490953fe92f3c6628fa31d280a4f0490d28/src/cryptonote_core/blockchain.h#L646>
const FEE_MASK: u64 = 10_u64.pow(4);
#[derive(Debug, Error)]
pub enum RelayRuleError {
#[error("Tx has non-zero timelock.")]
NonZeroTimelock,
#[error("Tx extra field is too large.")]
ExtraFieldTooLarge,
#[error("Tx fee too low.")]
FeeBelowMinimum,
}
/// Checks the transaction passes the relay rules.
///
Relay rules govern which txs we accept into our tx-pool and propagate around the network.
pub fn check_tx_relay_rules(
tx: &TransactionVerificationData,
context: &BlockchainContext,
) -> Result<(), RelayRuleError> {
if tx.tx.prefix().additional_timelock != Timelock::None {
return Err(RelayRuleError::NonZeroTimelock);
}
if tx.tx.prefix().extra.len() > MAX_TX_EXTRA_SIZE {
return Err(RelayRuleError::ExtraFieldTooLarge);
}
check_fee(tx.tx_weight, tx.fee, context)
}
/// Checks the fee is enough for the tx weight and current blockchain state.
fn check_fee(
tx_weight: usize,
fee: u64,
context: &BlockchainContext,
) -> Result<(), RelayRuleError> {
let base_reward = calculate_block_reward(
1,
context.effective_median_weight,
context.already_generated_coins,
context.current_hf,
);
let fee_per_byte = dynamic_base_fee(base_reward, context.effective_median_weight);
let needed_fee = usize_to_u64(tx_weight) * fee_per_byte;
let needed_fee = needed_fee.div_ceil(FEE_MASK) * FEE_MASK;
if fee < (needed_fee - needed_fee / 50) {
tracing::debug!(fee, needed_fee, "Tx fee is below minimum.");
return Err(RelayRuleError::FeeBelowMinimum);
}
Ok(())
}
/// Calculates the base fee per byte for tx relay.
fn dynamic_base_fee(base_reward: u64, effective_median_block_weight: usize) -> u64 {
let median_block_weight = effective_median_block_weight as u128;
let fee_per_byte_100 = u128::from(base_reward) * DYNAMIC_FEE_REFERENCE_TRANSACTION_WEIGHT
/ median_block_weight
/ median_block_weight;
let fee_per_byte = fee_per_byte_100 - fee_per_byte_100 / 20;
#[expect(clippy::cast_possible_truncation)]
max(fee_per_byte as u64, 1)
}
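The fee check above rounds the needed fee *up* to a multiple of `FEE_MASK`, then tolerates a 2% shortfall (`needed_fee / 50`) below that. A self-contained sketch of the arithmetic, with `fee_per_byte` supplied directly instead of computed by `dynamic_base_fee`:

```rust
/// Quantization mask from the diff above.
const FEE_MASK: u64 = 10_u64.pow(4);

/// Sketch of the comparison in `check_fee`.
fn fee_is_sufficient(tx_weight: u64, fee: u64, fee_per_byte: u64) -> bool {
    // Round the needed fee up to the nearest multiple of FEE_MASK...
    let needed_fee = (tx_weight * fee_per_byte).div_ceil(FEE_MASK) * FEE_MASK;
    // ...then allow a 2% tolerance below it.
    fee >= needed_fee - needed_fee / 50
}
```

For example, with `tx_weight = 1500` and `fee_per_byte = 7`, the raw fee of 10 500 rounds up to 20 000, and the 2% tolerance lowers the threshold to 19 600.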


@ -21,17 +21,16 @@ cargo install mdbook
## Building
To build a book, go into a book's directory and build:
To build a book, from the root of Cuprate:
```bash
# This builds Cuprate's user book.
cd user/
mdbook build
mdbook build ./books/user
```
The output will be in the `book` subdirectory (`books/user/book` for the above example). To open the book in
your web browser, pass `--open`:
```bash
mdbook build --open
mdbook build ./books/user --open
```


@ -24,7 +24,6 @@ cargo doc --open --package cuprate-blockchain
| Crate | In-tree path | Purpose |
|-------|--------------|---------|
| [`cuprate-epee-encoding`](https://doc.cuprate.org/cuprate_epee_encoding) | [`net/epee-encoding/`](https://github.com/Cuprate/cuprate/tree/main/net/epee-encoding) | Epee (de)serialization
| [`cuprate-fixed-bytes`](https://doc.cuprate.org/cuprate_fixed_bytes) | [`net/fixed-bytes/`](https://github.com/Cuprate/cuprate/tree/main/net/fixed-bytes) | Fixed byte containers backed by `byte::Byte`
| [`cuprate-levin`](https://doc.cuprate.org/cuprate_levin) | [`net/levin/`](https://github.com/Cuprate/cuprate/tree/main/net/levin) | Levin bucket protocol implementation
| [`cuprate-wire`](https://doc.cuprate.org/cuprate_wire) | [`net/wire/`](https://github.com/Cuprate/cuprate/tree/main/net/wire) | TODO
@ -46,6 +45,13 @@ cargo doc --open --package cuprate-blockchain
| [`cuprate-database-service`](https://doc.cuprate.org/cuprate_database_service) | [`storage/database-service/`](https://github.com/Cuprate/cuprate/tree/main/storage/database-service) | `tower::Service` + thread-pool abstraction built on-top of `cuprate-database`
| [`cuprate-txpool`](https://doc.cuprate.org/cuprate_txpool) | [`storage/txpool/`](https://github.com/Cuprate/cuprate/tree/main/storage/txpool) | Transaction pool database built on-top of `cuprate-database` & `cuprate-database-service`
## Types
| Crate | In-tree path | Purpose |
|-------|--------------|---------|
| [`cuprate-types`](https://doc.cuprate.org/cuprate_types) | [`types/types/`](https://github.com/Cuprate/cuprate/tree/main/types/types) | General types used throughout Cuprate |
| [`cuprate-hex`](https://doc.cuprate.org/cuprate_hex) | [`types/hex/`](https://github.com/Cuprate/cuprate/tree/main/types/hex) | Hexadecimal data types |
| [`cuprate-fixed-bytes`](https://doc.cuprate.org/cuprate_fixed_bytes) | [`types/fixed-bytes/`](https://github.com/Cuprate/cuprate/tree/main/net/fixed-bytes) | Fixed byte containers backed by `byte::Byte`
## RPC
| Crate | In-tree path | Purpose |
|-------|--------------|---------|
@ -66,5 +72,4 @@ cargo doc --open --package cuprate-blockchain
| [`cuprate-cryptonight`](https://doc.cuprate.org/cuprate_cryptonight) | [`cryptonight/`](https://github.com/Cuprate/cuprate/tree/main/cryptonight) | CryptoNight hash functions
| [`cuprate-pruning`](https://doc.cuprate.org/cuprate_pruning) | [`pruning/`](https://github.com/Cuprate/cuprate/tree/main/pruning) | Monero pruning logic/types
| [`cuprate-helper`](https://doc.cuprate.org/cuprate_helper) | [`helper/`](https://github.com/Cuprate/cuprate/tree/main/helper) | Kitchen-sink helper crate for Cuprate
| [`cuprate-test-utils`](https://doc.cuprate.org/cuprate_test_utils) | [`test-utils/`](https://github.com/Cuprate/cuprate/tree/main/test-utils) | Testing utilities for Cuprate
| [`cuprate-types`](https://doc.cuprate.org/cuprate_types) | [`types/`](https://github.com/Cuprate/cuprate/tree/main/types) | Shared types across Cuprate
| [`cuprate-test-utils`](https://doc.cuprate.org/cuprate_test_utils) | [`test-utils/`](https://github.com/Cuprate/cuprate/tree/main/test-utils) | Testing utilities for Cuprate

View file

@ -3,10 +3,13 @@ authors = ["hinto-janai"]
language = "en"
multilingual = false
src = "src"
title = "Cuprate User Book - v0.0.1"
title = "Cuprate User Book - v0.0.2"
git-repository-url = "https://github.com/Cuprate/cuprate/books/user"
[output.html]
default-theme = "ayu"
preferred-dark-theme = "ayu"
no-section-label = true
[preprocessor.gen_config]
command = "bash ./books/user/gen_config.sh"

books/user/gen_config.sh Normal file
View file

@ -0,0 +1,16 @@
#!/bin/bash
# https://rust-lang.github.io/mdBook/for_developers/preprocessors.html
# This script is called twice; the first call just checks support.
if [ "$1" == "supports" ]; then
# return 0 - we support everything.
exit 0;
fi
# Second call - generate config.
cargo run --bin cuprated -- --generate-config > ./books/user/Cuprated.toml
# This looks weird, but mdbook hands us 2 JSON maps; we need to return the second with any edits we want to make.
# We don't want to make any edits, so we can just read & return the second JSON map straight away.
jq '.[1]'

View file

@ -11,6 +11,7 @@
- [Running](getting-started/run.md)
- [Configuration](config.md)
- [Command line](cli.md)
- [Resources](resources/intro.md)
@ -19,4 +20,6 @@
- [IP](resources/ip.md)
- [Platform support](platform.md)
- [License](license.md)
- [License](license.md)
<!-- TODO: - [Glossary](glossary/intro.md) or maybe a wiki? -->

View file

@ -13,5 +13,23 @@ Usage: `cuprated [OPTIONS]`
| `--config-file <CONFIG_FILE>` | The PATH of the `cuprated` config file | `Cuprated.toml` |
| `--generate-config` | Generate a config file and print it to stdout | |
| `--skip-config-warning` | Stops the missing config warning and startup delay if a config file is missing | |
| `-v`, `--version` | Print misc version information in JSON | |
| `-h`, `--help` | Print help | |
| `--version` | Print misc version information in JSON | |
| `--help` | Print help | |
## `--version`
The `--version` flag outputs the following info in JSON.
| Field | Type | Description |
|-------------------------|--------|-------------|
| `major_version` | Number | Major version of `cuprated` |
| `minor_version` | Number | Minor version of `cuprated` |
| `patch_version` | Number | Patch version of `cuprated` |
| `rpc_major_version` | Number | Major RPC version (follows `monerod`) |
| `rpc_minor_version` | Number | Minor RPC version (follows `monerod`) |
| `rpc_version` | Number | RPC version (follows `monerod`) |
| `hardfork` | Number | Current hardfork version |
| `blockchain_db_version` | Number | Blockchain database version (separate from `monerod`) |
| `semantic_version` | String | Semantic version of `cuprated` |
| `build` | String | Build of `cuprated`, either `debug` or `release` |
| `commit` | String | `git` commit hash of `cuprated` |
| `killswitch_timestamp` | Number | Timestamp at which `cuprated`'s killswitch activates |
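An illustrative example of the output shape described by the table (every value below is a placeholder, not real `cuprated` output):
```json
{
  "major_version": 0,
  "minor_version": 0,
  "patch_version": 2,
  "rpc_major_version": 3,
  "rpc_minor_version": 14,
  "rpc_version": 196622,
  "hardfork": 16,
  "blockchain_db_version": 1,
  "semantic_version": "0.0.2",
  "build": "release",
  "commit": "0000000000000000000000000000000000000000",
  "killswitch_timestamp": 0
}
```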

View file

@ -7,10 +7,12 @@
- [OS specific directory](./resources/disk.md)
## `Cuprated.toml`
This is the default configuration file `cuprated` creates and uses, sourced from [here](https://github.com/Cuprate/cuprate/blob/main/binaries/cuprated/config/Cuprated.toml).
This is the default configuration file `cuprated` creates and uses.
If `cuprated` is started with no [`--options`](./cli.md), then the configuration used will be equivalent to this config file.
> Some values may differ on your exact system; generate the config with `cuprated --generate-config` to see the defaults for your system.
```toml
{{#include ../../../binaries/cuprated/config/Cuprated.toml}}
{{#include ../Cuprated.toml}}
```

View file

@ -1,12 +1,12 @@
# Download
For convenience, Cuprate offers pre-built binaries for `cuprated` for the platforms listed in [`Platform support`](../platform.md) using GitHub CI in a non-reproducible way; it is highly recommended to build `cuprated` from source instead, see [`Building from source`](./source.md).
Cuprate offers pre-built binaries for `cuprated` for the platforms listed in [`Platform support`](../platform.md) **using GitHub CI in a non-reproducible way** for convenience; it is highly recommended to build `cuprated` from source instead, see [`Building from source`](./source.md).
| Platform | Download |
|------------------------------|----------|
| Windows x86_64 | <https://github.com/Cuprate/cuprate/releases/download/cuprated-0.0.1/cuprated-0.0.1-windows-x64.zip>
| macOS x86_64 | <https://github.com/Cuprate/cuprate/releases/download/cuprated-0.0.1/cuprated-0.0.1-macos-x64.tar.gz>
| macOS ARM64 | <https://github.com/Cuprate/cuprate/releases/download/cuprated-0.0.1/cuprated-0.0.1-macos-arm64.tar.gz>
| Linux x86_64 (glibc >= 2.36) | <https://github.com/Cuprate/cuprate/releases/download/cuprated-0.0.1/cuprated-0.0.1-linux-x64.tar.gz>
| Linux ARM64 (glibc >= 2.36) | <https://github.com/Cuprate/cuprate/releases/download/cuprated-0.0.1/cuprated-0.0.1-linux-arm64.tar.gz>
| Windows x86_64 | <https://github.com/Cuprate/cuprate/releases/download/cuprated-0.0.2/cuprated-0.0.2-windows-x64.zip>
| macOS x86_64 | <https://github.com/Cuprate/cuprate/releases/download/cuprated-0.0.2/cuprated-0.0.2-macos-x64.tar.gz>
| macOS ARM64 | <https://github.com/Cuprate/cuprate/releases/download/cuprated-0.0.2/cuprated-0.0.2-macos-arm64.tar.gz>
| Linux x86_64 (glibc >= 2.36) | <https://github.com/Cuprate/cuprate/releases/download/cuprated-0.0.2/cuprated-0.0.2-linux-x64.tar.gz>
| Linux ARM64 (glibc >= 2.36) | <https://github.com/Cuprate/cuprate/releases/download/cuprated-0.0.2/cuprated-0.0.2-linux-arm64.tar.gz>
All release files are archived and also available at <https://archive.hinto.rs>.

View file

@ -18,7 +18,7 @@ Install the required system dependencies:
```bash
# Debian/Ubuntu
sudo apt install -y build-essentials cmake git
sudo apt install -y build-essential cmake git
# Arch
sudo pacman -Syu base-devel cmake git

View file

@ -17,7 +17,7 @@ Frequently asked questions about Cuprate.
## Who?
Cuprate was started by [SyntheticBird45](https://github.com/SyntheticBird45) in [early 2023](https://github.com/Cuprate/cuprate/commit/2c7cb27548c727550ce4684cb31d0eafcf852c8e) and was later joined by [boog900](https://github.com/boog900), [hinto-janai](https://github.com/hinto-janai), and [other contributors](https://github.com/Cuprate/cuprate/graphs/contributors).
A few Cuprate contributors are funded by Monero's [Community Crowdfunding System](http://ccs.getmonero.org) to work on Cuprate and occasionally `monerod`.
A few Cuprate contributors are funded by Monero's [Community Crowdfunding System](https://ccs.getmonero.org) to work on Cuprate and occasionally `monerod`.
## What is `cuprated`?
`monerod` is the [daemon](https://en.wikipedia.org/wiki/Daemon_(computing)) of the Monero project, the Monero node.
@ -39,7 +39,7 @@ No.
## Is it safe to run `cuprated`?
**⚠️ This project is still in development; do NOT use `cuprated` for any serious purposes ⚠️**
`cuprated` is fine to run for casual purposes and has a similar attack surface to other network connected services.
`cuprated` is fine to run for non-serious purposes and has a similar attack surface to other network connected services.
See [`Resources`](./resources/intro.md) for information on what system resources `cuprated` will use.

View file

@ -29,6 +29,7 @@ Tier 2 targets can be thought of as "guaranteed to build".
| Target | Notes |
|-----------------------------|--------|
| `x86_64-pc-windows-msvc` | x64 Windows (MSVC, Windows Server 2022+)
| `x86_64-apple-darwin` | x64 macOS
## Tier 3
@ -42,5 +43,4 @@ Official builds are not available, but may eventually be planned.
| `aarch64-unknown-linux-musl` | ARM64 Linux (musl 1.2.3)
| `x86_64-unknown-freebsd` | x64 FreeBSD
| `aarch64-unknown-freebsd` | ARM64 FreeBSD
| `aarch64-pc-windows-msvc` | ARM64 Windows (MSVC, Windows Server 2022+)
| `x86_64-apple-darwin` | x64 macOS
| `aarch64-pc-windows-msvc` | ARM64 Windows (MSVC, Windows Server 2022+)

View file

@ -1,5 +1,7 @@
# Ports
`cuprated` currently uses a single port to accept incoming P2P connections.
By default, this port is randomly selected.
See the [`p2p_port` option in the config file](../config.md) to manually set this port.
By default, this port is `18080`.
See the [`p2p_port` option in the config file](../config.md) to manually set this port.
Setting the port to `0` will disable incoming P2P connections.

View file

@ -52,9 +52,10 @@ impl AltChainContextCache {
block_weight: usize,
long_term_block_weight: usize,
timestamp: u64,
cumulative_difficulty: u128,
) {
if let Some(difficulty_cache) = &mut self.difficulty_cache {
difficulty_cache.new_block(height, timestamp, difficulty_cache.cumulative_difficulty());
difficulty_cache.new_block(height, timestamp, cumulative_difficulty);
}
if let Some(weight_cache) = &mut self.weight_cache {

View file

@ -36,17 +36,11 @@ pub struct DifficultyCacheConfig {
pub window: usize,
pub cut: usize,
pub lag: usize,
/// If [`Some`] the difficulty cache will always return this value as the current difficulty.
pub fixed_difficulty: Option<u128>,
}
impl DifficultyCacheConfig {
/// Create a new difficulty cache config.
///
/// # Notes
/// You probably do not need this, use [`DifficultyCacheConfig::main_net`] instead.
pub const fn new(window: usize, cut: usize, lag: usize) -> Self {
Self { window, cut, lag }
}
/// Returns the total amount of blocks we need to track to calculate difficulty
pub const fn total_block_count(&self) -> usize {
self.window + self.lag
@ -64,6 +58,7 @@ impl DifficultyCacheConfig {
window: DIFFICULTY_WINDOW,
cut: DIFFICULTY_CUT,
lag: DIFFICULTY_LAG,
fixed_difficulty: None,
}
}
}
@ -297,6 +292,10 @@ fn next_difficulty(
cumulative_difficulties: &VecDeque<u128>,
hf: HardFork,
) -> u128 {
if let Some(fixed_difficulty) = config.fixed_difficulty {
return fixed_difficulty;
}
if timestamps.len() <= 1 {
return 1;
}
@ -349,7 +348,7 @@ fn get_window_start_and_end(
if window_len <= accounted_window {
(0, window_len)
} else {
let start = (window_len - (accounted_window) + 1) / 2;
let start = (window_len - (accounted_window)).div_ceil(2);
(start, start + accounted_window)
}
}
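The hunk above swaps a hand-rolled rounding-up division for `div_ceil`. A quick standalone check of the equivalence (not part of the codebase; `div_ceil` on integers is stable since Rust 1.73):

```rust
fn main() {
    // `(n + 1) / 2` is the manual way to round `n / 2` up;
    // `n.div_ceil(2)` states that intent directly.
    for n in 0_usize..1_000 {
        assert_eq!((n + 1) / 2, n.div_ceil(2));
    }
    println!("equivalent");
}
```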

View file

@ -36,7 +36,10 @@ pub mod weight;
mod alt_chains;
mod task;
use cuprate_types::{Chain, ChainInfo, FeeEstimate, HardForkInfo};
use cuprate_types::{
rpc::{ChainInfo, FeeEstimate, HardForkInfo},
Chain,
};
use difficulty::DifficultyCache;
use rx_vms::RandomXVm;
use weight::BlockWeightsCache;

View file

@ -24,6 +24,9 @@ tokio = { workspace = true, features = ["full"] }
tower = { workspace = true }
[dev-dependencies]
proptest = { workspace = true }
tokio-test = { workspace = true }
tempfile = { workspace = true }
[lints]
workspace = true

View file

@ -98,6 +98,24 @@ pub async fn validate_entries<N: NetworkZone>(
let mut hashes_stop_diff_last_height = last_height - hashes_stop_height;
// get the hashes we are missing to create the first fast-sync hash.
let BlockchainResponse::BlockHashInRange(starting_hashes) = blockchain_read_handle
.ready()
.await?
.call(BlockchainReadRequest::BlockHashInRange(
hashes_start_height..start_height,
Chain::Main,
))
.await?
else {
unreachable!()
};
// If we don't have enough hashes to make up a batch we can't validate any.
if amount_of_hashes + starting_hashes.len() < FAST_SYNC_BATCH_LEN {
return Ok((VecDeque::new(), entries));
}
let mut unknown = VecDeque::new();
// start moving from the back of the batches taking enough hashes out so we are only left with hashes
@ -125,23 +143,10 @@ pub async fn validate_entries<N: NetworkZone>(
unknown.push_front(back);
}
// get the hashes we are missing to create the first fast-sync hash.
let BlockchainResponse::BlockHashInRange(hashes) = blockchain_read_handle
.ready()
.await?
.call(BlockchainReadRequest::BlockHashInRange(
hashes_start_height..start_height,
Chain::Main,
))
.await?
else {
unreachable!()
};
// Start verifying the hashes.
let mut hasher = Hasher::default();
let mut last_i = 1;
for (i, hash) in hashes
for (i, hash) in starting_hashes
.iter()
.chain(entries.iter().flat_map(|e| e.ids.iter()))
.enumerate()
@ -245,3 +250,148 @@ pub fn block_to_verified_block_information(
block,
}
}
#[cfg(test)]
mod tests {
use std::{collections::VecDeque, slice, sync::LazyLock};
use proptest::proptest;
use cuprate_p2p::block_downloader::ChainEntry;
use cuprate_p2p_core::{client::InternalPeerID, handles::HandleBuilder, ClearNet};
use crate::{
fast_sync_stop_height, set_fast_sync_hashes, validate_entries, FAST_SYNC_BATCH_LEN,
};
static HASHES: LazyLock<&[[u8; 32]]> = LazyLock::new(|| {
let hashes = (0..FAST_SYNC_BATCH_LEN * 2000)
.map(|i| {
let mut ret = [0; 32];
ret[..8].copy_from_slice(&i.to_le_bytes());
ret
})
.collect::<Vec<_>>();
let hashes = hashes.leak();
let fast_sync_hashes = hashes
.chunks(FAST_SYNC_BATCH_LEN)
.map(|chunk| {
let len = chunk.len() * 32;
let bytes = chunk.as_ptr().cast::<u8>();
// SAFETY:
// We are casting a valid [[u8; 32]] to a [u8], no alignment requirements and we are using it
// within the [[u8; 32]]'s lifetime.
unsafe { blake3::hash(slice::from_raw_parts(bytes, len)).into() }
})
.collect::<Vec<_>>();
set_fast_sync_hashes(fast_sync_hashes.leak());
hashes
});
proptest! {
#[test]
fn valid_entry(len in 0_usize..1_500_000) {
let mut ids = HASHES.to_vec();
ids.resize(len, [0_u8; 32]);
let handle = HandleBuilder::new().build();
let entry = ChainEntry {
ids,
peer: InternalPeerID::Unknown(1),
handle: handle.1
};
let data_dir = tempfile::tempdir().unwrap();
tokio_test::block_on(async move {
let blockchain_config = cuprate_blockchain::config::ConfigBuilder::new()
.data_directory(data_dir.path().to_path_buf())
.build();
let (mut blockchain_read_handle, _, _) =
cuprate_blockchain::service::init(blockchain_config).unwrap();
let ret = validate_entries::<ClearNet>(VecDeque::from([entry]), 0, &mut blockchain_read_handle).await.unwrap();
let len_left = ret.0.iter().map(|e| e.ids.len()).sum::<usize>();
let len_right = ret.1.iter().map(|e| e.ids.len()).sum::<usize>();
assert_eq!(len_left + len_right, len);
assert!(len_left <= fast_sync_stop_height());
assert!(len_right < FAST_SYNC_BATCH_LEN || len > fast_sync_stop_height());
});
}
#[test]
fn single_hash_entries(len in 0_usize..1_500_000) {
let handle = HandleBuilder::new().build();
let entries = (0..len).map(|i| {
ChainEntry {
ids: vec![HASHES.get(i).copied().unwrap_or_default()],
peer: InternalPeerID::Unknown(1),
handle: handle.1.clone()
}
}).collect();
let data_dir = tempfile::tempdir().unwrap();
tokio_test::block_on(async move {
let blockchain_config = cuprate_blockchain::config::ConfigBuilder::new()
.data_directory(data_dir.path().to_path_buf())
.build();
let (mut blockchain_read_handle, _, _) =
cuprate_blockchain::service::init(blockchain_config).unwrap();
let ret = validate_entries::<ClearNet>(entries, 0, &mut blockchain_read_handle).await.unwrap();
let len_left = ret.0.iter().map(|e| e.ids.len()).sum::<usize>();
let len_right = ret.1.iter().map(|e| e.ids.len()).sum::<usize>();
assert_eq!(len_left + len_right, len);
assert!(len_left <= fast_sync_stop_height());
assert!(len_right < FAST_SYNC_BATCH_LEN || len > fast_sync_stop_height());
});
}
#[test]
fn not_enough_hashes(len in 0_usize..FAST_SYNC_BATCH_LEN) {
let hashes_start_height = FAST_SYNC_BATCH_LEN * 1234;
let handle = HandleBuilder::new().build();
let entry = ChainEntry {
ids: HASHES[hashes_start_height..(hashes_start_height + len)].to_vec(),
peer: InternalPeerID::Unknown(1),
handle: handle.1
};
let data_dir = tempfile::tempdir().unwrap();
tokio_test::block_on(async move {
let blockchain_config = cuprate_blockchain::config::ConfigBuilder::new()
.data_directory(data_dir.path().to_path_buf())
.build();
let (mut blockchain_read_handle, _, _) =
cuprate_blockchain::service::init(blockchain_config).unwrap();
let ret = validate_entries::<ClearNet>(VecDeque::from([entry]), 0, &mut blockchain_read_handle).await.unwrap();
let len_left = ret.0.iter().map(|e| e.ids.len()).sum::<usize>();
let len_right = ret.1.iter().map(|e| e.ids.len()).sum::<usize>();
assert_eq!(len_right, len);
assert_eq!(len_left, 0);
});
}
}
}

View file

@ -173,6 +173,7 @@ where
block_info.weight,
block_info.long_term_weight,
block_info.block.header.timestamp,
cumulative_difficulty,
);
// Add this alt cache back to the context service.

View file

@ -17,8 +17,12 @@ const TEST_LAG: usize = 2;
const TEST_TOTAL_ACCOUNTED_BLOCKS: usize = TEST_WINDOW + TEST_LAG;
pub(crate) const TEST_DIFFICULTY_CONFIG: DifficultyCacheConfig =
DifficultyCacheConfig::new(TEST_WINDOW, TEST_CUT, TEST_LAG);
pub(crate) const TEST_DIFFICULTY_CONFIG: DifficultyCacheConfig = DifficultyCacheConfig {
window: TEST_WINDOW,
cut: TEST_CUT,
lag: TEST_LAG,
fixed_difficulty: None,
};
#[tokio::test]
async fn first_3_blocks_fixed_difficulty() -> Result<(), tower::BoxError> {

View file

@ -131,17 +131,20 @@ pub async fn get_output_cache<D: Database>(
txs_verification_data: impl Iterator<Item = &TransactionVerificationData>,
mut database: D,
) -> Result<OutputCache, ExtendedConsensusError> {
let mut output_ids = IndexMap::new();
let mut outputs = IndexMap::new();
for tx_v_data in txs_verification_data {
insert_ring_member_ids(&tx_v_data.tx.prefix().inputs, &mut output_ids)
insert_ring_member_ids(&tx_v_data.tx.prefix().inputs, &mut outputs)
.map_err(ConsensusError::Transaction)?;
}
let BlockchainResponse::Outputs(outputs) = database
.ready()
.await?
.call(BlockchainReadRequest::Outputs(output_ids))
.call(BlockchainReadRequest::Outputs {
outputs,
get_txid: false,
})
.await?
else {
unreachable!();
@ -160,10 +163,10 @@ pub async fn batch_get_ring_member_info<D: Database>(
mut database: D,
cache: Option<&OutputCache>,
) -> Result<Vec<TxRingMembersInfo>, ExtendedConsensusError> {
let mut output_ids = IndexMap::new();
let mut outputs = IndexMap::new();
for tx_v_data in txs_verification_data.clone() {
insert_ring_member_ids(&tx_v_data.tx.prefix().inputs, &mut output_ids)
insert_ring_member_ids(&tx_v_data.tx.prefix().inputs, &mut outputs)
.map_err(ConsensusError::Transaction)?;
}
@ -173,7 +176,10 @@ pub async fn batch_get_ring_member_info<D: Database>(
let BlockchainResponse::Outputs(outputs) = database
.ready()
.await?
.call(BlockchainReadRequest::Outputs(output_ids))
.call(BlockchainReadRequest::Outputs {
outputs,
get_txid: false,
})
.await?
else {
unreachable!();

View file

@ -31,7 +31,7 @@ fn dummy_database(outputs: BTreeMap<u64, OutputOnChain>) -> impl Database + Clon
BlockchainReadRequest::NumberOutputsWithAmount(_) => {
BlockchainResponse::NumberOutputsWithAmount(HashMap::new())
}
BlockchainReadRequest::Outputs(outs) => {
BlockchainReadRequest::Outputs { outputs: outs, .. } => {
let idxs = &outs[&0];
let mut ret = IndexMap::new();
@ -73,6 +73,7 @@ macro_rules! test_verify_valid_v2_tx {
time_lock: Timelock::None,
commitment: CompressedEdwardsY(hex_literal::hex!($commitment)),
key: CompressedEdwardsY(hex_literal::hex!($ring_member)),
txid: None,
}),)+)+
];
@ -100,6 +101,7 @@ macro_rules! test_verify_valid_v2_tx {
time_lock: Timelock::None,
commitment: ED25519_BASEPOINT_COMPRESSED,
key: CompressedEdwardsY(hex_literal::hex!($ring_member)),
txid: None,
}),)+)+
];

View file

@ -49,7 +49,7 @@ impl CnSlowHashState {
&self.b
}
fn get_keccak_bytes_mut(&mut self) -> &mut [u8; KECCAK1600_BYTE_SIZE] {
const fn get_keccak_bytes_mut(&mut self) -> &mut [u8; KECCAK1600_BYTE_SIZE] {
&mut self.b
}

View file

@ -83,7 +83,7 @@ ignore = [
#{ crate = "a-crate-that-is-yanked@0.1.1", reason = "you can specify why you are ignoring the yanked crate" },
# TODO: check this is sorted before a beta release.
{ id = "RUSTSEC-2024-0370", reason = "unmaintained crate, not necessarily vulnerable yet." }
{ id = "RUSTSEC-2024-0436", reason = "`paste` unmaintained, not necessarily vulnerable yet." }
]
# If this is true, then cargo deny will use the git executable to fetch advisory database.
# If this is false, then it uses a built-in git library.

View file

@ -23,6 +23,7 @@ map = ["cast", "dep:monero-serai", "dep:cuprate-constants"]
time = ["dep:chrono", "std"]
thread = ["std", "dep:target_os_lib"]
tx = ["dep:monero-serai"]
fmt = ["map", "std"]
[dependencies]
cuprate-constants = { workspace = true, optional = true, features = ["block"] }

View file

@ -3,6 +3,18 @@
//! This modules provides utilities for casting between types.
//!
//! `#[no_std]` compatible.
//!
//! # 64-bit invariant
//! This module is available on 32-bit arches, although panics
//! will occur on lossy casts, e.g. [`u64_to_usize`] where
//! the input is larger than [`u32::MAX`].
//!
//! On 64-bit arches, all functions are lossless.
// TODO:
// These casting functions are heavily used throughout the codebase
// yet it is not enforced that all usages are correct in 32-bit cases.
// Panicking may be a short-term solution - find a better fix for 32-bit arches.
#![allow(clippy::cast_possible_truncation)]
@ -10,44 +22,78 @@
//============================ SAFETY: DO NOT REMOVE ===========================//
// //
// //
// Only allow building 64-bit targets. //
// This allows us to assume 64-bit invariants in this file. //
#[cfg(not(target_pointer_width = "64"))]
compile_error!("Cuprate is only compatible with 64-bit CPUs");
// Only allow building {32,64}-bit targets. //
// This allows us to assume {32,64}-bit invariants in this file. //
#[cfg(not(any(target_pointer_width = "64", target_pointer_width = "32")))]
compile_error!("This module is only compatible with {32,64}-bit CPUs");
// //
// //
//============================ SAFETY: DO NOT REMOVE ===========================//
/// Cast [`u64`] to [`usize`].
#[inline(always)]
pub const fn u64_to_usize(u: u64) -> usize {
u as usize
#[cfg(target_pointer_width = "64")]
mod functions {
/// Cast [`u64`] to [`usize`].
#[inline(always)]
pub const fn u64_to_usize(u: u64) -> usize {
u as usize
}
/// Cast [`i64`] to [`isize`].
#[inline(always)]
pub const fn i64_to_isize(i: i64) -> isize {
i as isize
}
}
#[cfg(target_pointer_width = "32")]
mod functions {
/// Cast [`u64`] to [`usize`].
///
/// # Panics
/// This panics on 32-bit arches if `u` is larger than [`u32::MAX`].
#[inline(always)]
pub const fn u64_to_usize(u: u64) -> usize {
if u > u32::MAX as u64 {
panic!()
} else {
u as usize
}
}
/// Cast [`i64`] to [`isize`].
///
/// # Panics
/// This panics on 32-bit arches if `i` is less than [`i32::MIN`] or greater than [`i32::MAX`].
#[inline(always)]
pub const fn i64_to_isize(i: i64) -> isize {
if i < i32::MIN as i64 || i > i32::MAX as i64 {
panic!()
} else {
i as isize
}
}
}
pub use functions::{i64_to_isize, u64_to_usize};
/// Cast [`u32`] to [`usize`].
#[inline(always)]
pub const fn u32_to_usize(u: u32) -> usize {
u as usize
}
/// Cast [`usize`] to [`u64`].
#[inline(always)]
pub const fn usize_to_u64(u: usize) -> u64 {
u as u64
}
/// Cast [`i64`] to [`isize`].
#[inline(always)]
pub const fn i64_to_isize(i: i64) -> isize {
i as isize
}
/// Cast [`i32`] to [`isize`].
#[inline(always)]
pub const fn i32_to_isize(i: i32) -> isize {
i as isize
}
/// Cast [`usize`] to [`u64`].
#[inline(always)]
pub const fn usize_to_u64(u: usize) -> u64 {
u as u64
}
/// Cast [`isize`] to [`i64`].
#[inline(always)]
pub const fn isize_to_i64(i: isize) -> i64 {
@ -60,7 +106,8 @@ mod test {
use super::*;
#[test]
fn max_unsigned() {
#[cfg(target_pointer_width = "64")]
fn max_64bit() {
assert_eq!(u32_to_usize(u32::MAX), usize::try_from(u32::MAX).unwrap());
assert_eq!(usize_to_u64(u32_to_usize(u32::MAX)), u64::from(u32::MAX));
@ -69,10 +116,7 @@ mod test {
assert_eq!(usize_to_u64(usize::MAX), u64::MAX);
assert_eq!(u64_to_usize(usize_to_u64(usize::MAX)), usize::MAX);
}
#[test]
fn max_signed() {
assert_eq!(i32_to_isize(i32::MAX), isize::try_from(i32::MAX).unwrap());
assert_eq!(isize_to_i64(i32_to_isize(i32::MAX)), i64::from(i32::MAX));
@ -82,4 +126,25 @@ mod test {
assert_eq!(isize_to_i64(isize::MAX), i64::MAX);
assert_eq!(i64_to_isize(isize_to_i64(isize::MAX)), isize::MAX);
}
#[test]
#[cfg(target_pointer_width = "32")]
#[should_panic]
fn panic_u64_32bit() {
u64_to_usize(u64::from(u32::MAX) + 1);
}
#[test]
#[cfg(target_pointer_width = "32")]
#[should_panic]
fn panic_i64_lesser_32bit() {
i64_to_isize(i64::from(i32::MIN) - 1);
}
#[test]
#[cfg(target_pointer_width = "32")]
#[should_panic]
fn panic_i64_greater_32bit() {
i64_to_isize(i64::from(i32::MAX) + 1);
}
}

helper/src/fmt.rs Normal file
View file

@ -0,0 +1,44 @@
//! String formatting.
/// A type that can be represented in hexadecimal (with a `0x` prefix).
pub trait HexPrefix {
/// Turn `self` into a hexadecimal string prefixed with `0x`.
fn hex_prefix(self) -> String;
}
macro_rules! impl_hex_prefix {
($(
$t:ty
),*) => {
$(
impl HexPrefix for $t {
fn hex_prefix(self) -> String {
format!("{:#x}", self)
}
}
)*
};
}
impl_hex_prefix!(u8, u16, u32, u64, u128, i8, i16, i32, i64, i128, usize, isize);
impl HexPrefix for (u64, u64) {
/// Combine the low and high bits of a [`u128`] as a lower-case hexadecimal string prefixed with `0x`.
///
/// ```rust
/// # use cuprate_helper::fmt::HexPrefix;
/// assert_eq!((0, 0).hex_prefix(), "0x0");
/// assert_eq!((0, u64::MAX).hex_prefix(), "0xffffffffffffffff0000000000000000");
/// assert_eq!((u64::MAX, 0).hex_prefix(), "0xffffffffffffffff");
/// assert_eq!((u64::MAX, u64::MAX).hex_prefix(), "0xffffffffffffffffffffffffffffffff");
/// ```
fn hex_prefix(self) -> String {
format!(
"{:#x}",
crate::map::combine_low_high_bits_to_u128(self.0, self.1)
)
}
}
#[cfg(test)]
mod tests {}
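The `impl_hex_prefix!` macro above leans on Rust's `{:#x}` format specifier; a standalone sketch of what it produces:

```rust
fn main() {
    // `{:#x}` prints lowercase hex with a `0x` prefix and no zero-padding,
    // matching the doctest output shown above (e.g. `0x0` for zero).
    assert_eq!(format!("{:#x}", 0_u64), "0x0");
    assert_eq!(format!("{:#x}", 255_u8), "0xff");
    assert_eq!(format!("{:#x}", u128::MAX), "0xffffffffffffffffffffffffffffffff");
}
```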

View file

@ -11,7 +11,7 @@ pub mod atomic;
#[cfg(feature = "cast")]
pub mod cast;
#[cfg(all(feature = "fs", feature = "std"))]
#[cfg(feature = "fs")]
pub mod fs;
pub mod network;
@ -33,6 +33,6 @@ pub mod tx;
#[cfg(feature = "crypto")]
pub mod crypto;
//---------------------------------------------------------------------------------------------------- Private Usage
//----------------------------------------------------------------------------------------------------
#[cfg(feature = "fmt")]
pub mod fmt;

View file

@ -5,6 +5,8 @@
//! `#[no_std]` compatible.
//---------------------------------------------------------------------------------------------------- Use
use core::net::Ipv4Addr;
use monero_serai::transaction::Timelock;
use cuprate_constants::block::MAX_BLOCK_HEIGHT;
@ -28,6 +30,7 @@ use crate::cast::{u64_to_usize, usize_to_u64};
/// let high = u64::MAX;
///
/// assert_eq!(split_u128_into_low_high_bits(value), (low, high));
/// assert_eq!(split_u128_into_low_high_bits(0), (0, 0));
/// ```
#[inline]
pub const fn split_u128_into_low_high_bits(value: u128) -> (u64, u64) {
@ -52,6 +55,7 @@ pub const fn split_u128_into_low_high_bits(value: u128) -> (u64, u64) {
/// let high = u64::MAX;
///
/// assert_eq!(combine_low_high_bits_to_u128(low, high), value);
/// assert_eq!(combine_low_high_bits_to_u128(0, 0), 0);
/// ```
#[inline]
pub const fn combine_low_high_bits_to_u128(low_bits: u64, high_bits: u64) -> u128 {
@ -59,6 +63,24 @@ pub const fn combine_low_high_bits_to_u128(low_bits: u64, high_bits: u64) -> u12
res | (low_bits as u128)
}
//---------------------------------------------------------------------------------------------------- IPv4
/// Convert a [`u32`] to an [`Ipv4Addr`].
///
/// For why this exists, see: <https://architecture.cuprate.org/oddities/le-ipv4.html>.
#[inline]
pub const fn ipv4_from_u32(ip: u32) -> Ipv4Addr {
let [a, b, c, d] = ip.to_le_bytes();
Ipv4Addr::new(a, b, c, d)
}
/// Convert an [`Ipv4Addr`] to a [`u32`].
///
/// For why this exists, see: <https://architecture.cuprate.org/oddities/le-ipv4.html>.
#[inline]
pub const fn u32_from_ipv4(ip: Ipv4Addr) -> u32 {
u32::from_le_bytes(ip.octets())
}
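These two helpers are inverses built on little-endian byte order (see the linked oddity page). A standalone sketch of the round-trip, without the crate:

```rust
use std::net::Ipv4Addr;

fn main() {
    // Monero/epee encodes IPv4 addresses as a little-endian u32,
    // so `127.0.0.1` round-trips through `0x0100_007F`.
    let ip = Ipv4Addr::new(127, 0, 0, 1);
    let n = u32::from_le_bytes(ip.octets());
    assert_eq!(n, 0x0100_007F);
    let [a, b, c, d] = n.to_le_bytes();
    assert_eq!(Ipv4Addr::new(a, b, c, d), ip);
}
```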
//---------------------------------------------------------------------------------------------------- Timelock
/// Map a [`u64`] to a [`Timelock`].
///

View file

@ -1,2 +1,4 @@
# Changelogs
This directory holds changelog files for binaries/libraries.
This directory holds changelog files for binaries/libraries.
The `latest.md` file is a symlink to the latest changelog.

View file

@ -0,0 +1,36 @@
# cuprated 0.0.2 Molybdenite (2025-04-09)
Cuprate is an alternative Monero node implementation.
This is the second release of the Cuprate node, `cuprated`.
To get started, see: <https://user.cuprate.org>.
For an FAQ on Cuprate, see: <https://user.cuprate.org/#faq>.
## Changes
- User book changes, config documentation ([#402](https://github.com/Cuprate/cuprate/pull/402), [#406](https://github.com/Cuprate/cuprate/pull/406), [#418](https://github.com/Cuprate/cuprate/pull/418))
- Blockchain reorganizations fixes ([#408](https://github.com/Cuprate/cuprate/pull/408))
- STDOUT/STDIN fixes ([#415](https://github.com/Cuprate/cuprate/pull/415))
- Replace remaining `println` with `tracing` ([#417](https://github.com/Cuprate/cuprate/pull/417))
- Fix blockchain database error mappings ([#419](https://github.com/Cuprate/cuprate/pull/419))
- Update `fast-sync` to height `3384832` ([#427](https://github.com/Cuprate/cuprate/pull/427))
## Downloads
For convenience, the following binaries are produced using GitHub CI in a non-reproducible way; it is highly recommended to build `cuprated` from source instead, see <https://user.cuprate.org/getting-started/source>.
| OS | Architecture | Download |
|---------|--------------|----------|
| Linux | x64 | <https://github.com/Cuprate/cuprate/releases/download/cuprated-0.0.2/cuprated-0.0.2-linux-x64.tar.gz>
| Linux | ARM64 | <https://github.com/Cuprate/cuprate/releases/download/cuprated-0.0.2/cuprated-0.0.2-linux-arm64.tar.gz>
| macOS | x64 | <https://github.com/Cuprate/cuprate/releases/download/cuprated-0.0.2/cuprated-0.0.2-macos-x64.tar.gz>
| macOS | ARM64 | <https://github.com/Cuprate/cuprate/releases/download/cuprated-0.0.2/cuprated-0.0.2-macos-arm64.tar.gz>
| Windows | x64 | <https://github.com/Cuprate/cuprate/releases/download/cuprated-0.0.2/cuprated-0.0.2-windows-x64.zip>
## Contributors
Thank you to everyone who directly contributed to this release:
- @Boog900
- @hinto-janai
- @jermanuts
There are other contributors who are not listed here; thank you to them as well.

View file

@ -0,0 +1 @@
0.0.2.md

View file

@ -16,6 +16,7 @@ std = ["dep:thiserror", "bytes/std", "cuprate-fixed-bytes/std"]
[dependencies]
cuprate-fixed-bytes = { workspace = true, default-features = false }
cuprate-hex = { workspace = true, default-features = false }
paste = "1.0.15"
ref-cast = "1.0.23"

View file

@ -27,6 +27,10 @@ impl Error {
}
}
#[expect(
clippy::missing_const_for_fn,
reason = "False-positive, `<String as Deref>::deref` is not const"
)]
fn field_data(&self) -> &str {
match self {
Self::IO(data) | Self::Format(data) => data,

View file

@ -7,6 +7,7 @@ use core::fmt::Debug;
use bytes::{Buf, BufMut, Bytes, BytesMut};
use cuprate_fixed_bytes::{ByteArray, ByteArrayVec};
use cuprate_hex::{Hex, HexVec};
use crate::{
io::{checked_read_primitive, checked_write_primitive},
@@ -392,6 +393,53 @@ impl<const N: usize> EpeeValue for Vec<[u8; N]> {
}
}
impl<const N: usize> EpeeValue for Hex<N> {
const MARKER: Marker = <[u8; N] as EpeeValue>::MARKER;
fn read<B: Buf>(r: &mut B, marker: &Marker) -> Result<Self> {
Ok(Self(<[u8; N] as EpeeValue>::read(r, marker)?))
}
fn write<B: BufMut>(self, w: &mut B) -> Result<()> {
<[u8; N] as EpeeValue>::write(self.0, w)
}
}
impl<const N: usize> EpeeValue for Vec<Hex<N>> {
const MARKER: Marker = Vec::<[u8; N]>::MARKER;
fn read<B: Buf>(r: &mut B, marker: &Marker) -> Result<Self> {
Ok(Vec::<[u8; N]>::read(r, marker)?
.into_iter()
.map(Hex)
.collect())
}
fn should_write(&self) -> bool {
!self.is_empty()
}
fn epee_default_value() -> Option<Self> {
Some(Self::new())
}
fn write<B: BufMut>(self, w: &mut B) -> Result<()> {
write_iterator(self.into_iter(), w)
}
}
impl EpeeValue for HexVec {
const MARKER: Marker = <Vec<u8> as EpeeValue>::MARKER;
fn read<B: Buf>(r: &mut B, marker: &Marker) -> Result<Self> {
Ok(Self(<Vec<u8> as EpeeValue>::read(r, marker)?))
}
fn write<B: BufMut>(self, w: &mut B) -> Result<()> {
<Vec<u8> as EpeeValue>::write(self.0, w)
}
}
macro_rules! epee_seq {
($val:ty) => {
impl EpeeValue for Vec<$val> {
@@ -458,6 +506,7 @@ epee_seq!(u16);
epee_seq!(f64);
epee_seq!(bool);
epee_seq!(Vec<u8>);
epee_seq!(HexVec);
epee_seq!(String);
epee_seq!(Bytes);
epee_seq!(BytesMut);

View file

@@ -194,11 +194,11 @@ impl<C: LevinCommand> BucketBuilder<C> {
}
}
pub fn set_signature(&mut self, sig: u64) {
pub const fn set_signature(&mut self, sig: u64) {
self.signature = Some(sig);
}
pub fn set_message_type(&mut self, ty: MessageType) {
pub const fn set_message_type(&mut self, ty: MessageType) {
self.ty = Some(ty);
}
@@ -206,11 +206,11 @@ impl<C: LevinCommand> BucketBuilder<C> {
self.command = Some(command);
}
pub fn set_return_code(&mut self, code: i32) {
pub const fn set_return_code(&mut self, code: i32) {
self.return_code = Some(code);
}
pub fn set_protocol_version(&mut self, version: u32) {
pub const fn set_protocol_version(&mut self, version: u32) {
self.protocol_version = Some(version);
}

View file

@@ -417,7 +417,8 @@ impl<Z: BorshNetworkZone> Service<AddressBookRequest<Z>> for AddressBook<Z> {
AddressBookRequest::GetBan(addr) => Ok(AddressBookResponse::GetBan {
unban_instant: self.peer_unban_instant(&addr).map(Instant::into_std),
}),
AddressBookRequest::PeerlistSize
AddressBookRequest::Peerlist
| AddressBookRequest::PeerlistSize
| AddressBookRequest::ConnectionCount
| AddressBookRequest::SetBan(_)
| AddressBookRequest::GetBans

View file

@@ -67,8 +67,11 @@ pub async fn init_address_book<Z: BorshNetworkZone>(
Ok(res) => res,
Err(e) if e.kind() == ErrorKind::NotFound => (vec![], vec![]),
Err(e) => {
tracing::error!("Failed to open peer list, {}", e);
panic!("{e}");
tracing::error!(
"Error: Failed to open peer list,\n{},\nstarting with an empty list",
e
);
(vec![], vec![])
}
};
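The hunk above makes a missing or unreadable peer-list file non-fatal: instead of logging and panicking, `cuprated` now starts with an empty list. The decision logic can be sketched in isolation with the standard library; the helper name and the `Vec<u8>` payloads here are illustrative stand-ins, not the actual `cuprate` types:

```rust
use std::io::{Error, ErrorKind};

// Map the result of reading the saved peer list to (white, grey) lists.
// A missing file is expected on first run; any other error is logged to
// stderr and also falls back to empty lists instead of panicking.
fn peer_lists_or_empty(res: Result<(Vec<u8>, Vec<u8>), Error>) -> (Vec<u8>, Vec<u8>) {
    match res {
        Ok(lists) => lists,
        Err(e) if e.kind() == ErrorKind::NotFound => (vec![], vec![]),
        Err(e) => {
            eprintln!("Failed to open peer list: {e}, starting with an empty list");
            (vec![], vec![])
        }
    }
}
```

Treating every error the same as `NotFound` trades durability of the saved list for availability of the daemon, which matches the PR title ("Use blank peer list if saved peer list cannot be read").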

View file

@@ -41,10 +41,12 @@ pub(crate) fn save_peers_to_disk<Z: BorshNetworkZone>(
let dir = cfg.peer_store_directory.clone();
let file = dir.join(Z::NAME);
let mut tmp_file = file.clone();
tmp_file.set_extension("tmp");
spawn_blocking(move || {
fs::create_dir_all(dir)?;
fs::write(&file, &data)
fs::write(&tmp_file, &data).and_then(|()| fs::rename(tmp_file, file))
})
}
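The change above switches `save_peers_to_disk` to a write-then-rename pattern. A minimal stdlib-only sketch of the same idea (the function name is illustrative, not the `cuprated` API):

```rust
use std::{fs, io, path::Path};

// Write `data` to `<file>.tmp`, then rename it over `file`. On POSIX
// filesystems, rename within one filesystem is atomic, so a crash
// mid-write can leave a stale `.tmp` file behind but never a truncated
// target file.
fn atomic_write(file: &Path, data: &[u8]) -> io::Result<()> {
    let tmp = file.with_extension("tmp");
    fs::write(&tmp, data)?;
    fs::rename(&tmp, file)
}
```

This pairs with the fallback in the previous hunk: even if the process dies while saving, the next startup reads either the old complete list or the new complete list, never a half-written one.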

View file

@@ -108,7 +108,8 @@ impl<N: NetworkZone> Service<AddressBookRequest<N>> for DummyAddressBook {
AddressBookRequest::GetBan(_) => AddressBookResponse::GetBan {
unban_instant: None,
},
AddressBookRequest::PeerlistSize
AddressBookRequest::Peerlist
| AddressBookRequest::PeerlistSize
| AddressBookRequest::ConnectionCount
| AddressBookRequest::SetBan(_)
| AddressBookRequest::GetBans

View file

@@ -45,7 +45,7 @@ impl<N: NetworkZone> WeakClient<N> {
/// Create a [`WeakBroadcastClient`] from this [`WeakClient`].
///
/// See the docs for [`WeakBroadcastClient`] for what this type can do.
pub fn broadcast_client(&mut self) -> WeakBroadcastClient<'_, N> {
pub const fn broadcast_client(&mut self) -> WeakBroadcastClient<'_, N> {
WeakBroadcastClient(self)
}
}

View file

@@ -6,7 +6,7 @@ use cuprate_wire::{CoreSyncData, PeerListEntryBase};
use crate::{
client::InternalPeerID,
handles::ConnectionHandle,
types::{BanState, ConnectionInfo, SetBan},
types::{BanState, ConnectionInfo, Peerlist, SetBan},
NetZoneAddress, NetworkAddressIncorrectZone, NetworkZone,
};
@@ -115,6 +115,9 @@ pub enum AddressBookRequest<Z: NetworkZone> {
/// Gets the specified number of white peers, or less if we don't have enough.
GetWhitePeers(usize),
/// Get info on all peers, white & grey.
Peerlist,
/// Get the amount of white & grey peers.
PeerlistSize,
@@ -152,6 +155,9 @@ pub enum AddressBookResponse<Z: NetworkZone> {
/// Response to [`AddressBookRequest::GetWhitePeers`].
Peers(Vec<ZoneSpecificPeerListEntryBase<Z::Addr>>),
/// Response to [`AddressBookRequest::Peerlist`].
Peerlist(Peerlist<Z::Addr>),
/// Response to [`AddressBookRequest::PeerlistSize`].
PeerlistSize { white: usize, grey: usize },

View file

@@ -5,7 +5,7 @@ use std::time::{Duration, Instant};
use cuprate_pruning::PruningSeed;
use cuprate_types::{AddressType, ConnectionState};
use crate::NetZoneAddress;
use crate::{NetZoneAddress, ZoneSpecificPeerListEntryBase};
/// Data within [`crate::services::AddressBookRequest::SetBan`].
pub struct SetBan<A: NetZoneAddress> {
@@ -94,3 +94,10 @@ pub struct Span<A: NetZoneAddress> {
pub speed: u32,
pub start_block_height: u64,
}
/// Used in RPC's `/get_peer_list`.
#[derive(Clone, Debug, Default, PartialEq, Eq)]
pub struct Peerlist<A: NetZoneAddress> {
pub white: Vec<ZoneSpecificPeerListEntryBase<A>>,
pub grey: Vec<ZoneSpecificPeerListEntryBase<A>>,
}

View file

@@ -159,6 +159,13 @@ pub struct BroadcastSvc<N: NetworkZone> {
tx_broadcast_channel_inbound: broadcast::Sender<BroadcastTxInfo<N>>,
}
impl<N: NetworkZone> BroadcastSvc<N> {
/// Create a mock [`BroadcastSvc`] that does nothing, useful for testing.
pub fn mock() -> Self {
init_broadcast_channels(BroadcastConfig::default()).0
}
}
impl<N: NetworkZone> Service<BroadcastRequest<N>> for BroadcastSvc<N> {
type Response = ();
type Error = std::convert::Infallible;

View file

@@ -98,10 +98,7 @@ where
/// Connects to random seeds to get peers and immediately disconnects
#[instrument(level = "info", skip(self))]
#[expect(
clippy::significant_drop_in_scrutinee,
clippy::significant_drop_tightening
)]
#[expect(clippy::significant_drop_tightening)]
async fn connect_to_random_seeds(&mut self) -> Result<(), OutboundConnectorError> {
let seeds = self
.config
@@ -161,7 +158,6 @@ where
tokio::spawn(
async move {
#[expect(clippy::significant_drop_in_scrutinee)]
if let Ok(Ok(peer)) = timeout(HANDSHAKE_TIMEOUT, connection_fut).await {
drop(new_peers_tx.send(peer).await);
}

View file

@@ -15,7 +15,7 @@ dummy = ["dep:cuprate-helper", "dep:futures"]
[dependencies]
cuprate-epee-encoding = { workspace = true, default-features = false }
cuprate-json-rpc = { workspace = true, default-features = false }
cuprate-rpc-types = { workspace = true, features = ["serde", "epee"], default-features = false }
cuprate-rpc-types = { workspace = true, features = ["serde", "epee", "from"], default-features = false }
cuprate-helper = { workspace = true, features = ["asynch"], default-features = false, optional = true }
anyhow = { workspace = true }

View file

@@ -59,7 +59,7 @@ The error type must always be [`anyhow::Error`].
The `RpcHandler` must also hold some state that is required
for RPC server operation.
The only state currently needed is [`RpcHandler::restricted`], which determines if an RPC
The only state currently needed is [`RpcHandler::is_restricted`], which determines if an RPC
server is restricted or not, and thus, if some endpoints/methods are allowed or not.
# Unknown endpoint behavior

View file

@@ -72,7 +72,13 @@ macro_rules! generate_endpoints_inner {
paste::paste! {
{
// Check if restricted.
if [<$variant Request>]::IS_RESTRICTED && $handler.restricted() {
//
// INVARIANT:
// The RPC handler functions in `cuprated` depend on this line existing,
// the functions themselves do not check if they are being called
// from an (un)restricted context. This line must be here or all
// methods will be allowed to be called freely.
if [<$variant Request>]::IS_RESTRICTED && $handler.is_restricted() {
// TODO: mimic `monerod` behavior.
return Err(StatusCode::FORBIDDEN);
}

View file

@@ -1,15 +1,10 @@
//! JSON-RPC 2.0 endpoint route functions.
//---------------------------------------------------------------------------------------------------- Import
use std::borrow::Cow;
use axum::{extract::State, http::StatusCode, Json};
use tower::ServiceExt;
use cuprate_json_rpc::{
error::{ErrorCode, ErrorObject},
Id, Response,
};
use cuprate_json_rpc::{Id, Response};
use cuprate_rpc_types::{
json::{JsonRpcRequest, JsonRpcResponse},
RpcCallValue,
@@ -37,16 +32,18 @@ pub(crate) async fn json_rpc<H: RpcHandler>(
// Return early if this RPC server is restricted and
// the requested method is only for non-restricted RPC.
if request.body.is_restricted() && handler.restricted() {
let error_object = ErrorObject {
code: ErrorCode::ServerError(-1 /* TODO */),
message: Cow::Borrowed("Restricted. TODO: mimic monerod message"),
data: None,
};
let response = Response::err(id, error_object);
return Ok(Json(response));
//
// INVARIANT:
// The RPC handler functions in `cuprated` depend on this line existing,
// the functions themselves do not check if they are being called
// from an (un)restricted context. This line must be here or all
// methods will be allowed to be called freely.
if request.body.is_restricted() && handler.is_restricted() {
// The error when a restricted JSON-RPC method is called as per:
//
// - <https://github.com/monero-project/monero/blob/893916ad091a92e765ce3241b94e706ad012b62a/contrib/epee/include/net/http_server_handlers_map2.h#L244-L252>
// - <https://github.com/monero-project/monero/blob/cc73fe71162d564ffda8e549b79a350bca53c454/src/rpc/core_rpc_server.h#L188>
return Ok(Json(Response::method_not_found(id)));
}
// Send request.
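The INVARIANT comment in this hunk is the crux of the change: restriction is enforced once, centrally, before dispatch, because the handler functions never check it themselves. A minimal sketch of that gate follows; the struct, return values, and error string are illustrative, not the actual `cuprate-rpc-interface` API:

```rust
// A toy handler that only knows whether its server is restricted,
// mirroring `RpcHandler::is_restricted` in spirit.
struct Handler {
    restricted: bool,
}

impl Handler {
    fn is_restricted(&self) -> bool {
        self.restricted
    }
}

// Central gate: a restricted method on a restricted server is rejected
// before any handler code runs. The handler body never re-checks this,
// so removing the gate would expose every method, as the INVARIANT warns.
fn dispatch(handler: &Handler, method_is_restricted: bool) -> Result<&'static str, &'static str> {
    if method_is_restricted && handler.is_restricted() {
        return Err("Method not found");
    }
    Ok("handled")
}
```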

View file

@@ -6,4 +6,4 @@
pub(crate) mod bin;
pub(crate) mod fallback;
pub(crate) mod json_rpc;
pub(crate) mod other;
pub(crate) mod other_json;

View file

@@ -75,7 +75,13 @@ macro_rules! generate_endpoints_inner {
paste::paste! {
{
// Check if restricted.
if [<$variant Request>]::IS_RESTRICTED && $handler.restricted() {
//
// INVARIANT:
// The RPC handler functions in `cuprated` depend on this line existing,
// the functions themselves do not check if they are being called
// from an (un)restricted context. This line must be here or all
// methods will be allowed to be called freely.
if [<$variant Request>]::IS_RESTRICTED && $handler.is_restricted() {
// TODO: mimic `monerod` behavior.
return Err(StatusCode::FORBIDDEN);
}

View file

@@ -4,7 +4,7 @@
use axum::Router;
use crate::{
route::{bin, fallback, json_rpc, other},
route::{bin, fallback, json_rpc, other_json},
rpc_handler::RpcHandler,
};
@@ -140,36 +140,36 @@ generate_router_builder! {
json_rpc => "/json_rpc" => json_rpc::json_rpc => (get, post),
// Other JSON routes.
other_get_height => "/get_height" => other::get_height => (get, post),
other_getheight => "/getheight" => other::get_height => (get, post),
other_get_transactions => "/get_transactions" => other::get_transactions => (get, post),
other_gettransactions => "/gettransactions" => other::get_transactions => (get, post),
other_get_alt_blocks_hashes => "/get_alt_blocks_hashes" => other::get_alt_blocks_hashes => (get, post),
other_is_key_image_spent => "/is_key_image_spent" => other::is_key_image_spent => (get, post),
other_send_raw_transaction => "/send_raw_transaction" => other::send_raw_transaction => (get, post),
other_sendrawtransaction => "/sendrawtransaction" => other::send_raw_transaction => (get, post),
other_start_mining => "/start_mining" => other::start_mining => (get, post),
other_stop_mining => "/stop_mining" => other::stop_mining => (get, post),
other_mining_status => "/mining_status" => other::mining_status => (get, post),
other_save_bc => "/save_bc" => other::save_bc => (get, post),
other_get_peer_list => "/get_peer_list" => other::get_peer_list => (get, post),
other_get_public_nodes => "/get_public_nodes" => other::get_public_nodes => (get, post),
other_set_log_hash_rate => "/set_log_hash_rate" => other::set_log_hash_rate => (get, post),
other_set_log_level => "/set_log_level" => other::set_log_level => (get, post),
other_set_log_categories => "/set_log_categories" => other::set_log_categories => (get, post),
other_get_transaction_pool => "/get_transaction_pool" => other::get_transaction_pool => (get, post),
other_get_transaction_pool_hashes => "/get_transaction_pool_hashes" => other::get_transaction_pool_hashes => (get, post),
other_get_transaction_pool_stats => "/get_transaction_pool_stats" => other::get_transaction_pool_stats => (get, post),
other_set_bootstrap_daemon => "/set_bootstrap_daemon" => other::set_bootstrap_daemon => (get, post),
other_stop_daemon => "/stop_daemon" => other::stop_daemon => (get, post),
other_get_net_stats => "/get_net_stats" => other::get_net_stats => (get, post),
other_get_limit => "/get_limit" => other::get_limit => (get, post),
other_set_limit => "/set_limit" => other::set_limit => (get, post),
other_out_peers => "/out_peers" => other::out_peers => (get, post),
other_in_peers => "/in_peers" => other::in_peers => (get, post),
other_get_outs => "/get_outs" => other::get_outs => (get, post),
other_update => "/update" => other::update => (get, post),
other_pop_blocks => "/pop_blocks" => other::pop_blocks => (get, post),
other_get_height => "/get_height" => other_json::get_height => (get, post),
other_getheight => "/getheight" => other_json::get_height => (get, post),
other_get_transactions => "/get_transactions" => other_json::get_transactions => (get, post),
other_gettransactions => "/gettransactions" => other_json::get_transactions => (get, post),
other_get_alt_blocks_hashes => "/get_alt_blocks_hashes" => other_json::get_alt_blocks_hashes => (get, post),
other_is_key_image_spent => "/is_key_image_spent" => other_json::is_key_image_spent => (get, post),
other_send_raw_transaction => "/send_raw_transaction" => other_json::send_raw_transaction => (get, post),
other_sendrawtransaction => "/sendrawtransaction" => other_json::send_raw_transaction => (get, post),
other_start_mining => "/start_mining" => other_json::start_mining => (get, post),
other_stop_mining => "/stop_mining" => other_json::stop_mining => (get, post),
other_mining_status => "/mining_status" => other_json::mining_status => (get, post),
other_save_bc => "/save_bc" => other_json::save_bc => (get, post),
other_get_peer_list => "/get_peer_list" => other_json::get_peer_list => (get, post),
other_get_public_nodes => "/get_public_nodes" => other_json::get_public_nodes => (get, post),
other_set_log_hash_rate => "/set_log_hash_rate" => other_json::set_log_hash_rate => (get, post),
other_set_log_level => "/set_log_level" => other_json::set_log_level => (get, post),
other_set_log_categories => "/set_log_categories" => other_json::set_log_categories => (get, post),
other_get_transaction_pool => "/get_transaction_pool" => other_json::get_transaction_pool => (get, post),
other_get_transaction_pool_hashes => "/get_transaction_pool_hashes" => other_json::get_transaction_pool_hashes => (get, post),
other_get_transaction_pool_stats => "/get_transaction_pool_stats" => other_json::get_transaction_pool_stats => (get, post),
other_set_bootstrap_daemon => "/set_bootstrap_daemon" => other_json::set_bootstrap_daemon => (get, post),
other_stop_daemon => "/stop_daemon" => other_json::stop_daemon => (get, post),
other_get_net_stats => "/get_net_stats" => other_json::get_net_stats => (get, post),
other_get_limit => "/get_limit" => other_json::get_limit => (get, post),
other_set_limit => "/set_limit" => other_json::set_limit => (get, post),
other_out_peers => "/out_peers" => other_json::out_peers => (get, post),
other_in_peers => "/in_peers" => other_json::in_peers => (get, post),
other_get_outs => "/get_outs" => other_json::get_outs => (get, post),
other_update => "/update" => other_json::update => (get, post),
other_pop_blocks => "/pop_blocks" => other_json::pop_blocks => (get, post),
// Binary routes.
bin_get_blocks => "/get_blocks.bin" => bin::get_blocks => (get, post),

View file

@@ -46,5 +46,5 @@ pub trait RpcHandler:
///
/// will automatically be denied access when using the
/// [`axum::Router`] provided by [`RouterBuilder`](crate::RouterBuilder).
fn restricted(&self) -> bool;
fn is_restricted(&self) -> bool;
}

View file

@@ -31,7 +31,7 @@ use crate::rpc_handler::RpcHandler;
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
#[cfg_attr(feature = "serde", derive(Deserialize, Serialize))]
pub struct RpcHandlerDummy {
/// Should this RPC server be [restricted](RpcHandler::restricted)?
/// Should this RPC server be [restricted](RpcHandler::is_restricted)?
///
/// The dummy will honor this [`bool`]
/// on restricted methods/endpoints.
@@ -39,7 +39,7 @@ pub struct RpcHandlerDummy {
}
impl RpcHandler for RpcHandlerDummy {
fn restricted(&self) -> bool {
fn is_restricted(&self) -> bool {
self.restricted
}
}
@@ -59,6 +59,7 @@ impl Service<JsonRpcRequest> for RpcHandlerDummy {
#[expect(clippy::default_trait_access)]
let resp = match req {
Req::GetBlockTemplate(_) => Resp::GetBlockTemplate(Default::default()),
Req::GetBlockCount(_) => Resp::GetBlockCount(Default::default()),
Req::OnGetBlockHash(_) => Resp::OnGetBlockHash(Default::default()),
Req::SubmitBlock(_) => Resp::SubmitBlock(Default::default()),

Some files were not shown because too many files have changed in this diff.