Compare commits

...

48 commits

Author SHA1 Message Date
hinto.janai
31cd1b28f2
fix tests
2024-10-10 21:01:51 -04:00
hinto.janai
b931ffc7ad
apply diffs 2024-10-10 20:46:18 -04:00
hinto-janai
9923d8d69d
cuprated: internal signatures required for RPC (#297)
* add request methods

* add p2p messages

* add txpool msgs

* add blockchain_context msgs

* add blockchain msgs

* fmt

* blockchain_manager msgs

* blockchain manager msg types

* add DB fn signatures

* add statics module

* p2p msg changes, docs

* txpool docs/types

* blockchain docs/types

* `AlternateChains`, docs

* fixes

* remove blockchain write handle, fix docs

* remove `BlockchainReadRequest::Difficulty`

* remove `BlockchainReadRequest::MinerData`

* fix p2p ban types

* `CurrentRxVm` -> `CurrentRxVms`

* storage: remove `Clone` off write handle

* Update p2p/p2p-core/src/ban.rs

Co-authored-by: Boog900 <boog900@tutanota.com>

* fix merge

---------

Co-authored-by: Boog900 <boog900@tutanota.com>
2024-10-08 22:57:09 +01:00
8be369846e
cuprated: Blockchain Manager (#274)
* add cuprated skeleton

* fmt and add deny exception

* add main chain batch handler

* add blockchain init

* very rough block manager

* misc changes

* move more config values

* add new tables & types

* add function to fully add an alt block

* resolve current todo!s

* add new requests

* WIP: starting re-orgs

* add last service request

* commit Cargo.lock

* add test

* more docs + cleanup + alt blocks request

* clippy + fmt

* document types

* move tx_fee to helper

* more doc updates

* fmt

* fix imports

* remove config files

* fix merge errors

* fix generated coins

* handle more p2p requests + alt blocks

* clean up handler code

* add function for incoming blocks

* add docs to handler functions

* broadcast new blocks + add commands

* add fluffy block handler

* fix new block handling

* small cleanup

* increase outbound peer count

* fix merge

* clean up the blockchain manger

* add more docs + cleanup imports

* fix typo

* fix doc

* remove unrelated changes

* improve interface globals

* manger -> manager

* enums instead of bools (see the sketch after this list)

* move chain service to separate file

* more review fixes

* add link to issue

* fix syncer + update comment

* fmt
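
One review item above, "enums instead of bools", names a common Rust refactor; a minimal illustration of the pattern with invented names, not the actual `cuprated` types:

```rust
/// Before: a bare bool is opaque at the call site, e.g. `handle_block(b, true)`.
fn handle_block_bool(block_id: u64, broadcast: bool) {
    if broadcast {
        println!("broadcasting block {block_id}");
    }
}

/// After: the intent is spelled out at every call site.
enum BroadcastPolicy {
    Broadcast,
    KeepLocal,
}

fn handle_block(block_id: u64, policy: BroadcastPolicy) {
    if matches!(policy, BroadcastPolicy::Broadcast) {
        println!("broadcasting block {block_id}");
    }
}

fn main() {
    handle_block_bool(1, true);
    handle_block(2, BroadcastPolicy::Broadcast);
}
```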
2024-10-08 18:26:07 +01:00
Dmitry Holodov
00bdd6ffaa
cryptonight in pure Rust (#271)
* removed FORCE_USE_HEAP from C code

* removed unused headers

* simplifying C code to better understand it

* more c code simplifications

* removed conditional code for the v4 register size

* got one version of keccak working

* not so important hash_process unwound

* got keccak working using the sha3 lib

* hash state unions created

* slow hash through VARIANT1_PORTABLE_INIT is working

* variant 2 init working

* ported version of random_math_init compiling, but not yet passing tests

* fixed hash algorithm, tests working

* formatting

* more macro reduction

* monero AES working in Rust

* fixed AES key expansion expected key size

* first 75% of slow hash converted and working correctly

* adjusted key format for aesb_single_round

* converted some macros to functions

* variant2_integer_math working with test cases

* broke sqrt out of variant2_integer_math for code coverage

* variant2_portable_shuffle_add working with unit tests

* added skein and jh hashes

* 524287 iteration loop producing correct results

* all tests working in Rust

* subarray macros added

* aes simplifications

* code cleanups

* code cleanups part 2

* removed unused blake C code as prep for port to rust

* original blake algorithm in pure rust is working

* converted macro in compress to a lambda

* added module documentation for blake256

* Gave Blake256 a Digest trait

* adding more documentation

* more documentation and cleanup

* more slow hash tests

* removed C code

* misc refactoring

* fix

* lint fix

* additional linting

* downgraded deps to latest stable versions

* made thiserror a workspace dep

* removed commented dead code

* lint fixes

* fixed lint issues in test code

* limited util macro scopes to the crate

* Reformatted dependencies using:
group_imports = "StdExternalCrate"
reorder_modules = true
reorder_impl_items = true
imports_granularity = "crate"

* converted util macros to inline functions

* hex dep comes from workspace

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* panic subarray tests

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* updates to doc comments

* removes extra parens in hash_v4.rs

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* early return to remove indentation in hash_v2.rs

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* grouping expect annotations in hash_v2.rs

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* use matches macro to simplify code hash_v4.rs

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* remove extra paren in hash_v4.rs

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* early return to remove indentation in hash_v2.rs

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* minor comment fixes

* early loop continue to remove indentation in hash_v4.rs

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* convert non-capturing lambda to fn in hash_v2.rs

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* another lambda to fn conversion in hash_v2.rs

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* lambda to fn conversion in cnaes.rs

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* 2nd lambda to fn conversion in cnaes.rs

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* test lambdas in lib.rs are now functions

* round_fwd optimized

* added myself as an author

* fixed place that needed wrapping_add

* clippy allow->expect change needed after merging master

* moving state to u128

* round_fwd changes sped up fuzzer by 10%

* 1st working version using u128 for long state

* text converted to u128 array

* removed LongState union

* simplified long_state's initialization

* aes round keys now use u128

* CRYPTONIGHT_SBOX is now u32 instead of u8

* cleaner hash_v4 loop unrolling semantics (same performance)

* switched to a better maintained loop unrolling macro
2024-10-08 16:03:56 +01:00
ca882512fc
P2P: move seed nodes to config (#306)
* move seed nodes to config

* fix tests
2024-10-07 23:36:46 +01:00
hinto-janai
80bfe0a34c
types: JSON representation types (#300)
* add `cuprate_types::json`

* docs

* `Option` -> flattened enums + prefix structs (see the sketch after this list)

* output enum

* docs

* todo!() epee impl

* cuprate-rpc-types: add comments

* cuprate-rpc-types: common `TxEntry` fields into prefix struct

* remove epee

* docs

* add `hex` module

* `From` serai types

* cleanup

* proofs

* tx from impls

* fix tx timelock

* add block value tests

* add ringct types

* add tx_v1, tx_rct_3 test

* clsag bulletproofs tx test

* clsag bulletproofs plus tx test

* docs

* fix hex bytes

* typo

* docs
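
The "`Option` -> flattened enums + prefix structs" item above refers to a common serde modeling pattern: shared fields live in a prefix struct, and mutually exclusive fields live in an enum flattened into it, rather than several `Option` fields that could all be `None`. A minimal sketch of the pattern (invented types, not the actual `cuprate_types::json` definitions):

```rust
use serde::{Deserialize, Serialize};

/// Fields shared by every transaction version.
#[derive(Serialize, Deserialize)]
struct TxPrefix {
    version: u8,
    unlock_time: u64,
    /// Exactly one variant's fields appear, flattened into the same JSON object.
    #[serde(flatten)]
    variant: TxVariant,
}

/// Replaces `Option<Vec<String>>` + `Option<String>`: the type system now
/// guarantees exactly one of the two field sets is present.
#[derive(Serialize, Deserialize)]
#[serde(untagged)]
enum TxVariant {
    V2 { rct_signatures: String },
    V1 { signatures: Vec<String> },
}
```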
2024-10-05 01:47:44 +01:00
hinto-janai
a003e0588d
Add constants/ crate (#280)
* add `constants/`

* ci: add `A-constants` labeler

* add modules, move `cuprate_helper::constants`

* add `genesis.rs`

* `rpc.rs` docs

* remove todos

* `CRYPTONOTE_MAX_BLOCK_HEIGHT`

* add genesis data for all networks

* features

* fix feature cfgs

* test fixes

* add to architecture book

* fix comment

* remove `genesis` add other constants

* fixes

* revert

* fix
2024-10-02 18:51:58 +01:00
521bf877db
P2P: give the protocol handler access to the peer info (#302)
* give the protocol handler access to the peer info

* add trait alias

* clippy + fmt

* update doc

* simplify trait aliases

* use tower `Shared`

* clean import

* fmt

* Update Cargo.toml

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* fix merge

---------

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>
2024-09-30 23:19:53 +01:00
6da9d2d734
P2P: remove peer sync service (#299)
* remove peer sync service

* change `p2p` to not use the peer sync service

* fmt & clippy

* doc updates

* review fixes

* add a little more detail to comment
2024-09-30 22:15:48 +01:00
hinto-janai
12bbadd749
cuprated: add constants & statics modules (#301)
* add modules

* docs

* test

* rename

* tabs -> spaces
2024-09-28 01:41:34 +01:00
a072d44a0d
P2P: fix connection disconnect on Client drop (#298)
fix connection disconnect on `Client` drop
2024-09-25 20:56:57 +01:00
hinto-janai
88605b081f
books/architecture: port database design document (#267)
* add chapters

* add files, intro

* db abstraction

* backends

* abstraction

* syncing

* serde

* issues

* common/types

* common/ops

* common/service

* service diagram

* service/resize

* service/thread-model

* service/shutdown

* storage/blockchain

* update md files

* cleanup

* fixes

* update for https://github.com/Cuprate/cuprate/pull/290

* review fix
2024-09-24 17:23:22 +01:00
hinto-janai
5eb712f4de
cargo upgrade (#296)
cargo upgrade

Co-authored-by: Boog900 <boog900@tutanota.com>
2024-09-22 19:34:20 +01:00
hinto-janai
848a6a71c4
p2p/p2p-core: enable workspace lints (#288)
* p2p-core: enable workspace lints

* fmt

* fix tests

* fixes

* fixes

* fixes

* expect reason
2024-09-21 01:37:06 +01:00
hinto-janai
f4c88b6f05
p2p: enable workspace lints (#289)
* p2p: enable workspace lints

* fmt

* fixes

* fixes

* fixes

* review fixes
2024-09-21 01:36:39 +01:00
hinto-janai
c840053854
consensus: enable workspace lints (#295)
* consensus: enable workspace lints

* rules/fast-sync: enable workspace lints

* typos

* fixes

* `PoW` -> proof-of-work
2024-09-21 01:32:03 +01:00
hinto-janai
57af45e01d
epee-encoding: enable workspace lints (#294)
* epee-encoding: enable workspace lints

* fmt

* fixes

* fixes

* fmt
2024-09-20 15:13:55 +01:00
hinto-janai
5588671501
levin: enable workspace lints (#292)
* levin: enable workspace lints

* use `drop()`

* dep fixes
2024-09-20 15:11:27 +01:00
hinto-janai
19150df355
p2p/dandelion-tower: enable workspace lints (#287)
* dandelion-tower: add/fix workspace lints

* fmt

* fixes

* todos

* fixes

* fixes

* expect reason
2024-09-20 14:36:34 +01:00
Asurar
e7c6bba63d
Database: Split BlockBlobs table + Miscellaneous fixes (#290)
* Split `BlockBlobs` database table + misc fixes

- Split the `BlockBlobs` database table into two new tables: `BlockHeaderBlobs` and `BlockTxsHashes`.
- `add_block`, `pop_block` and `get_block_extended_header` have been updated accordingly.
- `VerifiedBlockInformation` now has a `mining_tx_index: u64` field.
- Made `cuprate-helper`'s `thread` feature a dependency of the `service` feature.
- Updated the service tests' output mapping; it is now a full iterator.
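
A rough sketch of the table split described above, modeled with in-memory maps rather than the real `cuprate_database` tables (the value types are simplified):

```rust
use std::collections::HashMap;

type BlockHeight = usize;

/// Stand-ins for the two tables that replace `BlockBlobs`;
/// the real ones are database tables, not `HashMap`s.
#[derive(Default)]
struct Tables {
    /// `BlockHeaderBlobs`: height -> serialized block header.
    block_header_blobs: HashMap<BlockHeight, Vec<u8>>,
    /// `BlockTxsHashes`: height -> the block's transaction hashes.
    block_txs_hashes: HashMap<BlockHeight, Vec<[u8; 32]>>,
}

impl Tables {
    /// `add_block` now writes the two halves separately.
    fn add_block(&mut self, height: BlockHeight, header: Vec<u8>, txs: Vec<[u8; 32]>) {
        self.block_header_blobs.insert(height, header);
        self.block_txs_hashes.insert(height, txs);
    }

    /// `pop_block` must remove from both tables to stay consistent.
    fn pop_block(&mut self, height: BlockHeight) -> Option<(Vec<u8>, Vec<[u8; 32]>)> {
        Some((
            self.block_header_blobs.remove(&height)?,
            self.block_txs_hashes.remove(&height)?,
        ))
    }
}
```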

* fix fmt

* Update storage/blockchain/src/types.rs

Co-authored-by: Boog900 <boog900@tutanota.com>

* Update storage/blockchain/src/ops/block.rs

Co-authored-by: Boog900 <boog900@tutanota.com>

* fix warning

---------

Co-authored-by: Boog900 <boog900@tutanota.com>
2024-09-19 20:05:41 +01:00
4169c45c58
Blockchain: add alt-block handling (#260)
* add new tables & types

* add function to fully add an alt block

* resolve current todo!s

* add new requests

* WIP: starting re-orgs

* add last service request

* commit Cargo.lock

* add test

* more docs + cleanup + alt blocks request

* clippy + fmt

* document types

* move tx_fee to helper

* more doc updates

* fmt

* fix imports

* fix fee

* Apply suggestions from code review

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* remove default features from `cuprate-helper`

* review fixes

* fix find_block

* add a test and fix some issues in chain history

* fix clippy

* fmt

* Apply suggestions from code review

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* add dev dep

* cargo update

* move `flush_alt_blocks`

* review fixes

* more review fixes

* fix clippy

* remove INVARIANT comments

---------

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>
2024-09-19 16:55:28 +01:00
hinto-janai
e3a918bca5
wire: enable workspace lints (#291)
* wire: enable workspace lints

* revert match arm formatting
2024-09-18 23:19:32 +01:00
hinto-janai
a1267619ef
p2p/address-book: enable workspace lints (#286)
* address-book: enable workspace lints

* fix

* fixes
2024-09-18 23:18:31 +01:00
hinto-janai
2afc0e8373
test-utils: enable workspace lints (#283)
* test-utils: enable workspace lints + fix

* `allow` -> `expect`

* fixes
2024-09-18 23:14:31 +01:00
hinto-janai
b9842fcb18
fixed-bytes: enable workspace lints (#293) 2024-09-18 23:12:35 +01:00
hinto-janai
8b4b403c5c
pruning: enable workspace lints (#284)
pruning: enable/fix workspace lints
2024-09-18 22:44:23 +01:00
hinto-janai
6502729d8c
lints: replace allow with expect (#285)
* cargo.toml: add `allow_attributes` lint

* fix lints

* fixes

* fmt

* fix docs

* fix docs

* fix expect msg
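
For context: `#[expect]` behaves like `#[allow]`, but the compiler warns when the expected lint no longer fires, so stale suppressions can't accumulate; the `allow_attributes` clippy lint added above then denies plain `#[allow]`. A minimal illustration:

```rust
// `#[allow]` silences a lint unconditionally; if the code is later fixed,
// the now-useless attribute lingers unnoticed.
#[allow(dead_code)]
fn old_style() {}

// `#[expect]` warns if the lint does NOT fire, and `reason` documents
// why the suppression exists in the first place.
#[expect(dead_code, reason = "kept for an upcoming feature")]
fn new_style() {}

fn main() {}
```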
2024-09-18 21:31:08 +01:00
Asurar
2291a96795
P2P: Add latest clearnet mainnet seed nodes. (#281)
Add monerod's latest clearnet mainnet seed nodes
2024-09-14 14:01:43 +01:00
90027143f0
consensus: misc fixes (#276)
* fix decoy checks + fee calculation

* fmt
2024-09-10 01:18:26 +01:00
49d1344aa1
Storage: use saturating_add for cumulative_generated_coins (#275)
* use `saturating_add` for `cumulative_generated_coins`

* cargo fmt
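
For context, `saturating_add` clamps at the numeric bound instead of panicking in debug builds or wrapping in release builds, a safer default for a running total like `cumulative_generated_coins`:

```rust
fn main() {
    let cumulative: u64 = u64::MAX - 10;
    let generated: u64 = 100;

    // A plain `+` would panic (debug) or wrap to a tiny value (release);
    // `saturating_add` pins the total at `u64::MAX` instead.
    assert_eq!(cumulative.saturating_add(generated), u64::MAX);
}
```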
2024-09-10 01:15:04 +01:00
Asurar
967537fae1
P2P: Implement incoming ping request handling over maximum inbound limit (#277)
Implement incoming ping request handling over maximum inbound limit

- If the maximum inbound connection semaphore reaches its limit, the `inbound_server` fn will
spawn a tokio task to check whether the node only wanted to ping us. If so, we respond; otherwise we
drop the connection.
- Added some documentation to the `inbound_server` fn.
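
A simplified sketch of the flow described above, with invented names (`respond_if_ping` and the surrounding scaffolding are not the real `inbound_server` code): when no inbound permit is available, the connection gets one brief chance to be a ping before being dropped.

```rust
use std::{sync::Arc, time::Duration};
use tokio::{net::TcpStream, sync::Semaphore};

/// Hypothetical helper: read one message and, if it is a ping request,
/// write back a pong. (Protocol details elided.)
async fn respond_if_ping(_stream: TcpStream) {}

async fn handle_inbound(stream: TcpStream, inbound_permits: Arc<Semaphore>) {
    match inbound_permits.try_acquire_owned() {
        // Under the inbound limit: handle the peer normally.
        Ok(_permit) => { /* spawn the full connection task (elided) */ }
        // Over the limit: spend a short-lived task answering a possible
        // ping, then drop the connection either way.
        Err(_) => {
            tokio::spawn(async move {
                let _ =
                    tokio::time::timeout(Duration::from_secs(5), respond_if_ping(stream)).await;
                // `stream` is dropped here, closing the connection.
            });
        }
    }
}
```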
2024-09-09 23:12:06 +01:00
hinto-janai
01625535fa
book/architecture: add resource index (#268)
* resource index

* index

* cap

* cleanup
2024-09-08 18:31:58 +01:00
hinto-janai
92800810d9
cuprated: initial RPC module skeleton (#262)
* readme

* cuprated: add all workspace deps

* cuprated: add lints

* !!

* add state, fn signatures

* fixes

* error signatures

* interface: handle json-rpc concepts

* split rpc calls into 3 `Service`s

* interface: extract out to `RpcService`

* fix merge

* remove crate lints

* use `BoxFuture`

* rpc/interface: impl `thiserror::Error`

* split state from main handler struct

* cleanup

* fix imports

* replace `RpcError` with `anyhow::Error`

* interface: update error

* cuprated: update error type
2024-09-08 15:52:17 +01:00
hinto-janai
4653ac5884
rpc/interface: separate RpcHandler into 3 tower::Services (#266)
* apply diff

* cleanup

* fix test
2024-09-05 16:53:16 +01:00
hinto-janai
0941f68efc
helper: fix clippy (#265)
* helper: fix lints

* fix tests
2024-09-02 22:46:11 +01:00
hinto-janai
eead49beb0
lints: opt in manual lint crates (#263)
* cargo.toml: transfer existing lints

* rpc/interface: lints

* rpc/json-rpc: lints

* rpc/types: lints

* storage/blockchain: lints

* rpc/types: fix lints

* cargo.toml: fix lint group priority

* storage/blockchain: fix lints

* fix misc lints

* storage/database: fixes

* storage/txpool: opt in lints + fixes

* types: opt in + fixes

* helper: opt in + fixes

* types: remove borsh

* rpc/interface: fix test

* test fixes

* database: fix lints

* fix lint

* tabs -> spaces

* blockchain: `config/` -> `config.rs`
2024-09-02 18:12:54 +01:00
hinto-janai
b837d350a4
workspace: add naming convention lints (#261)
* add lint to {Cargo,clippy}.toml

* `RandomXVM` -> `RandomXVm`

* epee: `TT` -> `T2`
2024-09-02 18:10:45 +01:00
hinto-janai
bec8cc0aa4
helper: add and use cast module (#264)
* helper: add `cast` module

* fix crates

* spacing
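
The `cast` module centralizes integer conversions such as `usize_to_u64` (used below in the blockchain manager code) so scattered `as` casts don't silently truncate. A sketch of the idea, not the exact implementation:

```rust
/// Lossless on every target Cuprate supports: `usize` is at most 64 bits.
#[inline]
pub const fn usize_to_u64(u: usize) -> u64 {
    u as u64
}

/// The reverse direction can truncate on 32-bit targets,
/// so it is checked rather than a bare `as` cast.
#[inline]
pub fn u64_to_usize(u: u64) -> usize {
    usize::try_from(u).expect("u64 did not fit in usize")
}
```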
2024-09-02 18:09:52 +01:00
fdd1689665
Storage: tx-pool database (#238)
* split the DB service abstraction

* fix ci

* misc changes

* init tx-pool DBs

* add some comments

* move more types to `/types`

* add some ops

* add config & more ops functions & open function

* add read & write svcs

* add more docs

* add write functions + docs

* fix merge

* fix test

* fix ci

* move `TxPoolWriteError`

* add more docs

* fix toml formatting

* fix some docs

* fix clippy

* review fixes

* update docs

* fix merge

* fix docs

* fix tests

* fix tests

* add back lints

* Update storage/txpool/README.md

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

---------

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>
2024-08-22 02:09:07 +01:00
8655a3f5e5
dandelion-tower: improve API (#257)
* init

* reduce the jobs handled by the dandelion pool

* fix docs

* resolve todo

* review changes

* Update p2p/dandelion-tower/src/pool/incoming_tx.rs

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* Update p2p/dandelion-tower/src/pool/incoming_tx.rs

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* `PId` -> `PeerId`

---------

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>
2024-08-22 01:18:44 +01:00
SyntheticBird
ccff75057e
Update Zed in ENVIRONMENT-ADVICE.md (#259)
Update ENVIRONMENT-ADVICE.md

Updated information on Zed editor
2024-08-22 00:33:21 +01:00
7207fbd17b
Binaries: add cuprated skeleton (#258)
* add cuprated skeleton

* fmt and add deny exception
2024-08-20 23:56:18 +01:00
hinto-janai
5648bf0da0
rpc: remove temporary lints (#255)
* rpc: remove temporary lints for types

* rpc: remove temporary lints for json-rpc

* rpc: remove temporary lints for interface

* cfgs `1 tab` -> `4 spaces`
2024-08-20 23:50:31 +01:00
hinto-janai
aeb070ae8d
Replace OnceLock + fn with LazyLock (#256)
* `consensus/`

* `helper/`

* `test-utils/`

* `storage/`

* fix docs + tests + lints

* decomposed_amount: remove `LazyLock`

* clippy
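
The refactor swaps the `OnceLock` + getter-function pattern for `std::sync::LazyLock` (stable since Rust 1.80), which couples the initializer to the static itself. A minimal before/after with an invented value:

```rust
use std::sync::{LazyLock, OnceLock};

// Before: a `OnceLock` plus a function that lazily initializes it.
static CACHE_OLD: OnceLock<Vec<u64>> = OnceLock::new();
fn cache_old() -> &'static Vec<u64> {
    CACHE_OLD.get_or_init(|| (0..100).collect())
}

// After: the initializer lives with the static; no getter needed.
static CACHE: LazyLock<Vec<u64>> = LazyLock::new(|| (0..100).collect());

fn main() {
    assert_eq!(cache_old()[1], CACHE[1]);
}
```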
2024-08-20 22:53:32 +01:00
hinto-janai
59adf6dcf8
std::mem::{size,align}_of -> {size,align}_of (#254) 2024-08-10 00:09:25 +01:00
hinto-janai
bca062d2f5
workspace: add 1.78..=1.80 lints (#253)
cargo.toml: add lints
2024-08-10 00:08:56 +01:00
hinto-janai
ca3b149b39
ci: fix book CI (#252)
* ci: fix book ci

* ci: add `--locked`
2024-08-09 20:44:53 +01:00
400 changed files with 16162 additions and 18820 deletions

.github/labeler.yml
@@ -56,6 +56,10 @@ A-cryptonight:
- changed-files:
- any-glob-to-any-file: cryptonight/**
A-constants:
- changed-files:
- any-glob-to-any-file: constants/**
A-storage:
- changed-files:
- any-glob-to-any-file: storage/**

@@ -4,8 +4,11 @@ name: Architecture mdBook
on:
push:
paths:
- 'books/architecture/**'
branches: ['main']
paths: ['books/architecture/**']
pull_request:
paths: ['books/architecture/**']
workflow_dispatch:
env:
# Version of `mdbook` to install.
@@ -30,8 +33,8 @@ jobs:
- name: Install mdBook
run: |
cargo install --version ${MDBOOK_VERSION} mdbook
cargo install --version ${MDBOOK_LAST_CHANGED_VERSION} mdbook-last-changed
cargo install --locked --version ${MDBOOK_VERSION} mdbook || echo "mdbook already exists"
cargo install --locked --version ${MDBOOK_LAST_CHANGED_VERSION} mdbook-last-changed || echo "mdbook-last-changed already exists"
- name: Build
run: mdbook build books/architecture

@@ -7,6 +7,7 @@ on:
paths:
- '**/Cargo.toml'
- '**/Cargo.lock'
workflow_dispatch:
env:
CARGO_TERM_COLOR: always

@@ -7,6 +7,7 @@ on:
paths:
- '**/Cargo.toml'
- '**/Cargo.lock'
workflow_dispatch:
env:
CARGO_TERM_COLOR: always

@@ -4,8 +4,11 @@ name: Monero mdBook
on:
push:
paths:
- 'books/protocol/**'
branches: ['main']
paths: ['books/protocol/**']
pull_request:
paths: ['books/protocol/**']
workflow_dispatch:
env:
# Version of `mdbook` to install.
@@ -30,8 +33,8 @@ jobs:
- name: Install mdBook
run: |
cargo install --version ${MDBOOK_VERSION} mdbook
cargo install --version ${MDBOOK_SVGBOB_VERSION} mdbook-svgbob
cargo install --locked --version ${MDBOOK_VERSION} mdbook || echo "mdbook already exists"
cargo install --locked --version ${MDBOOK_SVGBOB_VERSION} mdbook-svgbob || echo "mdbook-svgbob already exists"
- name: Build
run: mdbook build books/protocol

@@ -4,8 +4,11 @@ name: User mdBook
on:
push:
paths:
- 'books/user/**'
branches: ['main']
paths: ['books/user/**']
pull_request:
paths: ['books/user/**']
workflow_dispatch:
env:
# Version of `mdbook` to install.
@@ -30,8 +33,8 @@ jobs:
- name: Install mdBook
run: |
cargo install --version ${MDBOOK_VERSION} mdbook
cargo install --version ${MDBOOK_LAST_CHANGED_VERSION} mdbook-last-changed
cargo install --locked --version ${MDBOOK_VERSION} mdbook || echo "mdbook already exists"
cargo install --locked --version ${MDBOOK_LAST_CHANGED_VERSION} mdbook-last-changed || echo "mdbook-last-changed already exists"
- name: Build
run: mdbook build books/user

Cargo.lock

File diff suppressed because it is too large.

@@ -2,6 +2,8 @@
resolver = "2"
members = [
"binaries/cuprated",
"constants",
"consensus",
"consensus/fast-sync",
"consensus/rules",
@@ -47,47 +49,49 @@ opt-level = 1
opt-level = 3
[workspace.dependencies]
async-trait = { version = "0.1.74", default-features = false }
bitflags = { version = "2.4.2", default-features = false }
borsh = { version = "1.2.1", default-features = false }
bytemuck = { version = "1.14.3", default-features = false }
bytes = { version = "1.5.0", default-features = false }
anyhow = { version = "1.0.89", default-features = false }
async-trait = { version = "0.1.82", default-features = false }
bitflags = { version = "2.6.0", default-features = false }
borsh = { version = "1.5.1", default-features = false }
bytemuck = { version = "1.18.0", default-features = false }
bytes = { version = "1.7.2", default-features = false }
cfg-if = { version = "1.0.0", default-features = false }
clap = { version = "4.4.7", default-features = false }
chrono = { version = "0.4.31", default-features = false }
clap = { version = "4.5.17", default-features = false }
chrono = { version = "0.4.38", default-features = false }
crypto-bigint = { version = "0.5.5", default-features = false }
crossbeam = { version = "0.8.4", default-features = false }
const_format = { version = "0.2.33", default-features = false }
curve25519-dalek = { version = "4.1.3", default-features = false }
dashmap = { version = "5.5.3", default-features = false }
dirs = { version = "5.0.1", default-features = false }
futures = { version = "0.3.29", default-features = false }
futures = { version = "0.3.30", default-features = false }
hex = { version = "0.4.3", default-features = false }
hex-literal = { version = "0.4", default-features = false }
indexmap = { version = "2.2.5", default-features = false }
indexmap = { version = "2.5.0", default-features = false }
monero-serai = { git = "https://github.com/Cuprate/serai.git", rev = "d5205ce", default-features = false }
paste = { version = "1.0.14", default-features = false }
pin-project = { version = "1.1.3", default-features = false }
paste = { version = "1.0.15", default-features = false }
pin-project = { version = "1.1.5", default-features = false }
randomx-rs = { git = "https://github.com/Cuprate/randomx-rs.git", rev = "0028464", default-features = false }
rand = { version = "0.8.5", default-features = false }
rand_distr = { version = "0.4.3", default-features = false }
rayon = { version = "1.9.0", default-features = false }
serde_bytes = { version = "0.11.12", default-features = false }
serde_json = { version = "1.0.108", default-features = false }
serde = { version = "1.0.190", default-features = false }
thiserror = { version = "1.0.50", default-features = false }
thread_local = { version = "1.1.7", default-features = false }
tokio-util = { version = "0.7.10", default-features = false }
tokio-stream = { version = "0.1.14", default-features = false }
tokio = { version = "1.33.0", default-features = false }
tower = { version = "0.4.13", default-features = false }
tracing-subscriber = { version = "0.3.17", default-features = false }
rayon = { version = "1.10.0", default-features = false }
serde_bytes = { version = "0.11.15", default-features = false }
serde_json = { version = "1.0.128", default-features = false }
serde = { version = "1.0.210", default-features = false }
thiserror = { version = "1.0.63", default-features = false }
thread_local = { version = "1.1.8", default-features = false }
tokio-util = { version = "0.7.12", default-features = false }
tokio-stream = { version = "0.1.16", default-features = false }
tokio = { version = "1.40.0", default-features = false }
tower = { git = "https://github.com/Cuprate/tower.git", rev = "6c7faf0", default-features = false } # <https://github.com/tower-rs/tower/pull/796>
tracing-subscriber = { version = "0.3.18", default-features = false }
tracing = { version = "0.1.40", default-features = false }
## workspace.dev-dependencies
monero-rpc = { git = "https://github.com/Cuprate/serai.git", rev = "d5205ce" }
monero-simple-request-rpc = { git = "https://github.com/Cuprate/serai.git", rev = "d5205ce" }
tempfile = { version = "3" }
pretty_assertions = { version = "1.4.0" }
pretty_assertions = { version = "1.4.1" }
proptest = { version = "1" }
proptest-derive = { version = "0.4.0" }
tokio-test = { version = "0.4.4" }
@ -111,7 +115,10 @@ cast_lossless = "deny"
cast_ptr_alignment = "deny"
checked_conversions = "deny"
cloned_instead_of_copied = "deny"
const_is_empty = "deny"
doc_lazy_continuation = "deny"
doc_link_with_quotes = "deny"
duplicated_attributes = "deny"
empty_enum = "deny"
enum_glob_use = "deny"
expl_impl_clone_on_copy = "deny"
@ -128,21 +135,28 @@ invalid_upcast_comparisons = "deny"
iter_filter_is_ok = "deny"
iter_filter_is_some = "deny"
implicit_clone = "deny"
legacy_numeric_constants = "deny"
manual_c_str_literals = "deny"
manual_pattern_char_comparison = "deny"
manual_instant_elapsed = "deny"
manual_inspect = "deny"
manual_is_variant_and = "deny"
manual_let_else = "deny"
manual_ok_or = "deny"
manual_string_new = "deny"
manual_unwrap_or_default = "deny"
map_unwrap_or = "deny"
match_bool = "deny"
match_same_arms = "deny"
match_wildcard_for_single_variants = "deny"
mismatching_type_param_order = "deny"
missing_transmute_annotations = "deny"
mut_mut = "deny"
needless_bitwise_bool = "deny"
needless_character_iteration = "deny"
needless_continue = "deny"
needless_for_each = "deny"
needless_maybe_sized = "deny"
needless_raw_string_hashes = "deny"
no_effect_underscore_binding = "deny"
no_mangle_with_rust_abi = "deny"
@ -198,11 +212,11 @@ unseparated_literal_suffix = "deny"
unnecessary_safety_doc = "deny"
unnecessary_safety_comment = "deny"
unnecessary_self_imports = "deny"
tests_outside_test_module = "deny"
string_to_string = "deny"
rest_pat_in_fully_bound_structs = "deny"
redundant_type_annotations = "deny"
infinite_loop = "deny"
zero_repeat_side_effects = "deny"
# Warm
cast_possible_truncation = "deny"
@ -250,6 +264,8 @@ empty_structs_with_brackets = "deny"
empty_enum_variants_with_brackets = "deny"
empty_drop = "deny"
clone_on_ref_ptr = "deny"
upper_case_acronyms = "deny"
allow_attributes = "deny"
# Hot
# inline_always = "deny"
@ -266,13 +282,15 @@ clone_on_ref_ptr = "deny"
# allow_attributes_without_reason = "deny"
# missing_assert_message = "deny"
# missing_docs_in_private_items = "deny"
# undocumented_unsafe_blocks = "deny"
undocumented_unsafe_blocks = "deny"
# multiple_unsafe_ops_per_block = "deny"
# single_char_lifetime_names = "deny"
# wildcard_enum_match_arm = "deny"
[workspace.lints.rust]
# Cold
future_incompatible = { level = "deny", priority = -1 }
nonstandard_style = { level = "deny", priority = -1 }
absolute_paths_not_starting_with_crate = "deny"
explicit_outlives_requirements = "deny"
keyword_idents_2018 = "deny"
@ -280,6 +298,7 @@ keyword_idents_2024 = "deny"
missing_abi = "deny"
non_ascii_idents = "deny"
non_local_definitions = "deny"
redundant_lifetimes = "deny"
single_use_lifetimes = "deny"
trivial_casts = "deny"
trivial_numeric_casts = "deny"
@ -292,10 +311,11 @@ ambiguous_glob_imports = "deny"
unused_unsafe = "deny"
# Warm
let_underscore_drop = "deny"
let_underscore = { level = "deny", priority = -1 }
unreachable_pub = "deny"
unused_qualifications = "deny"
variant_size_differences = "deny"
non_camel_case_types = "deny"
# Hot
# unused_results = "deny"

@@ -0,0 +1,83 @@
[package]
name = "cuprated"
version = "0.0.1"
edition = "2021"
description = "The Cuprate Monero Rust node."
license = "AGPL-3.0-only"
authors = ["Boog900", "hinto-janai", "SyntheticBird45"]
repository = "https://github.com/Cuprate/cuprate/tree/main/binaries/cuprated"
[dependencies]
# TODO: after v1.0.0, remove unneeded dependencies.
cuprate-consensus = { path = "../../consensus" }
cuprate-fast-sync = { path = "../../consensus/fast-sync" }
cuprate-consensus-rules = { path = "../../consensus/rules" }
cuprate-cryptonight = { path = "../../cryptonight" }
cuprate-helper = { path = "../../helper" }
cuprate-epee-encoding = { path = "../../net/epee-encoding" }
cuprate-fixed-bytes = { path = "../../net/fixed-bytes" }
cuprate-levin = { path = "../../net/levin" }
cuprate-wire = { path = "../../net/wire" }
cuprate-p2p = { path = "../../p2p/p2p" }
cuprate-p2p-core = { path = "../../p2p/p2p-core" }
cuprate-dandelion-tower = { path = "../../p2p/dandelion-tower" }
cuprate-async-buffer = { path = "../../p2p/async-buffer" }
cuprate-address-book = { path = "../../p2p/address-book" }
cuprate-blockchain = { path = "../../storage/blockchain", features = ["service"] }
cuprate-database-service = { path = "../../storage/service" }
cuprate-txpool = { path = "../../storage/txpool" }
cuprate-database = { path = "../../storage/database" }
cuprate-pruning = { path = "../../pruning" }
cuprate-test-utils = { path = "../../test-utils" }
cuprate-types = { path = "../../types" }
cuprate-json-rpc = { path = "../../rpc/json-rpc" }
cuprate-rpc-interface = { path = "../../rpc/interface" }
cuprate-rpc-types = { path = "../../rpc/types" }
# TODO: after v1.0.0, remove unneeded dependencies.
anyhow = { workspace = true }
async-trait = { workspace = true }
bitflags = { workspace = true }
borsh = { workspace = true }
bytemuck = { workspace = true }
bytes = { workspace = true }
cfg-if = { workspace = true }
clap = { workspace = true, features = ["cargo"] }
chrono = { workspace = true }
crypto-bigint = { workspace = true }
crossbeam = { workspace = true }
curve25519-dalek = { workspace = true }
const_format = { workspace = true }
dashmap = { workspace = true }
dirs = { workspace = true }
futures = { workspace = true }
hex = { workspace = true }
hex-literal = { workspace = true }
indexmap = { workspace = true }
monero-serai = { workspace = true }
paste = { workspace = true }
pin-project = { workspace = true }
randomx-rs = { workspace = true }
rand = { workspace = true }
rand_distr = { workspace = true }
rayon = { workspace = true }
serde_bytes = { workspace = true }
serde_json = { workspace = true }
serde = { workspace = true }
thiserror = { workspace = true }
thread_local = { workspace = true }
tokio-util = { workspace = true }
tokio-stream = { workspace = true }
tokio = { workspace = true }
tower = { workspace = true }
tracing-subscriber = { workspace = true, features = ["std", "fmt", "default"] }
tracing = { workspace = true }
[lints]
workspace = true
[profile.dev]
panic = "abort"
[profile.release]
panic = "abort"

@@ -0,0 +1,2 @@
# `cuprated`
TODO

@@ -0,0 +1,101 @@
//! Blockchain
//!
//! Contains the blockchain manager, syncer and an interface to mutate the blockchain.
use std::sync::Arc;
use futures::FutureExt;
use tokio::sync::{mpsc, Notify};
use tower::{BoxError, Service, ServiceExt};
use cuprate_blockchain::service::{BlockchainReadHandle, BlockchainWriteHandle};
use cuprate_consensus::{generate_genesis_block, BlockChainContextService, ContextConfig};
use cuprate_cryptonight::cryptonight_hash_v0;
use cuprate_p2p::{block_downloader::BlockDownloaderConfig, NetworkInterface};
use cuprate_p2p_core::{ClearNet, Network};
use cuprate_types::{
blockchain::{BlockchainReadRequest, BlockchainWriteRequest},
VerifiedBlockInformation,
};
use crate::constants::PANIC_CRITICAL_SERVICE_ERROR;
mod chain_service;
pub mod interface;
mod manager;
mod syncer;
mod types;
use types::{
ConcreteBlockVerifierService, ConcreteTxVerifierService, ConsensusBlockchainReadHandle,
};
/// Checks if the genesis block is in the blockchain and adds it if not.
pub async fn check_add_genesis(
blockchain_read_handle: &mut BlockchainReadHandle,
blockchain_write_handle: &mut BlockchainWriteHandle,
network: Network,
) {
// Try to get the chain height, will fail if the genesis block is not in the DB.
if blockchain_read_handle
.ready()
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
.call(BlockchainReadRequest::ChainHeight)
.await
.is_ok()
{
return;
}
let genesis = generate_genesis_block(network);
assert_eq!(genesis.miner_transaction.prefix().outputs.len(), 1);
assert!(genesis.transactions.is_empty());
blockchain_write_handle
.ready()
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
.call(BlockchainWriteRequest::WriteBlock(
VerifiedBlockInformation {
block_blob: genesis.serialize(),
txs: vec![],
block_hash: genesis.hash(),
pow_hash: cryptonight_hash_v0(&genesis.serialize_pow_hash()),
height: 0,
generated_coins: genesis.miner_transaction.prefix().outputs[0]
.amount
.unwrap(),
weight: genesis.miner_transaction.weight(),
long_term_weight: genesis.miner_transaction.weight(),
cumulative_difficulty: 1,
block: genesis,
},
))
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR);
}
/// Initializes the consensus services.
pub async fn init_consensus(
blockchain_read_handle: BlockchainReadHandle,
context_config: ContextConfig,
) -> Result<
(
ConcreteBlockVerifierService,
ConcreteTxVerifierService,
BlockChainContextService,
),
BoxError,
> {
let read_handle = ConsensusBlockchainReadHandle::new(blockchain_read_handle, BoxError::from);
let ctx_service =
cuprate_consensus::initialize_blockchain_context(context_config, read_handle.clone())
.await?;
let (block_verifier_svc, tx_verifier_svc) =
cuprate_consensus::initialize_verifier(read_handle, ctx_service.clone());
Ok((block_verifier_svc, tx_verifier_svc, ctx_service))
}

@@ -0,0 +1,72 @@
use std::task::{Context, Poll};
use futures::{future::BoxFuture, FutureExt, TryFutureExt};
use tower::Service;
use cuprate_blockchain::service::BlockchainReadHandle;
use cuprate_p2p::block_downloader::{ChainSvcRequest, ChainSvcResponse};
use cuprate_types::blockchain::{BlockchainReadRequest, BlockchainResponse};
/// The service that allows retrieving the chain state to give to the P2P crates, so we can figure out
/// what blocks we need.
///
/// This has a more minimal interface than [`BlockchainReadRequest`] to make using the p2p crates easier.
#[derive(Clone)]
pub struct ChainService(pub BlockchainReadHandle);
impl Service<ChainSvcRequest> for ChainService {
type Response = ChainSvcResponse;
type Error = tower::BoxError;
type Future = BoxFuture<'static, Result<Self::Response, Self::Error>>;
fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
self.0.poll_ready(cx).map_err(Into::into)
}
fn call(&mut self, req: ChainSvcRequest) -> Self::Future {
let map_res = |res: BlockchainResponse| match res {
BlockchainResponse::CompactChainHistory {
block_ids,
cumulative_difficulty,
} => ChainSvcResponse::CompactHistory {
block_ids,
cumulative_difficulty,
},
BlockchainResponse::FindFirstUnknown(res) => ChainSvcResponse::FindFirstUnknown(res),
_ => unreachable!(),
};
match req {
ChainSvcRequest::CompactHistory => self
.0
.call(BlockchainReadRequest::CompactChainHistory)
.map_ok(map_res)
.map_err(Into::into)
.boxed(),
ChainSvcRequest::FindFirstUnknown(req) => self
.0
.call(BlockchainReadRequest::FindFirstUnknown(req))
.map_ok(map_res)
.map_err(Into::into)
.boxed(),
ChainSvcRequest::CumulativeDifficulty => self
.0
.call(BlockchainReadRequest::CompactChainHistory)
.map_ok(|res| {
// TODO create a custom request instead of hijacking this one.
// TODO: use the context cache.
let BlockchainResponse::CompactChainHistory {
cumulative_difficulty,
..
} = res
else {
unreachable!()
};
ChainSvcResponse::CumulativeDifficulty(cumulative_difficulty)
})
.map_err(Into::into)
.boxed(),
}
}
}

@@ -0,0 +1,161 @@
//! The blockchain manager interface.
//!
//! This module contains all the functions to mutate the blockchain's state in any way, through the
//! blockchain manager.
use std::{
collections::{HashMap, HashSet},
sync::{LazyLock, Mutex, OnceLock},
};
use monero_serai::{block::Block, transaction::Transaction};
use rayon::prelude::*;
use tokio::sync::{mpsc, oneshot};
use tower::{Service, ServiceExt};
use cuprate_blockchain::service::BlockchainReadHandle;
use cuprate_consensus::transactions::new_tx_verification_data;
use cuprate_helper::cast::usize_to_u64;
use cuprate_types::{
blockchain::{BlockchainReadRequest, BlockchainResponse},
Chain,
};
use crate::{
blockchain::manager::{BlockchainManagerCommand, IncomingBlockOk},
constants::PANIC_CRITICAL_SERVICE_ERROR,
};
/// The channel used to send [`BlockchainManagerCommand`]s to the blockchain manager.
///
/// This channel is initialized in [`init_blockchain_manager`](super::manager::init_blockchain_manager), the functions
/// in this file document what happens if this is not initialized when they are called.
pub(super) static COMMAND_TX: OnceLock<mpsc::Sender<BlockchainManagerCommand>> = OnceLock::new();
/// An error that can be returned from [`handle_incoming_block`].
#[derive(Debug, thiserror::Error)]
pub enum IncomingBlockError {
/// Some transactions in the block were unknown.
///
/// The inner values are the block hash and the indexes of the missing txs in the block.
#[error("Unknown transactions in block.")]
UnknownTransactions([u8; 32], Vec<u64>),
/// We are missing the block's parent.
#[error("The block has an unknown parent.")]
Orphan,
/// The block was invalid.
#[error(transparent)]
InvalidBlock(anyhow::Error),
}
/// Try to add a new block to the blockchain.
///
/// On success returns [`IncomingBlockOk`].
///
/// # Errors
///
/// This function will return an error if:
/// - the block was invalid
/// - we are missing transactions
/// - the block's parent is unknown
pub async fn handle_incoming_block(
block: Block,
given_txs: Vec<Transaction>,
blockchain_read_handle: &mut BlockchainReadHandle,
) -> Result<IncomingBlockOk, IncomingBlockError> {
/// A [`HashSet`] of block hashes that the blockchain manager is currently handling.
///
/// This lock prevents sending the same block to the blockchain manager from multiple connections
/// before one of them actually gets added to the chain, allowing peers to do other things.
///
/// This is used over something like a dashmap as we expect a lot of collisions in a short amount of
/// time for new blocks, so we would lose the benefit of sharded locks. A dashmap is made up of `RwLocks`
which are also more expensive than `Mutex`es.
static BLOCKS_BEING_HANDLED: LazyLock<Mutex<HashSet<[u8; 32]>>> =
LazyLock::new(|| Mutex::new(HashSet::new()));
// FIXME: we should look in the tx-pool for txs when that is ready.
if !block_exists(block.header.previous, blockchain_read_handle)
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
{
return Err(IncomingBlockError::Orphan);
}
let block_hash = block.hash();
if block_exists(block_hash, blockchain_read_handle)
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
{
return Ok(IncomingBlockOk::AlreadyHave);
}
// TODO: remove this when we have a working tx-pool.
if given_txs.len() != block.transactions.len() {
return Err(IncomingBlockError::UnknownTransactions(
block_hash,
(0..usize_to_u64(block.transactions.len())).collect(),
));
}
// TODO: check we actually got given the right txs.
let prepped_txs = given_txs
.into_par_iter()
.map(|tx| {
let tx = new_tx_verification_data(tx)?;
Ok((tx.tx_hash, tx))
})
.collect::<Result<_, anyhow::Error>>()
.map_err(IncomingBlockError::InvalidBlock)?;
let Some(incoming_block_tx) = COMMAND_TX.get() else {
// We could still be starting up the blockchain manager.
return Ok(IncomingBlockOk::NotReady);
};
// Add the blocks hash to the blocks being handled.
if !BLOCKS_BEING_HANDLED.lock().unwrap().insert(block_hash) {
// If another place is already adding this block then we can stop.
return Ok(IncomingBlockOk::AlreadyHave);
}
// From this point on we MUST not early return without removing the block hash from `BLOCKS_BEING_HANDLED`.
let (response_tx, response_rx) = oneshot::channel();
incoming_block_tx
.send(BlockchainManagerCommand::AddBlock {
block,
prepped_txs,
response_tx,
})
.await
.expect("TODO: don't actually panic here, an err means we are shutting down");
let res = response_rx
.await
.expect("The blockchain manager will always respond")
.map_err(IncomingBlockError::InvalidBlock);
// Remove the block hash from the blocks being handled.
BLOCKS_BEING_HANDLED.lock().unwrap().remove(&block_hash);
res
}
/// Check if we have a block with the given hash.
async fn block_exists(
block_hash: [u8; 32],
blockchain_read_handle: &mut BlockchainReadHandle,
) -> Result<bool, anyhow::Error> {
let BlockchainResponse::FindBlock(chain) = blockchain_read_handle
.ready()
.await?
.call(BlockchainReadRequest::FindBlock(block_hash))
.await?
else {
unreachable!();
};
Ok(chain.is_some())
}

@@ -0,0 +1,143 @@
use std::{collections::HashMap, sync::Arc};
use futures::StreamExt;
use monero_serai::block::Block;
use tokio::sync::{mpsc, oneshot, Notify};
use tower::{Service, ServiceExt};
use tracing::error;
use cuprate_blockchain::service::{BlockchainReadHandle, BlockchainWriteHandle};
use cuprate_consensus::{
context::RawBlockChainContext, BlockChainContextRequest, BlockChainContextResponse,
BlockChainContextService, BlockVerifierService, ExtendedConsensusError, TxVerifierService,
VerifyBlockRequest, VerifyBlockResponse, VerifyTxRequest, VerifyTxResponse,
};
use cuprate_p2p::{
block_downloader::{BlockBatch, BlockDownloaderConfig},
BroadcastSvc, NetworkInterface,
};
use cuprate_p2p_core::ClearNet;
use cuprate_types::{
blockchain::{BlockchainReadRequest, BlockchainResponse},
Chain, TransactionVerificationData,
};
use crate::{
blockchain::{
chain_service::ChainService,
interface::COMMAND_TX,
syncer,
types::{ConcreteBlockVerifierService, ConsensusBlockchainReadHandle},
},
constants::PANIC_CRITICAL_SERVICE_ERROR,
};
mod commands;
mod handler;
pub use commands::{BlockchainManagerCommand, IncomingBlockOk};
/// Initialize the blockchain manager.
///
/// This function sets up the [`BlockchainManager`] and the [`syncer`] so that the functions in [`interface`](super::interface)
/// can be called.
pub async fn init_blockchain_manager(
clearnet_interface: NetworkInterface<ClearNet>,
blockchain_write_handle: BlockchainWriteHandle,
blockchain_read_handle: BlockchainReadHandle,
mut blockchain_context_service: BlockChainContextService,
block_verifier_service: ConcreteBlockVerifierService,
block_downloader_config: BlockDownloaderConfig,
) {
// TODO: find good values for these size limits
let (batch_tx, batch_rx) = mpsc::channel(1);
let stop_current_block_downloader = Arc::new(Notify::new());
let (command_tx, command_rx) = mpsc::channel(3);
COMMAND_TX.set(command_tx).unwrap();
tokio::spawn(syncer::syncer(
blockchain_context_service.clone(),
ChainService(blockchain_read_handle.clone()),
clearnet_interface.clone(),
batch_tx,
Arc::clone(&stop_current_block_downloader),
block_downloader_config,
));
let BlockChainContextResponse::Context(blockchain_context) = blockchain_context_service
.ready()
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
.call(BlockChainContextRequest::Context)
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
else {
unreachable!()
};
let manager = BlockchainManager {
blockchain_write_handle,
blockchain_read_handle,
blockchain_context_service,
cached_blockchain_context: blockchain_context.unchecked_blockchain_context().clone(),
block_verifier_service,
stop_current_block_downloader,
broadcast_svc: clearnet_interface.broadcast_svc(),
};
tokio::spawn(manager.run(batch_rx, command_rx));
}
/// The blockchain manager.
///
/// This handles all mutation of the blockchain, anything that changes the state of the blockchain must
/// go through this.
///
/// Other parts of Cuprate can interface with this by using the functions in [`interface`](super::interface).
pub struct BlockchainManager {
/// The [`BlockchainWriteHandle`], this is the _only_ part of Cuprate where a [`BlockchainWriteHandle`]
/// is held.
blockchain_write_handle: BlockchainWriteHandle,
/// A [`BlockchainReadHandle`].
blockchain_read_handle: BlockchainReadHandle,
// TODO: Improve the API of the cache service.
// TODO: rename the cache service -> `BlockchainContextService`.
/// The blockchain context cache, this caches the current state of the blockchain to quickly calculate/retrieve
/// values without needing to go to a [`BlockchainReadHandle`].
blockchain_context_service: BlockChainContextService,
/// A cached context representing the current state.
cached_blockchain_context: RawBlockChainContext,
/// The block verifier service, to verify incoming blocks.
block_verifier_service: ConcreteBlockVerifierService,
/// A [`Notify`] to tell the [syncer](syncer::syncer) that we want to cancel this current download
/// attempt.
stop_current_block_downloader: Arc<Notify>,
/// The broadcast service, to broadcast new blocks.
broadcast_svc: BroadcastSvc<ClearNet>,
}
impl BlockchainManager {
/// The [`BlockchainManager`] task.
pub async fn run(
mut self,
mut block_batch_rx: mpsc::Receiver<BlockBatch>,
mut command_rx: mpsc::Receiver<BlockchainManagerCommand>,
) {
loop {
tokio::select! {
Some(batch) = block_batch_rx.recv() => {
self.handle_incoming_block_batch(
batch,
).await;
}
Some(incoming_command) = command_rx.recv() => {
self.handle_command(incoming_command).await;
}
else => {
todo!("TODO: exit the BC manager")
}
}
}
}
}

@@ -0,0 +1,32 @@
//! This module contains the commands for the blockchain manager.
use std::collections::HashMap;
use monero_serai::block::Block;
use tokio::sync::oneshot;
use cuprate_types::TransactionVerificationData;
/// The blockchain manager commands.
pub enum BlockchainManagerCommand {
/// Attempt to add a new block to the blockchain.
AddBlock {
/// The [`Block`] to add.
block: Block,
/// All the transactions defined in [`Block::transactions`].
prepped_txs: HashMap<[u8; 32], TransactionVerificationData>,
/// The channel to send the response down.
response_tx: oneshot::Sender<Result<IncomingBlockOk, anyhow::Error>>,
},
}
/// The [`Ok`] response for an incoming block.
pub enum IncomingBlockOk {
/// The block was added to the main-chain.
AddedToMainChain,
/// The blockchain manager is not ready yet.
NotReady,
/// The block was added to an alt-chain.
AddedToAltChain,
/// We already have the block.
AlreadyHave,
}

@@ -0,0 +1,484 @@
//! The blockchain manager handler functions.
use bytes::Bytes;
use futures::{TryFutureExt, TryStreamExt};
use monero_serai::{block::Block, transaction::Transaction};
use rayon::prelude::*;
use std::ops::ControlFlow;
use std::{collections::HashMap, sync::Arc};
use tower::{Service, ServiceExt};
use tracing::info;
use cuprate_blockchain::service::{BlockchainReadHandle, BlockchainWriteHandle};
use cuprate_consensus::{
block::PreparedBlock, context::NewBlockData, transactions::new_tx_verification_data,
BlockChainContextRequest, BlockChainContextResponse, BlockVerifierService,
ExtendedConsensusError, VerifyBlockRequest, VerifyBlockResponse, VerifyTxRequest,
VerifyTxResponse,
};
use cuprate_helper::cast::usize_to_u64;
use cuprate_p2p::{block_downloader::BlockBatch, constants::LONG_BAN, BroadcastRequest};
use cuprate_types::{
blockchain::{BlockchainReadRequest, BlockchainResponse, BlockchainWriteRequest},
AltBlockInformation, HardFork, TransactionVerificationData, VerifiedBlockInformation,
};
use crate::blockchain::manager::commands::IncomingBlockOk;
use crate::{
blockchain::{
manager::commands::BlockchainManagerCommand, types::ConsensusBlockchainReadHandle,
},
constants::PANIC_CRITICAL_SERVICE_ERROR,
signals::REORG_LOCK,
};
impl super::BlockchainManager {
/// Handle an incoming command from another part of Cuprate.
///
/// # Panics
///
/// This function will panic if any internal service returns an unexpected error that we cannot
/// recover from.
pub async fn handle_command(&mut self, command: BlockchainManagerCommand) {
match command {
BlockchainManagerCommand::AddBlock {
block,
prepped_txs,
response_tx,
} => {
let res = self.handle_incoming_block(block, prepped_txs).await;
drop(response_tx.send(res));
}
}
}
/// Broadcast a valid block to the network.
async fn broadcast_block(&mut self, block_bytes: Bytes, blockchain_height: usize) {
self.broadcast_svc
.ready()
.await
.expect("Broadcast service is Infallible.")
.call(BroadcastRequest::Block {
block_bytes,
current_blockchain_height: usize_to_u64(blockchain_height),
})
.await
.expect("Broadcast service is Infallible.");
}
/// Handle an incoming [`Block`].
///
/// This function will route to [`Self::handle_incoming_alt_block`] if the block does not follow
/// the top of the main chain.
///
/// Otherwise, this function will validate and add the block to the main chain.
///
/// # Panics
///
/// This function will panic if any internal service returns an unexpected error that we cannot
/// recover from.
pub async fn handle_incoming_block(
&mut self,
block: Block,
prepared_txs: HashMap<[u8; 32], TransactionVerificationData>,
) -> Result<IncomingBlockOk, anyhow::Error> {
if block.header.previous != self.cached_blockchain_context.top_hash {
self.handle_incoming_alt_block(block, prepared_txs).await?;
return Ok(IncomingBlockOk::AddedToAltChain);
}
let VerifyBlockResponse::MainChain(verified_block) = self
.block_verifier_service
.ready()
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
.call(VerifyBlockRequest::MainChain {
block,
prepared_txs,
})
.await?
else {
unreachable!();
};
let block_blob = Bytes::copy_from_slice(&verified_block.block_blob);
self.add_valid_block_to_main_chain(verified_block).await;
self.broadcast_block(block_blob, self.cached_blockchain_context.chain_height)
.await;
Ok(IncomingBlockOk::AddedToMainChain)
}
/// Handle an incoming [`BlockBatch`].
///
/// This function will route to [`Self::handle_incoming_block_batch_main_chain`] or [`Self::handle_incoming_block_batch_alt_chain`]
/// depending on if the first block in the batch follows from the top of our chain.
///
/// # Panics
///
/// This function will panic if the batch is empty or if any internal service returns an unexpected
/// error that we cannot recover from.
pub async fn handle_incoming_block_batch(&mut self, batch: BlockBatch) {
let (first_block, _) = batch
.blocks
.first()
.expect("Block batch should not be empty");
if first_block.header.previous == self.cached_blockchain_context.top_hash {
self.handle_incoming_block_batch_main_chain(batch).await;
} else {
self.handle_incoming_block_batch_alt_chain(batch).await;
}
}
/// Handles an incoming [`BlockBatch`] that follows the main chain.
///
/// This function will handle validating the blocks in the batch and adding them to the blockchain
/// database and context cache.
///
/// This function will also handle banning the peer and canceling the block downloader if the
/// block is invalid.
///
/// # Panics
///
/// This function will panic if any internal service returns an unexpected error that we cannot
/// recover from, or if the incoming batch contains no blocks.
async fn handle_incoming_block_batch_main_chain(&mut self, batch: BlockBatch) {
info!(
"Handling batch to main chain height: {}",
batch.blocks.first().unwrap().0.number().unwrap()
);
let batch_prep_res = self
.block_verifier_service
.ready()
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
.call(VerifyBlockRequest::MainChainBatchPrepareBlocks {
blocks: batch.blocks,
})
.await;
let prepped_blocks = match batch_prep_res {
Ok(VerifyBlockResponse::MainChainBatchPrepped(prepped_blocks)) => prepped_blocks,
Err(_) => {
batch.peer_handle.ban_peer(LONG_BAN);
self.stop_current_block_downloader.notify_one();
return;
}
_ => unreachable!(),
};
for (block, txs) in prepped_blocks {
let verify_res = self
.block_verifier_service
.ready()
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
.call(VerifyBlockRequest::MainChainPrepped { block, txs })
.await;
let verified_block = match verify_res {
Ok(VerifyBlockResponse::MainChain(verified_block)) => verified_block,
Err(_) => {
batch.peer_handle.ban_peer(LONG_BAN);
self.stop_current_block_downloader.notify_one();
return;
}
_ => unreachable!(),
};
self.add_valid_block_to_main_chain(verified_block).await;
}
}
/// Handles an incoming [`BlockBatch`] that does not follow the main-chain.
///
/// This function will handle validating the alt-blocks, adding them to our cache, and reorging the
/// chain if the alt-chain has a higher cumulative difficulty.
///
/// This function will also handle banning the peer and canceling the block downloader if the
/// alt block is invalid or if a reorg fails.
///
/// # Panics
///
/// This function will panic if any internal service returns an unexpected error that we cannot
/// recover from.
async fn handle_incoming_block_batch_alt_chain(&mut self, mut batch: BlockBatch) {
// TODO: this needs testing (this whole section does but alt-blocks specifically).
let mut blocks = batch.blocks.into_iter();
while let Some((block, txs)) = blocks.next() {
// async blocks work as try blocks.
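// (A `?` inside the `async` block short-circuits into the block's own
// `Result` output instead of returning from the enclosing function; that
// `Result` is then matched on below.)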
let res = async {
let txs = txs
.into_par_iter()
.map(|tx| {
let tx = new_tx_verification_data(tx)?;
Ok((tx.tx_hash, tx))
})
.collect::<Result<_, anyhow::Error>>()?;
let reorged = self.handle_incoming_alt_block(block, txs).await?;
Ok::<_, anyhow::Error>(reorged)
}
.await;
match res {
Err(e) => {
batch.peer_handle.ban_peer(LONG_BAN);
self.stop_current_block_downloader.notify_one();
return;
}
Ok(AddAltBlock::Reorged) => {
// Collect the remaining blocks and add them to the main chain instead.
batch.blocks = blocks.collect();
self.handle_incoming_block_batch_main_chain(batch).await;
return;
}
// continue adding alt blocks.
Ok(AddAltBlock::Cached) => (),
}
}
}
/// Handles an incoming alt [`Block`].
///
/// This function will do some pre-validation of the alt block; then, if the cumulative difficulty
/// of the alt chain is higher than the main chain's, it will attempt a reorg, otherwise it will add
/// the alt block to the alt block cache.
///
/// # Errors
///
/// This will return an [`Err`] if:
/// - The alt block was invalid.
/// - An attempt to reorg the chain failed.
///
/// # Panics
///
/// This function will panic if any internal service returns an unexpected error that we cannot
/// recover from.
async fn handle_incoming_alt_block(
&mut self,
block: Block,
prepared_txs: HashMap<[u8; 32], TransactionVerificationData>,
) -> Result<AddAltBlock, anyhow::Error> {
let VerifyBlockResponse::AltChain(alt_block_info) = self
.block_verifier_service
.ready()
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
.call(VerifyBlockRequest::AltChain {
block,
prepared_txs,
})
.await?
else {
unreachable!();
};
// TODO: check in consensus crate if alt block with this hash already exists.
// If this alt chain has a higher cumulative difficulty than the main chain, attempt a reorg.
if alt_block_info.cumulative_difficulty
> self.cached_blockchain_context.cumulative_difficulty
{
self.try_do_reorg(alt_block_info).await?;
return Ok(AddAltBlock::Reorged);
}
self.blockchain_write_handle
.ready()
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
.call(BlockchainWriteRequest::WriteAltBlock(alt_block_info))
.await?;
Ok(AddAltBlock::Cached)
}
/// Attempt a re-org with the given top block of the alt-chain.
///
/// This function will take a write lock on [`REORG_LOCK`] and then set up the blockchain database
/// and context cache to verify the alt-chain. It will then attempt to verify and add each block
/// in the alt-chain to the main-chain, releasing the lock on [`REORG_LOCK`] when finished.
///
/// # Errors
///
/// This function will return an [`Err`] if the re-org was unsuccessful; if this happens, the chain
/// will be returned to the state it was in when this function was called.
///
/// # Panics
///
/// This function will panic if any internal service returns an unexpected error that we cannot
/// recover from.
async fn try_do_reorg(
&mut self,
top_alt_block: AltBlockInformation,
) -> Result<(), anyhow::Error> {
let _guard = REORG_LOCK.write().await;
let BlockchainResponse::AltBlocksInChain(mut alt_blocks) = self
.blockchain_read_handle
.ready()
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
.call(BlockchainReadRequest::AltBlocksInChain(
top_alt_block.chain_id,
))
.await?
else {
unreachable!();
};
alt_blocks.push(top_alt_block);
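// NOTE: this assumes the alt-blocks were returned ordered from lowest to
// highest height, so the first entry marks the split from the main chain.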
let split_height = alt_blocks[0].height;
let current_main_chain_height = self.cached_blockchain_context.chain_height;
let BlockchainResponse::PopBlocks(old_main_chain_id) = self
.blockchain_write_handle
.ready()
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
.call(BlockchainWriteRequest::PopBlocks(
current_main_chain_height - split_height + 1,
))
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
else {
unreachable!();
};
self.blockchain_context_service
.ready()
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
.call(BlockChainContextRequest::PopBlocks {
numb_blocks: current_main_chain_height - split_height + 1,
})
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR);
let reorg_res = self.verify_add_alt_blocks_to_main_chain(alt_blocks).await;
match reorg_res {
Ok(()) => Ok(()),
Err(e) => {
todo!("Reverse reorg")
}
}
}
/// Verify and add a list of [`AltBlockInformation`]s to the main-chain.
///
/// This function assumes the first [`AltBlockInformation`] is the next block in the blockchain
/// for the blockchain database and the context cache, or in other words that the blockchain database
/// and context cache have already had the top blocks popped to where the alt-chain meets the main-chain.
///
/// # Errors
///
/// This function will return an [`Err`] if the alt-blocks were invalid; in this case, the re-org should
/// be aborted and the chain should be returned to its previous state.
///
/// # Panics
///
/// This function will panic if any internal service returns an unexpected error that we cannot
/// recover from.
async fn verify_add_alt_blocks_to_main_chain(
&mut self,
alt_blocks: Vec<AltBlockInformation>,
) -> Result<(), anyhow::Error> {
for mut alt_block in alt_blocks {
let prepped_txs = alt_block
.txs
.drain(..)
.map(|tx| Ok(Arc::new(tx.try_into()?)))
.collect::<Result<_, anyhow::Error>>()?;
let prepped_block = PreparedBlock::new_alt_block(alt_block)?;
let VerifyBlockResponse::MainChain(verified_block) = self
.block_verifier_service
.ready()
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
.call(VerifyBlockRequest::MainChainPrepped {
block: prepped_block,
txs: prepped_txs,
})
.await?
else {
unreachable!();
};
self.add_valid_block_to_main_chain(verified_block).await;
}
Ok(())
}
/// Adds a [`VerifiedBlockInformation`] to the main-chain.
///
/// This function will update the blockchain database and the context cache; it will also
/// update [`Self::cached_blockchain_context`].
///
/// # Panics
///
/// This function will panic if any internal service returns an unexpected error that we cannot
/// recover from.
pub async fn add_valid_block_to_main_chain(
&mut self,
verified_block: VerifiedBlockInformation,
) {
self.blockchain_context_service
.ready()
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
.call(BlockChainContextRequest::Update(NewBlockData {
block_hash: verified_block.block_hash,
height: verified_block.height,
timestamp: verified_block.block.header.timestamp,
weight: verified_block.weight,
long_term_weight: verified_block.long_term_weight,
generated_coins: verified_block.generated_coins,
vote: HardFork::from_vote(verified_block.block.header.hardfork_signal),
cumulative_difficulty: verified_block.cumulative_difficulty,
}))
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR);
self.blockchain_write_handle
.ready()
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
.call(BlockchainWriteRequest::WriteBlock(verified_block))
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR);
let BlockChainContextResponse::Context(blockchain_context) = self
.blockchain_context_service
.ready()
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
.call(BlockChainContextRequest::Context)
.await
.expect(PANIC_CRITICAL_SERVICE_ERROR)
else {
unreachable!();
};
self.cached_blockchain_context = blockchain_context.unchecked_blockchain_context().clone();
}
}
/// The result from successfully adding an alt-block.
enum AddAltBlock {
/// The alt-block was cached.
Cached,
/// The chain was reorged.
Reorged,
}

View file

@ -0,0 +1,143 @@
// FIXME: This whole module is not great and should be rewritten when the PeerSet is made.
use std::{pin::pin, sync::Arc, time::Duration};
use futures::StreamExt;
use tokio::time::interval;
use tokio::{
sync::{mpsc, Notify},
time::sleep,
};
use tower::{Service, ServiceExt};
use tracing::instrument;
use cuprate_consensus::{BlockChainContext, BlockChainContextRequest, BlockChainContextResponse};
use cuprate_p2p::{
block_downloader::{BlockBatch, BlockDownloaderConfig, ChainSvcRequest, ChainSvcResponse},
NetworkInterface,
};
use cuprate_p2p_core::ClearNet;
const CHECK_SYNC_FREQUENCY: Duration = Duration::from_secs(30);
/// An error returned from the [`syncer`].
#[derive(Debug, thiserror::Error)]
pub enum SyncerError {
#[error("Incoming block channel closed.")]
IncomingBlockChannelClosed,
#[error("One of our services returned an error: {0}.")]
ServiceError(#[from] tower::BoxError),
}
/// The syncer task that makes sure we are fully synchronised with our connected peers.
#[expect(
clippy::significant_drop_tightening,
reason = "Client pool which will be removed"
)]
#[instrument(level = "debug", skip_all)]
pub async fn syncer<C, CN>(
mut context_svc: C,
our_chain: CN,
clearnet_interface: NetworkInterface<ClearNet>,
incoming_block_batch_tx: mpsc::Sender<BlockBatch>,
stop_current_block_downloader: Arc<Notify>,
block_downloader_config: BlockDownloaderConfig,
) -> Result<(), SyncerError>
where
C: Service<
BlockChainContextRequest,
Response = BlockChainContextResponse,
Error = tower::BoxError,
>,
C::Future: Send + 'static,
CN: Service<ChainSvcRequest, Response = ChainSvcResponse, Error = tower::BoxError>
+ Clone
+ Send
+ 'static,
CN::Future: Send + 'static,
{
tracing::info!("Starting blockchain syncer");
let mut check_sync_interval = interval(CHECK_SYNC_FREQUENCY);
let BlockChainContextResponse::Context(mut blockchain_ctx) = context_svc
.ready()
.await?
.call(BlockChainContextRequest::Context)
.await?
else {
unreachable!();
};
let client_pool = clearnet_interface.client_pool();
tracing::debug!("Waiting for new sync info in top sync channel");
loop {
check_sync_interval.tick().await;
tracing::trace!("Checking connected peers to see if we are behind",);
check_update_blockchain_context(&mut context_svc, &mut blockchain_ctx).await?;
let raw_blockchain_context = blockchain_ctx.unchecked_blockchain_context();
if !client_pool.contains_client_with_more_cumulative_difficulty(
raw_blockchain_context.cumulative_difficulty,
) {
continue;
}
tracing::debug!(
"We are behind peers claimed cumulative difficulty, starting block downloader"
);
let mut block_batch_stream =
clearnet_interface.block_downloader(our_chain.clone(), block_downloader_config);
loop {
tokio::select! {
() = stop_current_block_downloader.notified() => {
tracing::info!("Stopping block downloader");
break;
}
batch = block_batch_stream.next() => {
let Some(batch) = batch else {
break;
};
tracing::debug!("Got batch, len: {}", batch.blocks.len());
if incoming_block_batch_tx.send(batch).await.is_err() {
return Err(SyncerError::IncomingBlockChannelClosed);
}
}
}
}
}
}
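// Hedged usage sketch (names assumed, not from this file): the syncer is
// expected to be spawned as a long-lived task, e.g.:
//
// tokio::spawn(syncer(
//     context_svc,
//     chain_svc,
//     clearnet_interface,
//     incoming_block_batch_tx,
//     stop_current_block_downloader,
//     block_downloader_config,
// ));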
/// Checks if we should update the given [`BlockChainContext`] and updates it if needed.
async fn check_update_blockchain_context<C>(
context_svc: C,
old_context: &mut BlockChainContext,
) -> Result<(), tower::BoxError>
where
C: Service<
BlockChainContextRequest,
Response = BlockChainContextResponse,
Error = tower::BoxError,
>,
C::Future: Send + 'static,
{
if old_context.blockchain_context().is_ok() {
return Ok(());
}
let BlockChainContextResponse::Context(ctx) = context_svc
.oneshot(BlockChainContextRequest::Context)
.await?
else {
unreachable!();
};
*old_context = ctx;
Ok(())
}

View file

@ -0,0 +1,24 @@
use std::task::{Context, Poll};
use futures::future::BoxFuture;
use futures::{FutureExt, TryFutureExt};
use tower::{util::MapErr, Service};
use cuprate_blockchain::{cuprate_database::RuntimeError, service::BlockchainReadHandle};
use cuprate_consensus::{BlockChainContextService, BlockVerifierService, TxVerifierService};
use cuprate_p2p::block_downloader::{ChainSvcRequest, ChainSvcResponse};
use cuprate_types::blockchain::{BlockchainReadRequest, BlockchainResponse};
/// The [`BlockVerifierService`] with all generic types defined.
pub type ConcreteBlockVerifierService = BlockVerifierService<
BlockChainContextService,
ConcreteTxVerifierService,
ConsensusBlockchainReadHandle,
>;
/// The [`TxVerifierService`] with all generic types defined.
pub type ConcreteTxVerifierService = TxVerifierService<ConsensusBlockchainReadHandle>;
/// The [`BlockchainReadHandle`] with the [`tower::Service::Error`] mapped to conform to what the consensus crate requires.
pub type ConsensusBlockchainReadHandle =
MapErr<BlockchainReadHandle, fn(RuntimeError) -> tower::BoxError>;
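// Hedged sketch (not from the original file): one way the alias above could be
// constructed, assuming `RuntimeError: std::error::Error + Send + Sync + 'static`.
fn consensus_read_handle_sketch(handle: BlockchainReadHandle) -> ConsensusBlockchainReadHandle {
    // Bind the non-capturing closure as a `fn` pointer so it matches the alias.
    let map_err: fn(RuntimeError) -> tower::BoxError = |e| Box::new(e);
    MapErr::new(handle, map_err)
}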

View file

@ -0,0 +1 @@
//! cuprated config

View file

@ -0,0 +1,38 @@
//! General constants used throughout `cuprated`.
use const_format::formatcp;
/// `cuprated`'s semantic version (`MAJOR.MINOR.PATCH`) as string.
pub const VERSION: &str = clap::crate_version!();
/// [`VERSION`] + the build type.
///
/// If a debug build, the suffix is `-debug`, else it is `-release`.
pub const VERSION_BUILD: &str = if cfg!(debug_assertions) {
formatcp!("{VERSION}-debug")
} else {
formatcp!("{VERSION}-release")
};
/// The panic message used when cuprated encounters a critical service error.
pub const PANIC_CRITICAL_SERVICE_ERROR: &str =
"A service critical to Cuprate's function returned an unexpected error.";
#[cfg(test)]
mod test {
use super::*;
#[test]
fn version() {
assert_eq!(VERSION, "0.0.1");
}
#[test]
fn version_build() {
if cfg!(debug_assertions) {
assert_eq!(VERSION_BUILD, "0.0.1-debug");
} else {
assert_eq!(VERSION_BUILD, "0.0.1-release");
}
}
}

View file

@ -0,0 +1,30 @@
#![doc = include_str!("../README.md")]
#![cfg_attr(docsrs, feature(doc_cfg))]
#![allow(
unused_imports,
unreachable_pub,
unreachable_code,
unused_crate_dependencies,
dead_code,
unused_variables,
clippy::needless_pass_by_value,
clippy::unused_async,
reason = "TODO: remove after v1.0.0"
)]
mod blockchain;
mod config;
mod constants;
mod p2p;
mod rpc;
mod signals;
mod statics;
mod txpool;
fn main() {
// Initialize global static `LazyLock` data.
statics::init_lazylock_statics();
// TODO: everything else.
todo!()
}

View file

@ -0,0 +1,5 @@
//! P2P
//!
//! Will handle initiating the P2P and contains a protocol request handler.
pub mod request_handler;

View file

@ -0,0 +1 @@

View file

@ -0,0 +1,11 @@
//! RPC
//!
//! Will contain the code to initiate the RPC and a request handler.
mod bin;
mod handler;
mod json;
mod other;
mod request;
pub use handler::CupratedRpcHandler;

View file

@ -0,0 +1,85 @@
use anyhow::Error;
use cuprate_rpc_types::{
bin::{
BinRequest, BinResponse, GetBlocksByHeightRequest, GetBlocksByHeightResponse,
GetBlocksRequest, GetBlocksResponse, GetHashesRequest, GetHashesResponse,
GetOutputIndexesRequest, GetOutputIndexesResponse, GetOutsRequest, GetOutsResponse,
GetTransactionPoolHashesRequest, GetTransactionPoolHashesResponse,
},
json::{GetOutputDistributionRequest, GetOutputDistributionResponse},
};
use crate::rpc::CupratedRpcHandler;
/// Map a [`BinRequest`] to the function that will lead to a [`BinResponse`].
pub(super) async fn map_request(
state: CupratedRpcHandler,
request: BinRequest,
) -> Result<BinResponse, Error> {
use BinRequest as Req;
use BinResponse as Resp;
Ok(match request {
Req::GetBlocks(r) => Resp::GetBlocks(get_blocks(state, r).await?),
Req::GetBlocksByHeight(r) => Resp::GetBlocksByHeight(get_blocks_by_height(state, r).await?),
Req::GetHashes(r) => Resp::GetHashes(get_hashes(state, r).await?),
Req::GetOutputIndexes(r) => Resp::GetOutputIndexes(get_output_indexes(state, r).await?),
Req::GetOuts(r) => Resp::GetOuts(get_outs(state, r).await?),
Req::GetTransactionPoolHashes(r) => {
Resp::GetTransactionPoolHashes(get_transaction_pool_hashes(state, r).await?)
}
Req::GetOutputDistribution(r) => {
Resp::GetOutputDistribution(get_output_distribution(state, r).await?)
}
})
}
async fn get_blocks(
state: CupratedRpcHandler,
request: GetBlocksRequest,
) -> Result<GetBlocksResponse, Error> {
todo!()
}
async fn get_blocks_by_height(
state: CupratedRpcHandler,
request: GetBlocksByHeightRequest,
) -> Result<GetBlocksByHeightResponse, Error> {
todo!()
}
async fn get_hashes(
state: CupratedRpcHandler,
request: GetHashesRequest,
) -> Result<GetHashesResponse, Error> {
todo!()
}
async fn get_output_indexes(
state: CupratedRpcHandler,
request: GetOutputIndexesRequest,
) -> Result<GetOutputIndexesResponse, Error> {
todo!()
}
async fn get_outs(
state: CupratedRpcHandler,
request: GetOutsRequest,
) -> Result<GetOutsResponse, Error> {
todo!()
}
async fn get_transaction_pool_hashes(
state: CupratedRpcHandler,
request: GetTransactionPoolHashesRequest,
) -> Result<GetTransactionPoolHashesResponse, Error> {
todo!()
}
async fn get_output_distribution(
state: CupratedRpcHandler,
request: GetOutputDistributionRequest,
) -> Result<GetOutputDistributionResponse, Error> {
todo!()
}

View file

@ -0,0 +1,183 @@
//! Dummy implementation of [`RpcHandler`].
use std::task::{Context, Poll};
use anyhow::Error;
use futures::future::BoxFuture;
use monero_serai::block::Block;
use tower::Service;
use cuprate_blockchain::service::{BlockchainReadHandle, BlockchainWriteHandle};
use cuprate_rpc_interface::RpcHandler;
use cuprate_rpc_types::{
bin::{BinRequest, BinResponse},
json::{JsonRpcRequest, JsonRpcResponse},
other::{OtherRequest, OtherResponse},
};
use cuprate_txpool::service::{TxpoolReadHandle, TxpoolWriteHandle};
use crate::rpc::{bin, json, other};
/// TODO: use real type when public.
#[derive(Clone)]
#[expect(clippy::large_enum_variant)]
pub enum BlockchainManagerRequest {
/// Pop blocks off the top of the blockchain.
///
/// Input is the number of blocks to pop.
PopBlocks { amount: usize },
/// Start pruning the blockchain.
Prune,
/// Is the blockchain pruned?
Pruned,
/// Relay a block to the network.
RelayBlock(Block),
/// Is the blockchain in the middle of syncing?
///
/// This returning `false` does not necessarily
/// mean [`BlockchainManagerRequest::Synced`] will
/// return `true`; for example, if the network has been
/// cut off and we have no peers, this will return `false`,
/// however [`BlockchainManagerRequest::Synced`] may still return
/// `true` if the latest known chain tip is equal to our height.
Syncing,
/// Is the blockchain fully synced?
Synced,
/// Current target block time.
Target,
/// The height of the next block in the chain.
TargetHeight,
}
/// TODO: use real type when public.
#[derive(Clone)]
pub enum BlockchainManagerResponse {
/// General OK response.
///
/// Response to:
/// - [`BlockchainManagerRequest::Prune`]
/// - [`BlockchainManagerRequest::RelayBlock`]
Ok,
/// Response to [`BlockchainManagerRequest::PopBlocks`]
PopBlocks { new_height: usize },
/// Response to [`BlockchainManagerRequest::Pruned`]
Pruned(bool),
/// Response to [`BlockchainManagerRequest::Syncing`]
Syncing(bool),
/// Response to [`BlockchainManagerRequest::Synced`]
Synced(bool),
/// Response to [`BlockchainManagerRequest::Target`]
Target(std::time::Duration),
/// Response to [`BlockchainManagerRequest::TargetHeight`]
TargetHeight { height: usize },
}
/// TODO: use real type when public.
pub type BlockchainManagerHandle = cuprate_database_service::DatabaseReadService<
BlockchainManagerRequest,
BlockchainManagerResponse,
>;
/// TODO
#[derive(Clone)]
pub struct CupratedRpcHandler {
/// Should this RPC server be [restricted](RpcHandler::restricted)?
///
/// This is not `pub` on purpose, as it should not be mutated after [`Self::new`].
restricted: bool,
/// Read handle to the blockchain database.
pub blockchain_read: BlockchainReadHandle,
/// Handle to the blockchain manager.
pub blockchain_manager: BlockchainManagerHandle,
/// Read handle to the transaction pool database.
pub txpool_read: TxpoolReadHandle,
/// TODO: handle to txpool service.
pub txpool_manager: std::convert::Infallible,
}
impl CupratedRpcHandler {
/// Create a new [`Self`].
pub const fn new(
restricted: bool,
blockchain_read: BlockchainReadHandle,
blockchain_manager: BlockchainManagerHandle,
txpool_read: TxpoolReadHandle,
txpool_manager: std::convert::Infallible,
) -> Self {
Self {
restricted,
blockchain_read,
blockchain_manager,
txpool_read,
txpool_manager,
}
}
}
impl RpcHandler for CupratedRpcHandler {
fn restricted(&self) -> bool {
self.restricted
}
}
impl Service<JsonRpcRequest> for CupratedRpcHandler {
type Response = JsonRpcResponse;
type Error = Error;
type Future = BoxFuture<'static, Result<JsonRpcResponse, Error>>;
fn poll_ready(&mut self, _: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}
fn call(&mut self, request: JsonRpcRequest) -> Self::Future {
let state = self.clone();
Box::pin(json::map_request(state, request))
}
}
impl Service<BinRequest> for CupratedRpcHandler {
type Response = BinResponse;
type Error = Error;
type Future = BoxFuture<'static, Result<BinResponse, Error>>;
fn poll_ready(&mut self, _: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}
fn call(&mut self, request: BinRequest) -> Self::Future {
let state = self.clone();
Box::pin(bin::map_request(state, request))
}
}
impl Service<OtherRequest> for CupratedRpcHandler {
type Response = OtherResponse;
type Error = Error;
type Future = BoxFuture<'static, Result<OtherResponse, Error>>;
fn poll_ready(&mut self, _: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
Poll::Ready(Ok(()))
}
fn call(&mut self, request: OtherRequest) -> Self::Future {
let state = self.clone();
Box::pin(other::map_request(state, request))
}
}
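// Hedged usage sketch (not from the original file): with the handles built
// elsewhere, requests flow through the `tower::Service` impls above, e.g.:
//
// let mut handler = CupratedRpcHandler::new(
//     false, blockchain_read, blockchain_manager, txpool_read, txpool_manager,
// );
// let response = handler.ready().await?.call(json_rpc_request).await?;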

View file

@ -0,0 +1,294 @@
use std::sync::Arc;
use anyhow::Error;
use tower::ServiceExt;
use cuprate_rpc_types::json::{
AddAuxPowRequest, AddAuxPowResponse, BannedRequest, BannedResponse, CalcPowRequest,
CalcPowResponse, FlushCacheRequest, FlushCacheResponse, FlushTransactionPoolRequest,
FlushTransactionPoolResponse, GenerateBlocksRequest, GenerateBlocksResponse,
GetAlternateChainsRequest, GetAlternateChainsResponse, GetBansRequest, GetBansResponse,
GetBlockCountRequest, GetBlockCountResponse, GetBlockHeaderByHashRequest,
GetBlockHeaderByHashResponse, GetBlockHeaderByHeightRequest, GetBlockHeaderByHeightResponse,
GetBlockHeadersRangeRequest, GetBlockHeadersRangeResponse, GetBlockRequest, GetBlockResponse,
GetCoinbaseTxSumRequest, GetCoinbaseTxSumResponse, GetConnectionsRequest,
GetConnectionsResponse, GetFeeEstimateRequest, GetFeeEstimateResponse, GetInfoRequest,
GetInfoResponse, GetLastBlockHeaderRequest, GetLastBlockHeaderResponse, GetMinerDataRequest,
GetMinerDataResponse, GetOutputHistogramRequest, GetOutputHistogramResponse,
GetTransactionPoolBacklogRequest, GetTransactionPoolBacklogResponse, GetTxIdsLooseRequest,
GetTxIdsLooseResponse, GetVersionRequest, GetVersionResponse, HardForkInfoRequest,
HardForkInfoResponse, JsonRpcRequest, JsonRpcResponse, OnGetBlockHashRequest,
OnGetBlockHashResponse, PruneBlockchainRequest, PruneBlockchainResponse, RelayTxRequest,
RelayTxResponse, SetBansRequest, SetBansResponse, SubmitBlockRequest, SubmitBlockResponse,
SyncInfoRequest, SyncInfoResponse,
};
use crate::rpc::CupratedRpcHandler;
/// Map a [`JsonRpcRequest`] to the function that will lead to a [`JsonRpcResponse`].
pub(super) async fn map_request(
state: CupratedRpcHandler,
request: JsonRpcRequest,
) -> Result<JsonRpcResponse, Error> {
use JsonRpcRequest as Req;
use JsonRpcResponse as Resp;
Ok(match request {
Req::GetBlockCount(r) => Resp::GetBlockCount(get_block_count(state, r).await?),
Req::OnGetBlockHash(r) => Resp::OnGetBlockHash(on_get_block_hash(state, r).await?),
Req::SubmitBlock(r) => Resp::SubmitBlock(submit_block(state, r).await?),
Req::GenerateBlocks(r) => Resp::GenerateBlocks(generate_blocks(state, r).await?),
Req::GetLastBlockHeader(r) => {
Resp::GetLastBlockHeader(get_last_block_header(state, r).await?)
}
Req::GetBlockHeaderByHash(r) => {
Resp::GetBlockHeaderByHash(get_block_header_by_hash(state, r).await?)
}
Req::GetBlockHeaderByHeight(r) => {
Resp::GetBlockHeaderByHeight(get_block_header_by_height(state, r).await?)
}
Req::GetBlockHeadersRange(r) => {
Resp::GetBlockHeadersRange(get_block_headers_range(state, r).await?)
}
Req::GetBlock(r) => Resp::GetBlock(get_block(state, r).await?),
Req::GetConnections(r) => Resp::GetConnections(get_connections(state, r).await?),
Req::GetInfo(r) => Resp::GetInfo(get_info(state, r).await?),
Req::HardForkInfo(r) => Resp::HardForkInfo(hard_fork_info(state, r).await?),
Req::SetBans(r) => Resp::SetBans(set_bans(state, r).await?),
Req::GetBans(r) => Resp::GetBans(get_bans(state, r).await?),
Req::Banned(r) => Resp::Banned(banned(state, r).await?),
Req::FlushTransactionPool(r) => {
Resp::FlushTransactionPool(flush_transaction_pool(state, r).await?)
}
Req::GetOutputHistogram(r) => {
Resp::GetOutputHistogram(get_output_histogram(state, r).await?)
}
Req::GetCoinbaseTxSum(r) => Resp::GetCoinbaseTxSum(get_coinbase_tx_sum(state, r).await?),
Req::GetVersion(r) => Resp::GetVersion(get_version(state, r).await?),
Req::GetFeeEstimate(r) => Resp::GetFeeEstimate(get_fee_estimate(state, r).await?),
Req::GetAlternateChains(r) => {
Resp::GetAlternateChains(get_alternate_chains(state, r).await?)
}
Req::RelayTx(r) => Resp::RelayTx(relay_tx(state, r).await?),
Req::SyncInfo(r) => Resp::SyncInfo(sync_info(state, r).await?),
Req::GetTransactionPoolBacklog(r) => {
Resp::GetTransactionPoolBacklog(get_transaction_pool_backlog(state, r).await?)
}
Req::GetMinerData(r) => Resp::GetMinerData(get_miner_data(state, r).await?),
Req::PruneBlockchain(r) => Resp::PruneBlockchain(prune_blockchain(state, r).await?),
Req::CalcPow(r) => Resp::CalcPow(calc_pow(state, r).await?),
Req::FlushCache(r) => Resp::FlushCache(flush_cache(state, r).await?),
Req::AddAuxPow(r) => Resp::AddAuxPow(add_aux_pow(state, r).await?),
Req::GetTxIdsLoose(r) => Resp::GetTxIdsLoose(get_tx_ids_loose(state, r).await?),
})
}
async fn get_block_count(
state: CupratedRpcHandler,
request: GetBlockCountRequest,
) -> Result<GetBlockCountResponse, Error> {
todo!()
}
async fn on_get_block_hash(
state: CupratedRpcHandler,
request: OnGetBlockHashRequest,
) -> Result<OnGetBlockHashResponse, Error> {
todo!()
}
async fn submit_block(
state: CupratedRpcHandler,
request: SubmitBlockRequest,
) -> Result<SubmitBlockResponse, Error> {
todo!()
}
async fn generate_blocks(
state: CupratedRpcHandler,
request: GenerateBlocksRequest,
) -> Result<GenerateBlocksResponse, Error> {
todo!()
}
async fn get_last_block_header(
state: CupratedRpcHandler,
request: GetLastBlockHeaderRequest,
) -> Result<GetLastBlockHeaderResponse, Error> {
todo!()
}
async fn get_block_header_by_hash(
state: CupratedRpcHandler,
request: GetBlockHeaderByHashRequest,
) -> Result<GetBlockHeaderByHashResponse, Error> {
todo!()
}
async fn get_block_header_by_height(
state: CupratedRpcHandler,
request: GetBlockHeaderByHeightRequest,
) -> Result<GetBlockHeaderByHeightResponse, Error> {
todo!()
}
async fn get_block_headers_range(
state: CupratedRpcHandler,
request: GetBlockHeadersRangeRequest,
) -> Result<GetBlockHeadersRangeResponse, Error> {
todo!()
}
async fn get_block(
state: CupratedRpcHandler,
request: GetBlockRequest,
) -> Result<GetBlockResponse, Error> {
todo!()
}
async fn get_connections(
state: CupratedRpcHandler,
request: GetConnectionsRequest,
) -> Result<GetConnectionsResponse, Error> {
todo!()
}
async fn get_info(
state: CupratedRpcHandler,
request: GetInfoRequest,
) -> Result<GetInfoResponse, Error> {
todo!()
}
async fn hard_fork_info(
state: CupratedRpcHandler,
request: HardForkInfoRequest,
) -> Result<HardForkInfoResponse, Error> {
todo!()
}
async fn set_bans(
state: CupratedRpcHandler,
request: SetBansRequest,
) -> Result<SetBansResponse, Error> {
todo!()
}
async fn get_bans(
state: CupratedRpcHandler,
request: GetBansRequest,
) -> Result<GetBansResponse, Error> {
todo!()
}
async fn banned(
state: CupratedRpcHandler,
request: BannedRequest,
) -> Result<BannedResponse, Error> {
todo!()
}
async fn flush_transaction_pool(
state: CupratedRpcHandler,
request: FlushTransactionPoolRequest,
) -> Result<FlushTransactionPoolResponse, Error> {
todo!()
}
async fn get_output_histogram(
state: CupratedRpcHandler,
request: GetOutputHistogramRequest,
) -> Result<GetOutputHistogramResponse, Error> {
todo!()
}
async fn get_coinbase_tx_sum(
state: CupratedRpcHandler,
request: GetCoinbaseTxSumRequest,
) -> Result<GetCoinbaseTxSumResponse, Error> {
todo!()
}
async fn get_version(
state: CupratedRpcHandler,
request: GetVersionRequest,
) -> Result<GetVersionResponse, Error> {
todo!()
}
async fn get_fee_estimate(
state: CupratedRpcHandler,
request: GetFeeEstimateRequest,
) -> Result<GetFeeEstimateResponse, Error> {
todo!()
}
async fn get_alternate_chains(
state: CupratedRpcHandler,
request: GetAlternateChainsRequest,
) -> Result<GetAlternateChainsResponse, Error> {
todo!()
}
async fn relay_tx(
state: CupratedRpcHandler,
request: RelayTxRequest,
) -> Result<RelayTxResponse, Error> {
todo!()
}
async fn sync_info(
state: CupratedRpcHandler,
request: SyncInfoRequest,
) -> Result<SyncInfoResponse, Error> {
todo!()
}
async fn get_transaction_pool_backlog(
state: CupratedRpcHandler,
request: GetTransactionPoolBacklogRequest,
) -> Result<GetTransactionPoolBacklogResponse, Error> {
todo!()
}
async fn get_miner_data(
state: CupratedRpcHandler,
request: GetMinerDataRequest,
) -> Result<GetMinerDataResponse, Error> {
todo!()
}
async fn prune_blockchain(
state: CupratedRpcHandler,
request: PruneBlockchainRequest,
) -> Result<PruneBlockchainResponse, Error> {
todo!()
}
async fn calc_pow(
state: CupratedRpcHandler,
request: CalcPowRequest,
) -> Result<CalcPowResponse, Error> {
todo!()
}
async fn flush_cache(
state: CupratedRpcHandler,
request: FlushCacheRequest,
) -> Result<FlushCacheResponse, Error> {
todo!()
}
async fn add_aux_pow(
state: CupratedRpcHandler,
request: AddAuxPowRequest,
) -> Result<AddAuxPowResponse, Error> {
todo!()
}
async fn get_tx_ids_loose(
state: CupratedRpcHandler,
request: GetTxIdsLooseRequest,
) -> Result<GetTxIdsLooseResponse, Error> {
todo!()
}

View file

@ -0,0 +1,260 @@
use anyhow::Error;
use cuprate_rpc_types::other::{
GetAltBlocksHashesRequest, GetAltBlocksHashesResponse, GetHeightRequest, GetHeightResponse,
GetLimitRequest, GetLimitResponse, GetNetStatsRequest, GetNetStatsResponse, GetOutsRequest,
GetOutsResponse, GetPeerListRequest, GetPeerListResponse, GetPublicNodesRequest,
GetPublicNodesResponse, GetTransactionPoolHashesRequest, GetTransactionPoolHashesResponse,
GetTransactionPoolRequest, GetTransactionPoolResponse, GetTransactionPoolStatsRequest,
GetTransactionPoolStatsResponse, GetTransactionsRequest, GetTransactionsResponse,
InPeersRequest, InPeersResponse, IsKeyImageSpentRequest, IsKeyImageSpentResponse,
MiningStatusRequest, MiningStatusResponse, OtherRequest, OtherResponse, OutPeersRequest,
OutPeersResponse, PopBlocksRequest, PopBlocksResponse, SaveBcRequest, SaveBcResponse,
SendRawTransactionRequest, SendRawTransactionResponse, SetBootstrapDaemonRequest,
SetBootstrapDaemonResponse, SetLimitRequest, SetLimitResponse, SetLogCategoriesRequest,
SetLogCategoriesResponse, SetLogHashRateRequest, SetLogHashRateResponse, SetLogLevelRequest,
SetLogLevelResponse, StartMiningRequest, StartMiningResponse, StopDaemonRequest,
StopDaemonResponse, StopMiningRequest, StopMiningResponse, UpdateRequest, UpdateResponse,
};
use crate::rpc::CupratedRpcHandler;
/// Map an [`OtherRequest`] to the function that will lead to an [`OtherResponse`].
pub(super) async fn map_request(
state: CupratedRpcHandler,
request: OtherRequest,
) -> Result<OtherResponse, Error> {
use OtherRequest as Req;
use OtherResponse as Resp;
Ok(match request {
Req::GetHeight(r) => Resp::GetHeight(get_height(state, r).await?),
Req::GetTransactions(r) => Resp::GetTransactions(get_transactions(state, r).await?),
Req::GetAltBlocksHashes(r) => {
Resp::GetAltBlocksHashes(get_alt_blocks_hashes(state, r).await?)
}
Req::IsKeyImageSpent(r) => Resp::IsKeyImageSpent(is_key_image_spent(state, r).await?),
Req::SendRawTransaction(r) => {
Resp::SendRawTransaction(send_raw_transaction(state, r).await?)
}
Req::StartMining(r) => Resp::StartMining(start_mining(state, r).await?),
Req::StopMining(r) => Resp::StopMining(stop_mining(state, r).await?),
Req::MiningStatus(r) => Resp::MiningStatus(mining_status(state, r).await?),
Req::SaveBc(r) => Resp::SaveBc(save_bc(state, r).await?),
Req::GetPeerList(r) => Resp::GetPeerList(get_peer_list(state, r).await?),
Req::SetLogHashRate(r) => Resp::SetLogHashRate(set_log_hash_rate(state, r).await?),
Req::SetLogLevel(r) => Resp::SetLogLevel(set_log_level(state, r).await?),
Req::SetLogCategories(r) => Resp::SetLogCategories(set_log_categories(state, r).await?),
Req::SetBootstrapDaemon(r) => {
Resp::SetBootstrapDaemon(set_bootstrap_daemon(state, r).await?)
}
Req::GetTransactionPool(r) => {
Resp::GetTransactionPool(get_transaction_pool(state, r).await?)
}
Req::GetTransactionPoolStats(r) => {
Resp::GetTransactionPoolStats(get_transaction_pool_stats(state, r).await?)
}
Req::StopDaemon(r) => Resp::StopDaemon(stop_daemon(state, r).await?),
Req::GetLimit(r) => Resp::GetLimit(get_limit(state, r).await?),
Req::SetLimit(r) => Resp::SetLimit(set_limit(state, r).await?),
Req::OutPeers(r) => Resp::OutPeers(out_peers(state, r).await?),
Req::InPeers(r) => Resp::InPeers(in_peers(state, r).await?),
Req::GetNetStats(r) => Resp::GetNetStats(get_net_stats(state, r).await?),
Req::GetOuts(r) => Resp::GetOuts(get_outs(state, r).await?),
Req::Update(r) => Resp::Update(update(state, r).await?),
Req::PopBlocks(r) => Resp::PopBlocks(pop_blocks(state, r).await?),
Req::GetTransactionPoolHashes(r) => {
Resp::GetTransactionPoolHashes(get_transaction_pool_hashes(state, r).await?)
}
Req::GetPublicNodes(r) => Resp::GetPublicNodes(get_public_nodes(state, r).await?),
})
}
async fn get_height(
state: CupratedRpcHandler,
request: GetHeightRequest,
) -> Result<GetHeightResponse, Error> {
todo!()
}
async fn get_transactions(
state: CupratedRpcHandler,
request: GetTransactionsRequest,
) -> Result<GetTransactionsResponse, Error> {
todo!()
}
async fn get_alt_blocks_hashes(
state: CupratedRpcHandler,
request: GetAltBlocksHashesRequest,
) -> Result<GetAltBlocksHashesResponse, Error> {
todo!()
}
async fn is_key_image_spent(
state: CupratedRpcHandler,
request: IsKeyImageSpentRequest,
) -> Result<IsKeyImageSpentResponse, Error> {
todo!()
}
async fn send_raw_transaction(
state: CupratedRpcHandler,
request: SendRawTransactionRequest,
) -> Result<SendRawTransactionResponse, Error> {
todo!()
}
async fn start_mining(
state: CupratedRpcHandler,
request: StartMiningRequest,
) -> Result<StartMiningResponse, Error> {
todo!()
}
async fn stop_mining(
state: CupratedRpcHandler,
request: StopMiningRequest,
) -> Result<StopMiningResponse, Error> {
todo!()
}
async fn mining_status(
state: CupratedRpcHandler,
request: MiningStatusRequest,
) -> Result<MiningStatusResponse, Error> {
todo!()
}
async fn save_bc(
state: CupratedRpcHandler,
request: SaveBcRequest,
) -> Result<SaveBcResponse, Error> {
todo!()
}
async fn get_peer_list(
state: CupratedRpcHandler,
request: GetPeerListRequest,
) -> Result<GetPeerListResponse, Error> {
todo!()
}
async fn set_log_hash_rate(
state: CupratedRpcHandler,
request: SetLogHashRateRequest,
) -> Result<SetLogHashRateResponse, Error> {
todo!()
}
async fn set_log_level(
state: CupratedRpcHandler,
request: SetLogLevelRequest,
) -> Result<SetLogLevelResponse, Error> {
todo!()
}
async fn set_log_categories(
state: CupratedRpcHandler,
request: SetLogCategoriesRequest,
) -> Result<SetLogCategoriesResponse, Error> {
todo!()
}
async fn set_bootstrap_daemon(
state: CupratedRpcHandler,
request: SetBootstrapDaemonRequest,
) -> Result<SetBootstrapDaemonResponse, Error> {
todo!()
}
async fn get_transaction_pool(
state: CupratedRpcHandler,
request: GetTransactionPoolRequest,
) -> Result<GetTransactionPoolResponse, Error> {
todo!()
}
async fn get_transaction_pool_stats(
state: CupratedRpcHandler,
request: GetTransactionPoolStatsRequest,
) -> Result<GetTransactionPoolStatsResponse, Error> {
todo!()
}
async fn stop_daemon(
state: CupratedRpcHandler,
request: StopDaemonRequest,
) -> Result<StopDaemonResponse, Error> {
todo!()
}
async fn get_limit(
state: CupratedRpcHandler,
request: GetLimitRequest,
) -> Result<GetLimitResponse, Error> {
todo!()
}
async fn set_limit(
state: CupratedRpcHandler,
request: SetLimitRequest,
) -> Result<SetLimitResponse, Error> {
todo!()
}
async fn out_peers(
state: CupratedRpcHandler,
request: OutPeersRequest,
) -> Result<OutPeersResponse, Error> {
todo!()
}
async fn in_peers(
state: CupratedRpcHandler,
request: InPeersRequest,
) -> Result<InPeersResponse, Error> {
todo!()
}
async fn get_net_stats(
state: CupratedRpcHandler,
request: GetNetStatsRequest,
) -> Result<GetNetStatsResponse, Error> {
todo!()
}
async fn get_outs(
state: CupratedRpcHandler,
request: GetOutsRequest,
) -> Result<GetOutsResponse, Error> {
todo!()
}
async fn update(
state: CupratedRpcHandler,
request: UpdateRequest,
) -> Result<UpdateResponse, Error> {
todo!()
}
async fn pop_blocks(
state: CupratedRpcHandler,
request: PopBlocksRequest,
) -> Result<PopBlocksResponse, Error> {
todo!()
}
async fn get_transaction_pool_hashes(
state: CupratedRpcHandler,
request: GetTransactionPoolHashesRequest,
) -> Result<GetTransactionPoolHashesResponse, Error> {
todo!()
}
async fn get_public_nodes(
state: CupratedRpcHandler,
request: GetPublicNodesRequest,
) -> Result<GetPublicNodesResponse, Error> {
todo!()
}

View file

@ -0,0 +1,19 @@
//! Convenience functions for requests/responses.
//!
//! This module implements many methods for
//! [`CupratedRpcHandler`](crate::rpc::CupratedRpcHandler)
//! that are simple wrappers around the request/response API provided
//! by the multiple [`tower::Service`]s.
//!
//! These exist to prevent noise like `unreachable!()`
//! from being everywhere in the actual handler functions.
//!
//! Each module implements methods for a specific API, e.g.
//! the [`blockchain`] module contains methods for the
//! blockchain database [`tower::Service`] API; a usage sketch follows the module list below.
mod address_book;
mod blockchain;
mod blockchain_context;
mod blockchain_manager;
mod txpool;
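// Hedged sketch of the pattern these modules provide (handler name and field
// are assumptions): the RPC handler awaits a thin typed wrapper instead of
// matching on a response enum inline.
//
// async fn connection_info(state: &mut SomeHandlerState) -> Result<(u64, u64), Error> {
//     address_book::connection_count(&mut state.address_book).await
// }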

View file

@ -0,0 +1,104 @@
//! Functions for [`AddressBookRequest`] and [`AddressBookResponse`].
use std::convert::Infallible;
use anyhow::Error;
use tower::ServiceExt;
use cuprate_helper::cast::usize_to_u64;
use cuprate_p2p_core::{
services::{AddressBookRequest, AddressBookResponse},
AddressBook, NetworkZone,
};
/// [`AddressBookRequest::PeerlistSize`]
pub(super) async fn peerlist_size<Z: NetworkZone>(
address_book: &mut impl AddressBook<Z>,
) -> Result<(u64, u64), Error> {
let AddressBookResponse::PeerlistSize { white, grey } = address_book
.ready()
.await
.expect("TODO")
.call(AddressBookRequest::PeerlistSize)
.await
.expect("TODO")
else {
unreachable!();
};
Ok((usize_to_u64(white), usize_to_u64(grey)))
}
/// [`AddressBookRequest::ConnectionCount`]
pub(super) async fn connection_count<Z: NetworkZone>(
address_book: &mut impl AddressBook<Z>,
) -> Result<(u64, u64), Error> {
let AddressBookResponse::ConnectionCount { incoming, outgoing } = address_book
.ready()
.await
.expect("TODO")
.call(AddressBookRequest::ConnectionCount)
.await
.expect("TODO")
else {
unreachable!();
};
Ok((usize_to_u64(incoming), usize_to_u64(outgoing)))
}
/// [`AddressBookRequest::SetBan`]
pub(super) async fn set_ban<Z: NetworkZone>(
address_book: &mut impl AddressBook<Z>,
peer: cuprate_p2p_core::ban::SetBan<Z::Addr>,
) -> Result<(), Error> {
let AddressBookResponse::Ok = address_book
.ready()
.await
.expect("TODO")
.call(AddressBookRequest::SetBan(peer))
.await
.expect("TODO")
else {
unreachable!();
};
Ok(())
}
/// [`AddressBookRequest::GetBan`]
pub(super) async fn get_ban<Z: NetworkZone>(
address_book: &mut impl AddressBook<Z>,
peer: Z::Addr,
) -> Result<Option<std::time::Instant>, Error> {
let AddressBookResponse::GetBan { unban_instant } = address_book
.ready()
.await
.expect("TODO")
.call(AddressBookRequest::GetBan(peer))
.await
.expect("TODO")
else {
unreachable!();
};
Ok(unban_instant)
}
/// [`AddressBookRequest::GetBans`]
pub(super) async fn get_bans<Z: NetworkZone>(
address_book: &mut impl AddressBook<Z>,
) -> Result<(), Error> {
let AddressBookResponse::GetBans(bans) = address_book
.ready()
.await
.expect("TODO")
.call(AddressBookRequest::GetBans)
.await
.expect("TODO")
else {
unreachable!();
};
Ok(todo!())
}

View file

@ -0,0 +1,308 @@
//! Functions for [`BlockchainReadRequest`].
use std::{
collections::{HashMap, HashSet},
ops::Range,
};
use anyhow::Error;
use cuprate_blockchain::service::BlockchainReadHandle;
use tower::{Service, ServiceExt};
use cuprate_helper::cast::{u64_to_usize, usize_to_u64};
use cuprate_types::{
blockchain::{BlockchainReadRequest, BlockchainResponse},
Chain, CoinbaseTxSum, ExtendedBlockHeader, MinerData, OutputHistogramEntry,
OutputHistogramInput, OutputOnChain,
};
/// [`BlockchainReadRequest::BlockExtendedHeader`].
pub(super) async fn block_extended_header(
mut blockchain_read: BlockchainReadHandle,
height: u64,
) -> Result<ExtendedBlockHeader, Error> {
let BlockchainResponse::BlockExtendedHeader(header) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::BlockExtendedHeader(u64_to_usize(
height,
)))
.await?
else {
unreachable!();
};
Ok(header)
}
/// [`BlockchainReadRequest::BlockHash`].
pub(super) async fn block_hash(
mut blockchain_read: BlockchainReadHandle,
height: u64,
chain: Chain,
) -> Result<[u8; 32], Error> {
let BlockchainResponse::BlockHash(hash) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::BlockHash(
u64_to_usize(height),
chain,
))
.await?
else {
unreachable!();
};
Ok(hash)
}
/// [`BlockchainReadRequest::FindBlock`].
pub(super) async fn find_block(
mut blockchain_read: BlockchainReadHandle,
block_hash: [u8; 32],
) -> Result<Option<(Chain, usize)>, Error> {
let BlockchainResponse::FindBlock(option) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::FindBlock(block_hash))
.await?
else {
unreachable!();
};
Ok(option)
}
/// [`BlockchainReadRequest::FilterUnknownHashes`].
pub(super) async fn filter_unknown_hashes(
mut blockchain_read: BlockchainReadHandle,
block_hashes: HashSet<[u8; 32]>,
) -> Result<HashSet<[u8; 32]>, Error> {
let BlockchainResponse::FilterUnknownHashes(output) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::FilterUnknownHashes(block_hashes))
.await?
else {
unreachable!();
};
Ok(output)
}
/// [`BlockchainReadRequest::BlockExtendedHeaderInRange`]
pub(super) async fn block_extended_header_in_range(
mut blockchain_read: BlockchainReadHandle,
range: Range<usize>,
chain: Chain,
) -> Result<Vec<ExtendedBlockHeader>, Error> {
let BlockchainResponse::BlockExtendedHeaderInRange(output) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::BlockExtendedHeaderInRange(
range, chain,
))
.await?
else {
unreachable!();
};
Ok(output)
}
/// [`BlockchainReadRequest::ChainHeight`].
pub(super) async fn chain_height(
mut blockchain_read: BlockchainReadHandle,
) -> Result<(u64, [u8; 32]), Error> {
let BlockchainResponse::ChainHeight(height, hash) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::ChainHeight)
.await?
else {
unreachable!();
};
Ok((usize_to_u64(height), hash))
}
/// [`BlockchainReadRequest::GeneratedCoins`].
pub(super) async fn generated_coins(
mut blockchain_read: BlockchainReadHandle,
block_height: u64,
) -> Result<u64, Error> {
let BlockchainResponse::GeneratedCoins(generated_coins) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::GeneratedCoins(u64_to_usize(
block_height,
)))
.await?
else {
unreachable!();
};
Ok(generated_coins)
}
/// [`BlockchainReadRequest::Outputs`]
pub(super) async fn outputs(
mut blockchain_read: BlockchainReadHandle,
outputs: HashMap<u64, HashSet<u64>>,
) -> Result<HashMap<u64, HashMap<u64, OutputOnChain>>, Error> {
let BlockchainResponse::Outputs(outputs) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::Outputs(outputs))
.await?
else {
unreachable!();
};
Ok(outputs)
}
/// [`BlockchainReadRequest::NumberOutputsWithAmount`]
pub(super) async fn number_outputs_with_amount(
mut blockchain_read: BlockchainReadHandle,
output_amounts: Vec<u64>,
) -> Result<HashMap<u64, usize>, Error> {
let BlockchainResponse::NumberOutputsWithAmount(map) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::NumberOutputsWithAmount(
output_amounts,
))
.await?
else {
unreachable!();
};
Ok(map)
}
/// [`BlockchainReadRequest::KeyImagesSpent`]
pub(super) async fn key_images_spent(
mut blockchain_read: BlockchainReadHandle,
key_images: HashSet<[u8; 32]>,
) -> Result<bool, Error> {
let BlockchainResponse::KeyImagesSpent(is_spent) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::KeyImagesSpent(key_images))
.await?
else {
unreachable!();
};
Ok(is_spent)
}
/// [`BlockchainReadRequest::CompactChainHistory`]
pub(super) async fn compact_chain_history(
mut blockchain_read: BlockchainReadHandle,
) -> Result<(Vec<[u8; 32]>, u128), Error> {
let BlockchainResponse::CompactChainHistory {
block_ids,
cumulative_difficulty,
} = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::CompactChainHistory)
.await?
else {
unreachable!();
};
Ok((block_ids, cumulative_difficulty))
}
/// [`BlockchainReadRequest::FindFirstUnknown`]
pub(super) async fn find_first_unknown(
mut blockchain_read: BlockchainReadHandle,
hashes: Vec<[u8; 32]>,
) -> Result<Option<(usize, u64)>, Error> {
let BlockchainResponse::FindFirstUnknown(resp) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::FindFirstUnknown(hashes))
.await?
else {
unreachable!();
};
Ok(resp.map(|(index, height)| (index, usize_to_u64(height))))
}
/// [`BlockchainReadRequest::TotalTxCount`]
pub(super) async fn total_tx_count(
mut blockchain_read: BlockchainReadHandle,
) -> Result<u64, Error> {
let BlockchainResponse::TotalTxCount(tx_count) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::TotalTxCount)
.await?
else {
unreachable!();
};
Ok(usize_to_u64(tx_count))
}
/// [`BlockchainReadRequest::DatabaseSize`]
pub(super) async fn database_size(
mut blockchain_read: BlockchainReadHandle,
) -> Result<(u64, u64), Error> {
let BlockchainResponse::DatabaseSize {
database_size,
free_space,
} = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::DatabaseSize)
.await?
else {
unreachable!();
};
Ok((database_size, free_space))
}
/// [`BlockchainReadRequest::OutputHistogram`]
pub(super) async fn output_histogram(
mut blockchain_read: BlockchainReadHandle,
input: OutputHistogramInput,
) -> Result<Vec<OutputHistogramEntry>, Error> {
let BlockchainResponse::OutputHistogram(histogram) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::OutputHistogram(input))
.await?
else {
unreachable!();
};
Ok(histogram)
}
/// [`BlockchainReadRequest::CoinbaseTxSum`]
pub(super) async fn coinbase_tx_sum(
mut blockchain_read: BlockchainReadHandle,
height: u64,
count: u64,
) -> Result<CoinbaseTxSum, Error> {
let BlockchainResponse::CoinbaseTxSum(sum) = blockchain_read
.ready()
.await?
.call(BlockchainReadRequest::CoinbaseTxSum {
height: u64_to_usize(height),
count,
})
.await?
else {
unreachable!();
};
Ok(sum)
}

View file

@ -0,0 +1,69 @@
//! Functions for [`BlockChainContextRequest`] and [`BlockChainContextResponse`].
use std::convert::Infallible;
use anyhow::Error;
use tower::{Service, ServiceExt};
use cuprate_consensus::context::{
BlockChainContext, BlockChainContextRequest, BlockChainContextResponse,
BlockChainContextService,
};
use cuprate_types::{FeeEstimate, HardFork, HardForkInfo};
/// [`BlockChainContextRequest::Context`].
pub(super) async fn context(
service: &mut BlockChainContextService,
height: u64,
) -> Result<BlockChainContext, Error> {
let BlockChainContextResponse::Context(context) = service
.ready()
.await
.expect("TODO")
.call(BlockChainContextRequest::Context)
.await
.expect("TODO")
else {
unreachable!();
};
Ok(context)
}
/// [`BlockChainContextRequest::HardForkInfo`].
pub(super) async fn hard_fork_info(
service: &mut BlockChainContextService,
hard_fork: HardFork,
) -> Result<HardForkInfo, Error> {
let BlockChainContextResponse::HardForkInfo(hf_info) = service
.ready()
.await
.expect("TODO")
.call(BlockChainContextRequest::HardForkInfo(hard_fork))
.await
.expect("TODO")
else {
unreachable!();
};
Ok(hf_info)
}
/// [`BlockChainContextRequest::FeeEstimate`].
pub(super) async fn fee_estimate(
service: &mut BlockChainContextService,
grace_blocks: u64,
) -> Result<FeeEstimate, Error> {
let BlockChainContextResponse::FeeEstimate(fee) = service
.ready()
.await
.expect("TODO")
.call(BlockChainContextRequest::FeeEstimate { grace_blocks })
.await
.expect("TODO")
else {
unreachable!();
};
Ok(fee)
}

View file

@ -0,0 +1,141 @@
//! Functions for [`BlockchainManagerRequest`] & [`BlockchainManagerResponse`].
use anyhow::Error;
use monero_serai::block::Block;
use tower::{Service, ServiceExt};
use cuprate_helper::cast::{u64_to_usize, usize_to_u64};
use crate::rpc::handler::{
BlockchainManagerHandle, BlockchainManagerRequest, BlockchainManagerResponse,
};
/// [`BlockchainManagerRequest::PopBlocks`]
pub(super) async fn pop_blocks(
blockchain_manager: &mut BlockchainManagerHandle,
amount: u64,
) -> Result<u64, Error> {
let BlockchainManagerResponse::PopBlocks { new_height } = blockchain_manager
.ready()
.await?
.call(BlockchainManagerRequest::PopBlocks {
amount: u64_to_usize(amount),
})
.await?
else {
unreachable!();
};
Ok(usize_to_u64(new_height))
}
/// [`BlockchainManagerRequest::Prune`]
pub(super) async fn prune(blockchain_manager: &mut BlockchainManagerHandle) -> Result<(), Error> {
let BlockchainManagerResponse::Ok = blockchain_manager
.ready()
.await?
.call(BlockchainManagerRequest::Prune)
.await?
else {
unreachable!();
};
Ok(())
}
/// [`BlockchainManagerRequest::Pruned`]
pub(super) async fn pruned(
blockchain_manager: &mut BlockchainManagerHandle,
) -> Result<bool, Error> {
let BlockchainManagerResponse::Pruned(pruned) = blockchain_manager
.ready()
.await?
.call(BlockchainManagerRequest::Pruned)
.await?
else {
unreachable!();
};
Ok(pruned)
}
/// [`BlockchainManagerRequest::RelayBlock`]
pub(super) async fn relay_block(
blockchain_manager: &mut BlockchainManagerHandle,
block: Block,
) -> Result<(), Error> {
let BlockchainManagerResponse::Ok = blockchain_manager
.ready()
.await?
.call(BlockchainManagerRequest::RelayBlock(block))
.await?
else {
unreachable!();
};
Ok(())
}
/// [`BlockchainManagerRequest::Syncing`]
pub(super) async fn syncing(
blockchain_manager: &mut BlockchainManagerHandle,
) -> Result<bool, Error> {
let BlockchainManagerResponse::Syncing(syncing) = blockchain_manager
.ready()
.await?
.call(BlockchainManagerRequest::Syncing)
.await?
else {
unreachable!();
};
Ok(syncing)
}
/// [`BlockchainManagerRequest::Synced`]
pub(super) async fn synced(
blockchain_manager: &mut BlockchainManagerHandle,
) -> Result<bool, Error> {
let BlockchainManagerResponse::Synced(syncing) = blockchain_manager
.ready()
.await?
.call(BlockchainManagerRequest::Synced)
.await?
else {
unreachable!();
};
Ok(syncing)
}
/// [`BlockchainManagerRequest::Target`]
pub(super) async fn target(
blockchain_manager: &mut BlockchainManagerHandle,
) -> Result<std::time::Duration, Error> {
let BlockchainManagerResponse::Target(target) = blockchain_manager
.ready()
.await?
.call(BlockchainManagerRequest::Target)
.await?
else {
unreachable!();
};
Ok(target)
}
/// [`BlockchainManagerRequest::TargetHeight`]
pub(super) async fn target_height(
blockchain_manager: &mut BlockchainManagerHandle,
) -> Result<u64, Error> {
let BlockchainManagerResponse::TargetHeight { height } = blockchain_manager
.ready()
.await?
.call(BlockchainManagerRequest::TargetHeight)
.await?
else {
unreachable!();
};
Ok(usize_to_u64(height))
}

View file

@ -0,0 +1,57 @@
//! Functions for [`TxpoolReadRequest`].
use std::convert::Infallible;
use anyhow::Error;
use tower::{Service, ServiceExt};
use cuprate_helper::cast::usize_to_u64;
use cuprate_txpool::{
service::{
interface::{TxpoolReadRequest, TxpoolReadResponse},
TxpoolReadHandle,
},
TxEntry,
};
/// [`TxpoolReadRequest::Backlog`]
pub(super) async fn backlog(txpool_read: &mut TxpoolReadHandle) -> Result<Vec<TxEntry>, Error> {
let TxpoolReadResponse::Backlog(tx_entries) = txpool_read
.ready()
.await
.expect("TODO")
.call(TxpoolReadRequest::Backlog)
.await
.expect("TODO")
else {
unreachable!();
};
Ok(tx_entries)
}
/// [`TxpoolReadRequest::Size`]
pub(super) async fn size(txpool_read: &mut TxpoolReadHandle) -> Result<u64, Error> {
let TxpoolReadResponse::Size(size) = txpool_read
.ready()
.await
.expect("TODO")
.call(TxpoolReadRequest::Size)
.await
.expect("TODO")
else {
unreachable!();
};
Ok(usize_to_u64(size))
}
/// TODO
#[expect(clippy::needless_pass_by_ref_mut, reason = "TODO: remove after impl")]
pub(super) async fn flush(
txpool_read: &mut TxpoolReadHandle,
tx_hashes: Vec<[u8; 32]>,
) -> Result<(), Error> {
todo!();
Ok(())
}

View file

@ -0,0 +1,12 @@
//! Signals for Cuprate state used throughout the binary.
use tokio::sync::RwLock;
/// Reorg lock.
///
/// A [`RwLock`] where a write lock is taken during a reorg and a read lock can be taken
/// for any operation which must complete without a reorg happening.
///
/// Currently, the only operation that needs to take a read lock is adding txs to the tx-pool,
/// this can potentially be removed in the future, see: <https://github.com/Cuprate/cuprate/issues/305>
pub static REORG_LOCK: RwLock<()> = RwLock::const_new(());
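// Hedged usage sketch (not from the original file): an operation that must not
// overlap a reorg holds a read lock for its duration; a reorg's write lock
// waits for existing readers and excludes new ones until it is released.
async fn reorg_guarded_operation_sketch() {
    let _guard = REORG_LOCK.read().await;
    // ... e.g. add a transaction to the tx-pool while no reorg can start ...
}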

View file

@ -0,0 +1,53 @@
//! Global `static`s used throughout `cuprated`.
use std::{
sync::{atomic::AtomicU64, LazyLock},
time::{SystemTime, UNIX_EPOCH},
};
/// Define all the `static`s that should always be initialized early on.
///
/// This wraps all `static`s inside a `LazyLock` and generates
/// a [`init_lazylock_statics`] function that must/should be
/// used by `main()` early on.
macro_rules! define_init_lazylock_statics {
($(
$( #[$attr:meta] )*
$name:ident: $t:ty = $init_fn:expr;
)*) => {
/// Initialize global static `LazyLock` data.
pub fn init_lazylock_statics() {
$(
LazyLock::force(&$name);
)*
}
$(
$(#[$attr])*
pub static $name: LazyLock<$t> = LazyLock::new(|| $init_fn);
)*
};
}
define_init_lazylock_statics! {
/// The start time of `cuprated`.
START_INSTANT: SystemTime = SystemTime::now();
/// Start time of `cuprated` as a UNIX timestamp.
START_INSTANT_UNIX: u64 = START_INSTANT
.duration_since(UNIX_EPOCH)
.expect("Failed to set `cuprated` startup time.")
.as_secs();
}
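// Hedged sketch of (roughly) what the invocation above expands to:
//
// /// Initialize global static `LazyLock` data.
// pub fn init_lazylock_statics() {
//     LazyLock::force(&START_INSTANT);
//     LazyLock::force(&START_INSTANT_UNIX);
// }
//
// pub static START_INSTANT: LazyLock<SystemTime> = LazyLock::new(|| SystemTime::now());
// pub static START_INSTANT_UNIX: LazyLock<u64> = LazyLock::new(|| /* ... */);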
#[cfg(test)]
mod test {
use super::*;
/// Sanity check for startup UNIX time.
#[test]
fn start_instant_unix() {
// Fri Sep 27 01:07:13 AM UTC 2024
assert!(*START_INSTANT_UNIX > 1727399233);
}
}

View file

@ -0,0 +1,3 @@
//! Transaction Pool
//!
//! Will handle initiating the tx-pool, providing the preprocessor required for the dandelion pool.

View file

@ -27,11 +27,37 @@
---
- [⚪️ Storage](storage/intro.md)
- [⚪️ Database abstraction](storage/database-abstraction.md)
- [⚪️ Blockchain](storage/blockchain.md)
- [⚪️ Transaction pool](storage/transaction-pool.md)
- [⚪️ Pruning](storage/pruning.md)
- [🟢 Storage](storage/intro.md)
- [🟢 Database abstraction](storage/db/intro.md)
- [🟢 Abstraction](storage/db/abstraction/intro.md)
- [🟢 Backend](storage/db/abstraction/backend.md)
- [🟢 ConcreteEnv](storage/db/abstraction/concrete_env.md)
- [🟢 Trait](storage/db/abstraction/trait.md)
- [🟢 Syncing](storage/db/syncing.md)
- [🟢 Resizing](storage/db/resizing.md)
- [🟢 (De)serialization](storage/db/serde.md)
- [🟢 Known issues and tradeoffs](storage/db/issues/intro.md)
- [🟢 Abstracting backends](storage/db/issues/traits.md)
- [🟢 Hot-swap](storage/db/issues/hot-swap.md)
- [🟢 Unaligned bytes](storage/db/issues/unaligned.md)
- [🟢 Endianness](storage/db/issues/endian.md)
- [🟢 Multimap](storage/db/issues/multimap.md)
- [🟢 Common behavior](storage/common/intro.md)
- [🟢 Types](storage/common/types.md)
- [🟢 `ops`](storage/common/ops.md)
- [🟢 `tower::Service`](storage/common/service/intro.md)
- [🟢 Initialization](storage/common/service/initialization.md)
- [🟢 Requests](storage/common/service/requests.md)
- [🟢 Responses](storage/common/service/responses.md)
- [🟢 Resizing](storage/common/service/resizing.md)
- [🟢 Thread model](storage/common/service/thread-model.md)
- [🟢 Shutdown](storage/common/service/shutdown.md)
- [🟢 Blockchain](storage/blockchain/intro.md)
- [🟢 Schema](storage/blockchain/schema/intro.md)
- [🟢 Tables](storage/blockchain/schema/tables.md)
- [🟢 Multimap tables](storage/blockchain/schema/multimap.md)
- [⚪️ Transaction pool](storage/txpool/intro.md)
- [⚪️ Pruning](storage/pruning/intro.md)
---
@ -93,17 +119,20 @@
---
- [⚪️ Resource model](resource-model/intro.md)
- [⚪️ File system](resource-model/file-system.md)
- [⚪️ Sockets](resource-model/sockets.md)
- [⚪️ Memory](resource-model/memory.md)
- [🟡 Concurrency and parallelism](resource-model/concurrency-and-parallelism/intro.md)
- [⚪️ Map](resource-model/concurrency-and-parallelism/map.md)
- [⚪️ The RPC server](resource-model/concurrency-and-parallelism/the-rpc-server.md)
- [⚪️ The database](resource-model/concurrency-and-parallelism/the-database.md)
- [⚪️ The block downloader](resource-model/concurrency-and-parallelism/the-block-downloader.md)
- [⚪️ The verifier](resource-model/concurrency-and-parallelism/the-verifier.md)
- [⚪️ Thread exit](resource-model/concurrency-and-parallelism/thread-exit.md)
- [⚪️ Resources](resources/intro.md)
- [⚪️ File system](resources/fs/intro.md)
- [🟡 Index of PATHs](resources/fs/paths.md)
- [⚪️ Sockets](resources/sockets/index.md)
- [🔴 Index of ports](resources/sockets/ports.md)
- [⚪️ Memory](resources/memory.md)
- [🟡 Concurrency and parallelism](resources/cap/intro.md)
- [⚪️ Map](resources/cap/map.md)
- [⚪️ The RPC server](resources/cap/the-rpc-server.md)
- [⚪️ The database](resources/cap/the-database.md)
- [⚪️ The block downloader](resources/cap/the-block-downloader.md)
- [⚪️ The verifier](resources/cap/the-verifier.md)
- [⚪️ Thread exit](resources/cap/thread-exit.md)
- [🔴 Index of threads](resources/cap/threads.md)
---

View file

@ -55,6 +55,7 @@ cargo doc --open --package cuprate-blockchain
## 1-off crates
| Crate | In-tree path | Purpose |
|-------|--------------|---------|
| [`cuprate-constants`](https://doc.cuprate.org/cuprate_constants) | [`constants/`](https://github.com/Cuprate/cuprate/tree/main/constants) | Shared `const/static` data across Cuprate
| [`cuprate-cryptonight`](https://doc.cuprate.org/cuprate_cryptonight) | [`cryptonight/`](https://github.com/Cuprate/cuprate/tree/main/cryptonight) | CryptoNight hash functions
| [`cuprate-pruning`](https://doc.cuprate.org/cuprate_pruning) | [`pruning/`](https://github.com/Cuprate/cuprate/tree/main/pruning) | Monero pruning logic/types
| [`cuprate-helper`](https://doc.cuprate.org/cuprate_helper) | [`helper/`](https://github.com/Cuprate/cuprate/tree/main/helper) | Kitchen-sink helper crate for Cuprate

View file

@ -1 +0,0 @@
# ⚪️ Resource model

View file

@ -1 +0,0 @@
# ⚪️ Sockets

View file

@ -0,0 +1,2 @@
# Index of threads
This is an index of all of the system threads Cuprate actively uses.

View file

@ -0,0 +1,87 @@
# Index of PATHs
This is an index of all of the filesystem PATHs Cuprate actively uses.
The [`cuprate_helper::fs`](https://doc.cuprate.org/cuprate_helper/fs/index.html)
module defines the general locations used throughout Cuprate.
[`dirs`](https://docs.rs/dirs) is used internally, which follows
the PATH standards/conventions on each OS Cuprate supports, i.e.:
- the [XDG base directory](https://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html) and the [XDG user directory](https://www.freedesktop.org/wiki/Software/xdg-user-dirs/) specifications on Linux
- the [Known Folder](https://msdn.microsoft.com/en-us/library/windows/desktop/bb776911(v=vs.85).aspx) system on Windows
- the [Standard Directories](https://developer.apple.com/library/content/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/FileSystemOverview/FileSystemOverview.html#//apple_ref/doc/uid/TP40010672-CH2-SW6) on macOS
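As a sketch of where these base directories come from (using [`dirs`](https://docs.rs/dirs) directly; `cuprate_helper::fs` wraps this and, roughly, appends a `Cuprate`/`cuprate` component as shown in the tables below):
```rust
fn main() {
    // e.g. `/home/alice/.cache`, `C:\Users\Alice\AppData\Local`, ...
    let cache = dirs::cache_dir().expect("unsupported OS");
    // e.g. `/home/alice/.config`, `C:\Users\Alice\AppData\Roaming`, ...
    let config = dirs::config_dir().expect("unsupported OS");
    // e.g. `/home/alice/.local/share`, ...
    let data = dirs::data_dir().expect("unsupported OS");

    // Cuprate's directories are these plus a Cuprate-specific component.
    println!("{}", cache.join("cuprate").display());
    println!("{} {}", config.display(), data.display());
}
```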
## Cache
Cuprate's cache directory.
| OS | PATH |
|---------|-----------------------------------------|
| Windows | `C:\Users\Alice\AppData\Local\Cuprate\` |
| macOS | `/Users/Alice/Library/Caches/Cuprate/` |
| Linux | `/home/alice/.cache/cuprate/` |
## Config
Cuprate's config directory.
| OS | PATH |
|---------|-----------------------------------------------------|
| Windows | `C:\Users\Alice\AppData\Roaming\Cuprate\` |
| macOS | `/Users/Alice/Library/Application Support/Cuprate/` |
| Linux | `/home/alice/.config/cuprate/` |
## Data
Cuprate's data directory.
| OS | PATH |
|---------|-----------------------------------------------------|
| Windows | `C:\Users\Alice\AppData\Roaming\Cuprate\` |
| macOS | `/Users/Alice/Library/Application Support/Cuprate/` |
| Linux | `/home/alice/.local/share/cuprate/` |
## Blockchain
Cuprate's blockchain directory.
| OS | PATH |
|---------|----------------------------------------------------------------|
| Windows | `C:\Users\Alice\AppData\Roaming\Cuprate\blockchain\` |
| macOS | `/Users/Alice/Library/Application Support/Cuprate/blockchain/` |
| Linux | `/home/alice/.local/share/cuprate/blockchain/` |
## Transaction pool
Cuprate's transaction pool directory.
| OS | PATH |
|---------|------------------------------------------------------------|
| Windows | `C:\Users\Alice\AppData\Roaming\Cuprate\txpool\` |
| macOS | `/Users/Alice/Library/Application Support/Cuprate/txpool/` |
| Linux | `/home/alice/.local/share/cuprate/txpool/` |
## Database
Cuprate's database location/filenames depend on:
- Which database it is
- Which backend is being used
---
`cuprate_blockchain` files are in the above mentioned `blockchain` folder.
`cuprate_txpool` files are in the above mentioned `txpool` folder.
---
If the `heed` backend is being used, these files will be created:
| Filename | Purpose |
|------------|--------------------|
| `data.mdb` | Main data file |
| `lock.mdb` | Database lock file |
For example: `/home/alice/.local/share/cuprate/blockchain/lock.mdb`.
If the `redb` backend is being used, these files will be created:
| Filename | Purpose |
|-------------|--------------------|
| `data.redb` | Main data file |
For example: `/home/alice/.local/share/cuprate/txpool/data.redb`.

View file

@ -0,0 +1 @@
# Resources

View file

@ -0,0 +1 @@
# Sockets

View file

@ -0,0 +1,2 @@
# Index of ports
This is an index of all of the network sockets Cuprate actively uses.

View file

@ -1 +0,0 @@
# ⚪️ Blockchain

View file

@ -0,0 +1,3 @@
# Blockchain
This section contains storage information specific to [`cuprate_blockchain`](https://doc.cuprate.org/cuprate_blockchain),
the database built on-top of [`cuprate_database`](https://doc.cuprate.org/cuprate_database) that stores the blockchain.

View file

@ -0,0 +1,2 @@
# Schema
This section contains the schema of `cuprate_blockchain`'s database tables.

View file

@ -0,0 +1,45 @@
# Multimap tables
## Outputs
When referencing outputs, Monero will [use the amount and the amount index](https://github.com/monero-project/monero/blob/c8214782fb2a769c57382a999eaf099691c836e7/src/blockchain_db/lmdb/db_lmdb.cpp#L3447-L3449). This means 2 keys are needed to reach an output.
With LMDB you can set the `DUP_SORT` flag on a table and then set the key/value to:
```rust
Key = KEY_PART_1
```
```rust
Value = {
KEY_PART_2,
VALUE // The actual value we are storing.
}
```
Then you can set a custom value sorting function that only takes `KEY_PART_2` into account; this is how `monerod` does it.
This requires that the underlying database supports:
- multimap tables
- custom sort functions on values
- setting a cursor on a specific key/value
## How `cuprate_blockchain` does it
Another way to implement this is as follows:
```rust
Key = { KEY_PART_1, KEY_PART_2 }
```
```rust
Value = VALUE
```
Then the key type is simply used to look up the value; this is how `cuprate_blockchain` does it
as [`cuprate_database` does not have a multimap abstraction (yet)](../../db/issues/multimap.md).
For example, the key/value pair for outputs is:
```rust
PreRctOutputId => Output
```
where `PreRctOutputId` looks like this:
```rust
struct PreRctOutputId {
amount: u64,
amount_index: u64,
}
```
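Note that this composite-key approach still allows `monerod`-style two-part lookups via ordered range scans. A self-contained sketch, using `std::collections::BTreeMap` as a stand-in for the database (not Cuprate's actual API):
```rust
use std::collections::BTreeMap;

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct PreRctOutputId {
    amount: u64,
    amount_index: u64,
}

fn main() {
    // Stand-in for the `Outputs` table.
    let mut outputs = BTreeMap::new();
    outputs.insert(PreRctOutputId { amount: 1, amount_index: 0 }, "output A");
    outputs.insert(PreRctOutputId { amount: 1, amount_index: 1 }, "output B");
    outputs.insert(PreRctOutputId { amount: 2, amount_index: 0 }, "output C");

    // Both key parts known: a single exact lookup.
    let exact = outputs.get(&PreRctOutputId { amount: 1, amount_index: 1 });
    assert_eq!(exact, Some(&"output B"));

    // Only the amount known: a range scan over the composite key
    // recovers the "multimap" behavior.
    let lo = PreRctOutputId { amount: 1, amount_index: 0 };
    let hi = PreRctOutputId { amount: 1, amount_index: u64::MAX };
    assert_eq!(outputs.range(lo..=hi).count(), 2);
}
```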

View file

@ -0,0 +1,39 @@
# Tables
> See also: <https://doc.cuprate.org/cuprate_blockchain/tables> & <https://doc.cuprate.org/cuprate_blockchain/types>.
The `CamelCase` names of the table headers documented here (e.g. `TxIds`) are the actual type name of the table within `cuprate_blockchain`.
Note that a word written within `code blocks` is a real type defined and usable within `cuprate_blockchain`; other standard types like u64 and type aliases (TxId) are written normally.
Within `cuprate_blockchain::tables`, the below table is essentially defined as-is with [a macro](https://github.com/Cuprate/cuprate/blob/31ce89412aa174fc33754f22c9a6d9ef5ddeda28/database/src/tables.rs#L369-L470).
Many of the stored data types share the same underlying type but differ semantically; as such, a map of the aliases used and their real data types is also provided below.
| Alias | Real Type |
|----------------------------------------------------|-----------|
| BlockHeight, Amount, AmountIndex, TxId, UnlockTime | u64
| BlockHash, KeyImage, TxHash, PrunableHash | [u8; 32]
---
| Table | Key | Value | Description |
|--------------------|----------------------|-------------------------|-------------|
| `BlockHeaderBlobs` | BlockHeight | `StorableVec<u8>` | Maps a block's height to a serialized byte form of its header
| `BlockTxsHashes` | BlockHeight | `StorableVec<[u8; 32]>` | Maps a block's height to the block's transaction hashes
| `BlockHeights` | BlockHash | BlockHeight | Maps a block's hash to its height
| `BlockInfos` | BlockHeight | `BlockInfo` | Contains metadata of all blocks
| `KeyImages` | KeyImage | () | This table is a set with no value, it stores transaction key images
| `NumOutputs` | Amount | u64 | Maps an output's amount to the number of outputs with that amount
| `Outputs` | `PreRctOutputId` | `Output` | This table contains legacy CryptoNote outputs which have clear amounts. This table will not contain an output with 0 amount.
| `PrunedTxBlobs` | TxId | `StorableVec<u8>` | Contains pruned transaction blobs (even if the database is not pruned)
| `PrunableTxBlobs` | TxId | `StorableVec<u8>` | Contains the prunable part of a transaction
| `PrunableHashes` | TxId | PrunableHash | Contains the hash of the prunable part of a transaction
| `RctOutputs` | AmountIndex | `RctOutput` | Contains RingCT outputs mapped from their global RCT index
| `TxBlobs` | TxId | `StorableVec<u8>` | Serialized transaction blobs (bytes)
| `TxIds` | TxHash | TxId | Maps a transaction's hash to its index/ID
| `TxHeights` | TxId | BlockHeight | Maps a transaction's ID to the height of the block it comes from
| `TxOutputs` | TxId | `StorableVec<u64>` | Gives the amount indices of a transaction's outputs
| `TxUnlockTime` | TxId | UnlockTime | Stores the unlock time of a transaction (only if it has a non-zero lock time)
<!-- TODO(Boog900): We could split this table again into `RingCT (non-miner) Outputs` and `RingCT (miner) Outputs` as for miner outputs we can store the amount instead of commitment saving 24 bytes per miner output. -->

View file

@ -0,0 +1,9 @@
# Common behavior
The crates that build on-top of the database abstraction ([`cuprate_database`](https://doc.cuprate.org/cuprate_database))
share some common behavior including but not limited to:
- Defining their specific database tables and types
- Having an `ops` module
- Exposing a `tower::Service` API (backed by a threadpool) for public usage
This section provides more details on these behaviors.

View file

@ -0,0 +1,21 @@
# `ops`
Both [`cuprate_blockchain`](https://doc.cuprate.org/cuprate_blockchain)
and [`cuprate_txpool`](https://doc.cuprate.org/cuprate_txpool) expose an
`ops` module containing abstracted Monero-related database operations.
For example, [`cuprate_blockchain::ops::block::add_block`](https://doc.cuprate.org/cuprate_blockchain/ops/block/fn.add_block.html).
These functions build on-top of the database traits and allow for more abstracted database operations.
For example, instead of these signatures:
```rust
fn get(_: &Key) -> Value;
fn put(_: &Key, &Value);
```
the `ops` module provides much higher-level signatures, such as:
```rust
fn add_block(block: &Block) -> Result<_, _>;
```
Although these functions are exposed, they are not the main API; that is covered in the next section:
the [`tower::Service`](./service/intro.md) (which uses these functions).
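To make the pattern concrete, here is a sketch of what an `ops`-style function does internally (the `Block` and `Tables` types below are simplified stand-ins, not Cuprate's real signatures):
```rust
use std::collections::HashMap;

// Simplified stand-ins; the real functions take `cuprate_database`
// transaction/table handles and return `Result`s.
struct Block {
    height: u64,
    hash: [u8; 32],
    blob: Vec<u8>,
}

#[derive(Default)]
struct Tables {
    block_heights: HashMap<[u8; 32], u64>,
    block_blobs: HashMap<u64, Vec<u8>>,
}

/// One high-level, Monero-aware call...
fn add_block(tables: &mut Tables, block: &Block) {
    // ...composed of multiple low-level `put`-style operations.
    tables.block_heights.insert(block.hash, block.height);
    tables.block_blobs.insert(block.height, block.blob.clone());
}

fn main() {
    let mut tables = Tables::default();
    let block = Block { height: 0, hash: [0; 32], blob: vec![1, 2, 3] };
    add_block(&mut tables, &block);
    assert_eq!(tables.block_heights.get(&block.hash), Some(&0));
}
```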

View file

@ -0,0 +1,9 @@
# Initialization
A database service is started simply by calling: [`init()`](https://doc.cuprate.org/cuprate_blockchain/service/fn.init.html).
This function initializes the database, spawns threads, and returns a:
- Read handle to the database
- Write handle to the database
- The database itself
These handles implement the `tower::Service` trait, which allows sending requests and receiving responses `async`hronously.
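For illustration, usage looks roughly like this (a hedged sketch; the exact `config` value and handle type names are in the `init()` documentation):
```rust,ignore
// Rough sketch, not the exact signature.
let (read_handle, write_handle, env) = cuprate_blockchain::service::init(config)?;

// The read handle is cheap to clone and can be sent to other threads/tasks.
let another_read_handle = read_handle.clone();
```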

View file

@ -0,0 +1,65 @@
# tower::Service
Both [`cuprate_blockchain`](https://doc.cuprate.org/cuprate_blockchain)
and [`cuprate_txpool`](https://doc.cuprate.org/cuprate_txpool) provide
`async` [`tower::Service`](https://docs.rs/tower)s that define database requests/responses.
This is the main API that other Cuprate crates use.
There are 2 `tower::Service`s:
1. A read service which is backed by a [`rayon::ThreadPool`](https://docs.rs/rayon)
1. A write service which spawns a single thread to handle write requests
As this behavior is the same across all users of [`cuprate_database`](https://doc.cuprate.org/cuprate_database),
it is extracted into its own crate: [`cuprate_database_service`](https://doc.cuprate.org/cuprate_database_service).
## Diagram
As a recap, here is how this looks to a user of a higher-level database crate,
`cuprate_blockchain` in this example. Starting from the lowest layer:
1. `cuprate_database` is used to abstract the database
1. `cuprate_blockchain` builds on-top of that with tables, types, operations
1. `cuprate_blockchain` exposes a `tower::Service` using `cuprate_database_service`
1. The user now interfaces with `cuprate_blockchain` with that `tower::Service` in a request/response fashion
```
┌──────────────────┐
│ cuprate_database │
└────────┬─────────┘
┌─────────────────────────────────┴─────────────────────────────────┐
│ cuprate_blockchain │
│ │
│ ┌──────────────────────┐ ┌─────────────────────────────────────┐ │
│ │ Tables, types │ │ ops │ │
│ │ ┌───────────┐┌─────┐ │ │ ┌─────────────┐ ┌──────────┐┌─────┐ │ │
│ │ │ BlockInfo ││ ... │ ├──┤ │ add_block() │ │ add_tx() ││ ... │ │ │
│ │ └───────────┘└─────┘ │ │ └─────────────┘ └──────────┘└─────┘ │ │
│ └──────────────────────┘ └─────┬───────────────────────────────┘ │
│ │ │
│ ┌─────────┴───────────────────────────────┐ │
│ │ tower::Service │ │
│ │ ┌──────────────────────────────┐┌─────┐ │ │
│ │ │ Blockchain{Read,Write}Handle ││ ... │ │ │
│ │ └──────────────────────────────┘└─────┘ │ │
│ └─────────┬───────────────────────────────┘ │
│ │ │
└─────────────────────────────────┼─────────────────────────────────┘
┌─────┴─────┐
┌────────────────────┴────┐ ┌────┴──────────────────────────────────┐
│ Database requests │ │ Database responses │
│ ┌─────────────────────┐ │ │ ┌───────────────────────────────────┐ │
│ │ FindBlock([u8; 32]) │ │ │ │ FindBlock(Option<(Chain, usize)>) │ │
│ └─────────────────────┘ │ │ └───────────────────────────────────┘ │
│ ┌─────────────────────┐ │ │ ┌───────────────────────────────────┐ │
│ │ ChainHeight │ │ │ │ ChainHeight(usize, [u8; 32]) │ │
│ └─────────────────────┘ │ │ └───────────────────────────────────┘ │
│ ┌─────────────────────┐ │ │ ┌───────────────────────────────────┐ │
│ │ ... │ │ │ │ ... │ │
│ └─────────────────────┘ │ │ └───────────────────────────────────┘ │
└─────────────────────────┘ └───────────────────────────────────────┘
▲ │
│ ▼
┌─────────────────────────┐
│ cuprate_blockchain user │
└─────────────────────────┘
```

View file

@ -0,0 +1,8 @@
# Requests
Along with the 2 handles, there are 2 types of requests:
- Read requests, e.g. [`BlockchainReadRequest`](https://doc.cuprate.org/cuprate_types/blockchain/enum.BlockchainReadRequest.html)
- Write requests, e.g. [`BlockchainWriteRequest`](https://doc.cuprate.org/cuprate_types/blockchain/enum.BlockchainWriteRequest.html)
Quite obviously:
- Read requests are for retrieving various data from the database
- Write requests are for writing data to the database

View file

@ -0,0 +1,15 @@
# Resizing
As noted in the [`cuprate_database` resizing section](../../db/resizing.md),
builders on-top of `cuprate_database` are responsible for resizing the database.
In `cuprate_{blockchain,txpool}`'s case, that means the `tower::Service` must know
how to resize. This logic is shared between both crates, defined in `cuprate_database_service`:
<https://github.com/Cuprate/cuprate/blob/0941f68efcd7dfe66124ad0c1934277f47da9090/storage/service/src/service/write.rs#L107-L171>.
By default, this uses an algorithm _similar_ to `monerod`'s:
- [If there's not enough space to fit a write request's data](https://github.com/Cuprate/cuprate/blob/0941f68efcd7dfe66124ad0c1934277f47da9090/storage/service/src/service/write.rs#L130), start a resize
- Each resize adds around [`1,073,745,920`](https://github.com/Cuprate/cuprate/blob/2ac90420c658663564a71b7ecb52d74f3c2c9d0f/database/src/resize.rs#L104-L160) bytes to the current map size
- A resize will be [attempted `3` times](https://github.com/Cuprate/cuprate/blob/0941f68efcd7dfe66124ad0c1934277f47da9090/storage/service/src/service/write.rs#L110) before failing
There are other [resizing algorithms](https://doc.cuprate.org/cuprate_database/resize/enum.ResizeAlgorithm.html) that define how the database's memory map grows, although currently the behavior of `monerod` is closely followed (for no particular reason).
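The gist of that loop, as a rough sketch (the `FakeEnv` type below is a hypothetical stand-in, not the real environment type):
```rust
struct ResizeNeeded;

// Hypothetical stand-in for a memory-mapped environment.
struct FakeEnv {
    map_size: usize,
    used: usize,
}

impl FakeEnv {
    fn try_write(&mut self, data: &[u8]) -> Result<(), ResizeNeeded> {
        if self.used + data.len() > self.map_size {
            return Err(ResizeNeeded); // not enough space to fit the write
        }
        self.used += data.len();
        Ok(())
    }

    fn grow(&mut self) {
        // Each resize adds ~1 GiB to the current map size.
        self.map_size += 1_073_745_920;
    }
}

fn write_with_resize(env: &mut FakeEnv, data: &[u8]) -> Result<(), &'static str> {
    // A resize is attempted 3 times before failing.
    for _ in 0..3 {
        match env.try_write(data) {
            Ok(()) => return Ok(()),
            Err(ResizeNeeded) => env.grow(),
        }
    }
    Err("failed after 3 resize attempts")
}

fn main() {
    let mut env = FakeEnv { map_size: 0, used: 0 };
    assert!(write_with_resize(&mut env, &[0; 64]).is_ok());
}
```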

View file

@ -0,0 +1,18 @@
# Responses
After sending a request using the read/write handle, the value returned is _not_ the response, but rather an `async`hronous channel that will eventually return the response:
```rust,ignore
// Send a request.
// tower::Service::call()
// V
let response_channel: Channel = read_handle.call(BlockchainReadRequest::ChainHeight)?;
// Await the response.
let response: BlockchainResponse = response_channel.await?;
```
After `await`ing the returned channel, a `Response` will eventually be returned when
the `Service` threadpool has fetched the value from the database and sent it off.
Both read and write request variants match in name with `Response` variants, i.e.
- `BlockchainReadRequest::ChainHeight` leads to `BlockchainResponse::ChainHeight`
- `BlockchainWriteRequest::WriteBlock` leads to `BlockchainResponse::WriteBlockOk`

View file

@ -0,0 +1,4 @@
# Shutdown
Once the read/write handles to the `tower::Service` are `Drop`ed, the backing thread(pool) will gracefully exit, automatically.
Note the writer thread and reader threadpool aren't connected whatsoever; dropping the write handle will make the writer thread exit, however, the reader handle is free to be held onto and can continue to be read from - and vice-versa for the write handle.

View file

@ -0,0 +1,23 @@
# Thread model
The base database abstractions themselves are not concerned with parallelism; they are mostly functions to be called from a single thread.
However, the `cuprate_database_service` API _does_ have a thread model backing it.
When a `Service`'s init() function is called, threads will be spawned and
maintained until the user drops (disconnects) the returned handles.
The current behavior for thread count is:
- [1 writer thread](https://github.com/Cuprate/cuprate/blob/0941f68efcd7dfe66124ad0c1934277f47da9090/storage/service/src/service/write.rs#L48-L52)
- [As many reader threads as there are system threads](https://github.com/Cuprate/cuprate/blob/0941f68efcd7dfe66124ad0c1934277f47da9090/storage/service/src/reader_threads.rs#L44-L49)
For example, on a system with 32-threads, `cuprate_database_service` will spawn:
- 1 writer thread
- 32 reader threads
whose sole responsibility is to listen for database requests, access the database (potentially in parallel), and return a response.
Note that the `1 system thread = 1 reader thread` model is only the default setting; the reader thread count can be configured by the user to be any number between `1 .. amount_of_system_threads`.
The reader threads are managed by [`rayon`](https://docs.rs/rayon).
For an example of where multiple reader threads are used: given a request that asks if any key-image within a set already exists, `cuprate_blockchain` will [split that work between the threads with `rayon`](https://github.com/Cuprate/cuprate/blob/0941f68efcd7dfe66124ad0c1934277f47da9090/storage/blockchain/src/service/read.rs#L400).
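As a rough illustration of the default pool sizing (a sketch using `rayon` directly; the real setup lives in the links above):
```rust
fn main() {
    // Default: one reader thread per system thread.
    let system_threads = std::thread::available_parallelism()
        .map(std::num::NonZeroUsize::get)
        .unwrap_or(1);

    let pool = rayon::ThreadPoolBuilder::new()
        .num_threads(system_threads)
        .build()
        .unwrap();

    // Work (e.g. a key-image existence check over a set) can then
    // be split across the pool's threads.
    let sum: u64 = pool.install(|| (0..1024_u64).sum());
    assert_eq!(sum, 523_776);
}
```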

View file

@ -0,0 +1,21 @@
# Types
## POD types
Since [all types in the database are POD types](../db/serde.md), we must often
provide mappings between outside types and the types actually stored in the database.
A common case is mapping infallible types to and from [`bitflags`](https://docs.rs/bitflags) and/or their raw integer representation.
For example, the [`OutputFlags`](https://doc.cuprate.org/cuprate_blockchain/types/struct.OutputFlags.html) type or `bool` types.
As types like `enum`s, `bool`s and `char`s cannot be cast from an integer infallibly,
`bytemuck::Pod` cannot be implemented on them safely. Thus, we store an infallible version
of them inside the database with a custom type and map them when fetching the data.
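For example, a pattern along these lines (a simplified sketch, not the real `OutputFlags` definition; the flag constant is hypothetical):
```rust
// What is actually stored in the database: a plain integer,
// for which *any* bit pattern is valid (hence infallibly castable).
#[derive(Clone, Copy)]
#[repr(transparent)]
struct StoredOutputFlags(u32);

// A hypothetical flag bit, for illustration only.
const NON_ZERO_UNLOCK_TIME: u32 = 0b0000_0001;

impl StoredOutputFlags {
    // The checked interpretation happens outside the database.
    fn contains(self, flag: u32) -> bool {
        self.0 & flag != 0
    }
}

fn main() {
    let raw: u32 = 0b0000_0001; // fetched from the database
    let flags = StoredOutputFlags(raw); // infallible
    assert!(flags.contains(NON_ZERO_UNLOCK_TIME));
}
```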
## Lean types
Another reason why database crates define their own types is
to cut any unneeded data from the type.
Many of the types used in normal operation (e.g. [`cuprate_types::VerifiedBlockInformation`](https://doc.cuprate.org/cuprate_types/struct.VerifiedBlockInformation.html)) contain lots of extra pre-processed data for convenience.
This would be a waste to store in the database, so in this example, the much leaner
"raw" [`BlockInfo`](https://doc.cuprate.org/cuprate_blockchain/types/struct.BlockInfo.html)
type is stored.

View file

@ -1 +0,0 @@
# ⚪️ Database abstraction

View file

@ -0,0 +1,50 @@
# Backend
First, we need an actual database implementation.
`cuprate-database`'s `trait`s allow abstracting over the actual database, such that any backend in particular could be used.
This page is an enumeration of all the backends Cuprate has, has tried, and may try in the future.
## `heed`
The default database used is [`heed`](https://github.com/meilisearch/heed) (LMDB). The upstream versions from [`crates.io`](https://crates.io/crates/heed) are used. `LMDB` should not need to be installed as `heed` has a build script that pulls it in automatically.
`heed`'s filenames inside Cuprate's data folder are:
| Filename | Purpose |
|------------|---------|
| `data.mdb` | Main data file
| `lock.mdb` | Database lock file
`heed`-specific notes:
- [There is a maximum reader limit](https://github.com/monero-project/monero/blob/059028a30a8ae9752338a7897329fe8012a310d5/src/blockchain_db/lmdb/db_lmdb.cpp#L1372). Other potential processes (e.g. `xmrblocks`) that are also reading the `data.mdb` file need to be accounted for
- [LMDB does not work on remote filesystems](https://github.com/LMDB/lmdb/blob/b8e54b4c31378932b69f1298972de54a565185b1/libraries/liblmdb/lmdb.h#L129)
## `redb`
The 2nd database backend is the 100% Rust [`redb`](https://github.com/cberner/redb).
The upstream versions from [`crates.io`](https://crates.io/crates/redb) are used.
`redb`'s filenames inside Cuprate's data folder are:
| Filename | Purpose |
|-------------|---------|
| `data.redb` | Main data file
<!-- TODO: document DB on remote filesystem (does redb allow this?) -->
## `redb-memory`
This backend is 100% the same as `redb`, although it uses [`redb::backend::InMemoryBackend`](https://docs.rs/redb/2.1.2/redb/backends/struct.InMemoryBackend.html), a database that completely resides in memory instead of a file.
All other details about this should be the same as the normal `redb` backend.
## `sanakirja`
[`sanakirja`](https://docs.rs/sanakirja) was a candidate as a backend; however, there were problems with maximum value sizes.
The default maximum value size is [1012 bytes](https://docs.rs/sanakirja/1.4.1/sanakirja/trait.Storable.html), which was too small for our requirements. Using [`sanakirja::Slice`](https://docs.rs/sanakirja/1.4.1/sanakirja/union.Slice.html) and [`sanakirja::UnsizedStorable`](https://docs.rs/sanakirja/1.4.1/sanakirja/trait.UnsizedStorable.html) was attempted, but bugs were found when inserting values in-between `512..=4096` bytes.
As such, it is not implemented.
## `MDBX`
[`MDBX`](https://erthink.github.io/libmdbx) was a candidate as a backend; however, MDBX deprecated custom key/value comparison functions, which makes it a bit trickier to implement multimap tables. It is also quite similar to the main backend LMDB (of which it was originally a fork).
As such, it is not implemented (yet).

View file

@ -0,0 +1,15 @@
# `ConcreteEnv`
After a backend is selected, the main database environment struct is "abstracted" by putting it in the non-generic, concrete [`struct ConcreteEnv`](https://doc.cuprate.org/cuprate_database/struct.ConcreteEnv.html).
This is the main object used when handling the database directly.
This struct contains all the data necessary to operate the database.
The actual database backend `ConcreteEnv` will use internally [depends on which backend feature is used](https://github.com/Cuprate/cuprate/blob/0941f68efcd7dfe66124ad0c1934277f47da9090/storage/database/src/backend/mod.rs#L3-L13).
`ConcreteEnv` itself is not too important; what is important is that:
1. It allows callers to not directly reference any particular backend environment
1. It implements [`trait Env`](https://doc.cuprate.org/cuprate_database/trait.Env.html) which opens the door to all the other database traits
The equivalent "database environment" objects in the backends themselves are:
- [`heed::Env`](https://docs.rs/heed/0.20.0/heed/struct.Env.html)
- [`redb::Database`](https://docs.rs/redb/2.1.0/redb/struct.Database.html)

View file

@ -0,0 +1,33 @@
# Abstraction
This next section details how `cuprate_database` abstracts multiple database backends into 1 API.
## Diagram
A simple diagram describing the responsibilities/relationship of `cuprate_database`.
```text
┌───────────────────────────────────────────────────────────────────────┐
│ cuprate_database │
│ │
│ ┌───────────────────────────┐ ┌─────────────────────────────────┐ │
│ │ Database traits │ │ Backends │ │
│ │ ┌─────┐┌──────┐┌────────┐ │ │ ┌─────────────┐ ┌─────────────┐ │ │
│ │ │ Env ││ TxRw ││ ... │ ├─────┤ │ heed (LMDB) │ │ redb │ │ │
│ │ └─────┘└──────┘└────────┘ │ │ └─────────────┘ └─────────────┘ │ │
│ └──────────┬─────────────┬──┘ └──┬──────────────────────────────┘ │
│ │ └─────┬─────┘ │
│ │ ┌─────────┴──────────────┐ │
│ │ │ Database types │ │
│ │ │ ┌─────────────┐┌─────┐ │ │
│ │ │ │ ConcreteEnv ││ ... │ │ │
│ │ │ └─────────────┘└─────┘ │ │
│ │ └─────────┬──────────────┘ │
│ │ │ │
└────────────┼───────────────────┼──────────────────────────────────────┘
│ │
└───────────────────┤
┌───────────────────────┐
│ cuprate_database user │
└───────────────────────┘
```

View file

@ -0,0 +1,49 @@
# Trait
`cuprate_database` provides a set of `trait`s that abstract over the various database backends.
This keeps function signatures and behavior the same while allowing databases to be swapped out more easily.
All common behavior of the backends is encapsulated here and used instead of the backends directly.
Examples:
- [`trait Env`](https://github.com/Cuprate/cuprate/blob/2ac90420c658663564a71b7ecb52d74f3c2c9d0f/database/src/env.rs)
- [`trait {TxRo, TxRw}`](https://github.com/Cuprate/cuprate/blob/2ac90420c658663564a71b7ecb52d74f3c2c9d0f/database/src/transaction.rs)
- [`trait {DatabaseRo, DatabaseRw}`](https://github.com/Cuprate/cuprate/blob/2ac90420c658663564a71b7ecb52d74f3c2c9d0f/database/src/database.rs)
For example, instead of calling `heed` or `redb`'s `get()` function directly, `DatabaseRo::get()` is called.
## Usage
With a `ConcreteEnv` and a particular backend selected,
we can now start using it alongside these traits to start
doing database operations in a generic manner.
An example:
```rust
use cuprate_database::{
ConcreteEnv,
config::ConfigBuilder,
Env, EnvInner,
DatabaseRo, DatabaseRw, TxRo, TxRw,
};
// Initialize the database environment.
let env = ConcreteEnv::open(config)?;
// Open up a transaction + tables for writing.
let env_inner = env.env_inner();
let tx_rw = env_inner.tx_rw()?;
env_inner.create_db::<Table>(&tx_rw)?;
// Write data to the table.
{
let mut table = env_inner.open_db_rw::<Table>(&tx_rw)?;
table.put(&0, &1)?;
}
// Commit the transaction.
TxRw::commit(tx_rw)?;
```
As seen above, there is no direct call to `heed` or `redb`.
Their functionality is abstracted behind `ConcreteEnv` and the `trait`s.

View file

@ -0,0 +1,23 @@
# Database abstraction
[`cuprate_database`](https://doc.cuprate.org/cuprate_database) is Cuprate's database abstraction.
This crate abstracts various database backends with `trait`s.
All backends have the following attributes:
- [Embedded](https://en.wikipedia.org/wiki/Embedded_database)
- [Multiversion concurrency control](https://en.wikipedia.org/wiki/Multiversion_concurrency_control)
- [ACID](https://en.wikipedia.org/wiki/ACID)
- Are `(key, value)` oriented and have the expected API (`get()`, `insert()`, `delete()`)
- Are table oriented (`"table_name" -> (key, value)`)
- Allow concurrent readers
The currently implemented backends are:
- [`heed`](https://github.com/meilisearch/heed) (LMDB)
- [`redb`](https://github.com/cberner/redb)
Said precisely, `cuprate_database` is the embedded database other Cuprate
crates interact with instead of using any particular backend implementation.
This allows the backend to be swapped and/or future backends to be implemented.
This section will go over `cuprate_database` details.

View file

@ -0,0 +1,6 @@
# Endianness
`cuprate_database`'s (de)serialization and storage of bytes are native-endian, as in, byte storage order will depend on the machine it is running on.
As Cuprate's build-targets are all little-endian ([big-endian by default machines barely exist](https://en.wikipedia.org/wiki/Endianness#Hardware)), this doesn't matter much and the byte ordering can be seen as a constant.
Practically, this means `cuprated`'s database files can be transferred across computers, as can `monerod`'s.
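In code terms, native-endian means `to_ne_bytes()` rather than a fixed byte order; on the little-endian targets Cuprate supports, the two coincide:
```rust
fn main() {
    let x: u32 = 0xAABB_CCDD;

    // Byte order depends on the machine running this.
    let native = x.to_ne_bytes();
    // Fixed order, regardless of machine.
    let little = x.to_le_bytes();

    assert_eq!(little, [0xDD, 0xCC, 0xBB, 0xAA]);
    // Holds on the little-endian machines Cuprate targets.
    assert_eq!(native, little);
}
```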

View file

@ -0,0 +1,17 @@
# Hot-swappable backends
> See also: <https://github.com/Cuprate/cuprate/issues/209>.
Using a different backend is really as simple as re-building `cuprate_database` with a different feature flag:
```bash
# Use LMDB.
cargo build --package cuprate-database --features heed
# Use redb.
cargo build --package cuprate-database --features redb
```
This is "good enough" for now; however, ideally this hot-swapping of backends could be done at _runtime_.
As it is now, `cuprate_database` cannot compile both backends and swap based on user input at runtime; it must be compiled with a certain backend, which will produce a binary with only that backend.
This also means things like [CI testing multiple backends is awkward](https://github.com/Cuprate/cuprate/blob/main/.github/workflows/ci.yml#L132-L136), as we must re-compile with different feature flags instead.

View file

@ -0,0 +1,7 @@
# Known issues and tradeoffs
`cuprate_database` takes many tradeoffs, whether due to:
- Prioritizing certain values over others
- Not having a better solution
- Being "good enough"
This section is a list of the larger ones, along with issues that don't have answers yet.

View file

@ -0,0 +1,22 @@
# Multimap
`cuprate_database` does not currently have an abstraction for [multimap tables](https://en.wikipedia.org/wiki/Multimap).
All tables are single maps of keys to values.
This matters because it means some of `cuprate_blockchain`'s tables differ from `monerod`'s tables - the primary key is stored _for all_ entries, whereas `monerod` only needs to store it once:
```rust
// `monerod` only stores `amount: 1` once,
// `cuprated` stores it each time it appears.
struct PreRctOutputId { amount: 1, amount_index: 0 }
struct PreRctOutputId { amount: 1, amount_index: 1 }
```
This means `cuprated`'s database will be slightly larger than `monerod`'s.
The current method `cuprate_blockchain` uses will be "good enough" as the multimap
keys needed for now are fixed, e.g. pre-RCT outputs are no longer being produced.
This may need to change in the future when multimap is all but required, e.g. for FCMP++.
Until then, multimap tables are not implemented as they are tricky to implement across all backends.

View file

@ -0,0 +1,15 @@
# Traits abstracting backends
Although all database backends used are very similar, they have some crucial differences in small implementation details that must be worked around when conforming them to `cuprate_database`'s traits.
Put simply: using `cuprate_database`'s traits is less efficient and more awkward than using the backend directly.
For example:
- [Data types must be wrapped in compatibility layers when they otherwise wouldn't be](https://github.com/Cuprate/cuprate/blob/d0ac94a813e4cd8e0ed8da5e85a53b1d1ace2463/database/src/backend/heed/env.rs#L101-L116)
- [There are types that only apply to a specific backend, but are visible to all](https://github.com/Cuprate/cuprate/blob/d0ac94a813e4cd8e0ed8da5e85a53b1d1ace2463/database/src/error.rs#L86-L89)
- [There are extra layers of abstraction to smoothen the differences between all backends](https://github.com/Cuprate/cuprate/blob/d0ac94a813e4cd8e0ed8da5e85a53b1d1ace2463/database/src/env.rs#L62-L68)
- [Existing functionality of backends must be taken away, as it isn't supported in the others](https://github.com/Cuprate/cuprate/blob/d0ac94a813e4cd8e0ed8da5e85a53b1d1ace2463/database/src/database.rs#L27-L34)
This is a _tradeoff_ that `cuprate_database` takes, as:
- The backend itself is usually not the source of bottlenecks in the greater system, as such, small inefficiencies are OK
- None of the lost functionality is crucial for operation
- The ability to use, test, and swap between multiple database backends is [worth it](https://github.com/Cuprate/cuprate/pull/35#issuecomment-1952804393)

View file

@ -0,0 +1,24 @@
# Copying unaligned bytes
As mentioned in [`(De)serialization`](../serde.md), bytes are _copied_ when they are turned into a type `T` due to unaligned bytes being returned from database backends.
Using a regular reference cast results in an improperly aligned type `T`; [such a type even existing causes undefined behavior](https://doc.rust-lang.org/reference/behavior-considered-undefined.html). In our case, `bytemuck` saves us by panicking before this occurs.
Thus, when using `cuprate_database`'s database traits, an _owned_ `T` is returned.
This is doubly unfortunate for `&[u8]`, as it does not even need deserialization.
For example, `StorableVec` could have been this:
```rust
enum StorableBytes<'a, T: Storable> {
Owned(T),
Ref(&'a T),
}
```
but this would require supporting types that must be copied regardless alongside the occasional `&[u8]` that can be returned without casting. This was hard to do in a generic way, thus all `[u8]`'s are copied and returned as owned `StorableVec`s.
This is a _tradeoff_ `cuprate_database` takes as:
- `bytemuck::pod_read_unaligned` is cheap enough
- The main API, `service`, needs to return owned values anyway
- Having no references removes a lot of lifetime complexity
The alternative is somehow fixing the alignment issues in the backends mentioned previously.
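As a small demonstration of the copy-on-read behavior (a self-contained sketch using `bytemuck` directly):
```rust
fn main() {
    // 9 bytes; the 8 bytes starting at offset 1 are (almost certainly)
    // not aligned to `align_of::<u64>()`.
    let buf: [u8; 9] = [0xFF, 1, 0, 0, 0, 0, 0, 0, 0];
    let unaligned: &[u8] = &buf[1..9];

    // A `&[u8] -> &u64` reference cast here would be undefined behavior
    // (and `bytemuck::from_bytes` would panic on misalignment); instead,
    // the bytes are *copied* into a properly aligned, owned `u64`.
    let value: u64 = bytemuck::pod_read_unaligned(unaligned);
    assert_eq!(value, 1); // on a little-endian machine
}
```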

View file

@ -0,0 +1,8 @@
# Resizing
`cuprate_database` itself does not handle memory map resizes automatically
(for database backends that need resizing, i.e. heed/LMDB).
When a user is directly using `cuprate_database`, it is up to them how to resize. The database will return [`RuntimeError::ResizeNeeded`](https://doc.cuprate.org/cuprate_database/enum.RuntimeError.html#variant.ResizeNeeded) when it needs resizing.
However, `cuprate_database` exposes some [resizing algorithms](https://doc.cuprate.org/cuprate_database/resize/index.html)
that define how the database's memory map grows.

View file

@ -0,0 +1,44 @@
# (De)serialization
All types stored inside the database are either bytes already or are perfectly bitcast-able.
As such, they do not incur heavy (de)serialization costs when storing/fetching them from the database. The main (de)serialization used is [`bytemuck`](https://docs.rs/bytemuck)'s traits and casting functions.
## Size and layout
The size & layout of types is stable across compiler versions, as they are set and determined with [`#[repr(C)]`](https://doc.rust-lang.org/nomicon/other-reprs.html#reprc) and `bytemuck`'s derive macros such as [`bytemuck::Pod`](https://docs.rs/bytemuck/latest/bytemuck/derive.Pod.html).
Note that the data stored in the tables are still type-safe; we still refer to the key and values within our tables by the type.
## How
The main deserialization `trait` for database storage is [`Storable`](https://doc.cuprate.org/cuprate_database/trait.Storable.html).
- Before storage, the type is [simply cast into bytes](https://github.com/Cuprate/cuprate/blob/2ac90420c658663564a71b7ecb52d74f3c2c9d0f/database/src/storable.rs#L125)
- When fetching, the bytes are [simply cast into the type](https://github.com/Cuprate/cuprate/blob/2ac90420c658663564a71b7ecb52d74f3c2c9d0f/database/src/storable.rs#L130)
When a type is cast into bytes, [the reference is cast](https://docs.rs/bytemuck/latest/bytemuck/fn.bytes_of.html), i.e. this is zero-cost serialization.
However, it is worth noting that when bytes are cast into the type, [they are copied](https://docs.rs/bytemuck/latest/bytemuck/fn.pod_read_unaligned.html). This is due to byte alignment guarantee issues with both backends, see:
- <https://github.com/AltSysrq/lmdb-zero/issues/8>
- <https://github.com/cberner/redb/issues/360>
Without this, `bytemuck` will panic with [`TargetAlignmentGreaterAndInputNotAligned`](https://docs.rs/bytemuck/latest/bytemuck/enum.PodCastError.html#variant.TargetAlignmentGreaterAndInputNotAligned) when casting.
Copying the bytes fixes this problem, although it is more costly than necessary. However, in the main use-case for `cuprate_database` (`tower::Service` API) the bytes would need to be owned regardless as the `Request/Response` API uses owned data types (`T`, `Vec<T>`, `HashMap<K, V>`, etc).
Practically speaking, this means lower-level database functions that normally look like this:
```rust
fn get(key: &Key) -> &Value;
```
end up looking like this in `cuprate_database`:
```rust
fn get(key: &Key) -> Value;
```
Since each backend has its own (de)serialization methods, our types are wrapped in compatibility types that map our `Storable` functions into whatever is required for the backend, e.g:
- [`StorableHeed<T>`](https://github.com/Cuprate/cuprate/blob/2ac90420c658663564a71b7ecb52d74f3c2c9d0f/database/src/backend/heed/storable.rs#L11-L45)
- [`StorableRedb<T>`](https://github.com/Cuprate/cuprate/blob/2ac90420c658663564a71b7ecb52d74f3c2c9d0f/database/src/backend/redb/storable.rs#L11-L30)
Compatibility structs also exist for any `Storable` containers:
- [`StorableVec<T>`](https://github.com/Cuprate/cuprate/blob/2ac90420c658663564a71b7ecb52d74f3c2c9d0f/database/src/storable.rs#L135-L191)
- [`StorableBytes`](https://github.com/Cuprate/cuprate/blob/2ac90420c658663564a71b7ecb52d74f3c2c9d0f/database/src/storable.rs#L208-L241)
Again, it's unfortunate that these must be owned, although in the `tower::Service` use-case, they would have to be owned anyway.
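As a self-contained sketch of both directions (using `bytemuck` directly with its `derive` feature, outside of any database):
```rust
use bytemuck::{bytes_of, pod_read_unaligned, Pod, Zeroable};

// `#[repr(C)]` + no padding makes this `Pod`-derivable.
#[derive(Clone, Copy, Debug, PartialEq, Pod, Zeroable)]
#[repr(C)]
struct PreRctOutputId {
    amount: u64,
    amount_index: u64,
}

fn main() {
    let id = PreRctOutputId { amount: 1, amount_index: 7 };

    // Store: zero-cost, the reference itself is viewed as bytes.
    let bytes: &[u8] = bytes_of(&id);
    assert_eq!(bytes.len(), 16);

    // Fetch: the bytes are *copied* back into an owned, aligned value.
    let fetched: PreRctOutputId = pod_read_unaligned(bytes);
    assert_eq!(fetched, id);
}
```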

View file

@ -0,0 +1,17 @@
# Syncing
`cuprate_database`'s database has 5 disk syncing modes.
1. `FastThenSafe`
1. `Safe`
1. `Async`
1. `Threshold`
1. `Fast`
The default mode is `Safe`.
This means that upon each transaction commit, all the data that was written will be fully synced to disk.
This is the slowest, but safest mode of operation.
Note that upon any database `Drop`, the current implementation will sync to disk regardless of any configuration.
For more information on the other modes, read the documentation [here](https://github.com/Cuprate/cuprate/blob/2ac90420c658663564a71b7ecb52d74f3c2c9d0f/database/src/config/sync_mode.rs#L63-L144).

View file

@ -1 +1,34 @@
# ⚪️ Storage
# Storage
This section covers all things related to the on-disk storage of data within Cuprate.
## Overview
The quick overview is that Cuprate has a [database abstraction crate](./database-abstraction.md)
that handles "low-level" database details such as key and value (de)serialization, tables, transactions, etc.
This database abstraction crate is then used by all crates that need on-disk storage, i.e. the
- [Blockchain database](./blockchain/intro.md)
- [Transaction pool database](./txpool/intro.md)
## Service
The interface provided by all crates building on-top of the
database abstraction is a [`tower::Service`](https://docs.rs/tower), i.e.
database requests/responses are sent/received asynchronously.
As the interface details are similar across crates (threadpool, read operations, write operations),
the interface itself is abstracted in the [`cuprate_database_service`](./common/service/intro.md) crate,
which is then used by the crates.
## Diagram
This is roughly how database crates are set up.
```text
┌─────────────────┐
┌──────────────────────────────────┐ │ │
│ Some crate that needs a database │ ┌────────────────┐ │ │
│ │ │ Public │ │ │
│ ┌──────────────────────────────┐ │─►│ tower::Service │◄─►│ Rest of Cuprate │
│ │ Database abstraction │ │ │ API │ │ │
│ └──────────────────────────────┘ │ └────────────────┘ │ │
└──────────────────────────────────┘ │ │
└─────────────────┘
```

clippy.toml Normal file
View file

@ -0,0 +1 @@
upper-case-acronyms-aggressive = true

View file

@ -12,6 +12,7 @@ cuprate-helper = { path = "../helper", default-features = false, features = ["st
cuprate-consensus-rules = { path = "./rules", features = ["rayon"] }
cuprate-types = { path = "../types" }
cfg-if = { workspace = true }
thiserror = { workspace = true }
tower = { workspace = true, features = ["util"] }
tracing = { workspace = true, features = ["std", "attributes"] }
@ -19,7 +20,6 @@ futures = { workspace = true, features = ["std", "async-await"] }
randomx-rs = { workspace = true }
monero-serai = { workspace = true, features = ["std"] }
curve25519-dalek = { workspace = true }
rayon = { workspace = true }
thread_local = { workspace = true }
@ -34,8 +34,12 @@ cuprate-test-utils = { path = "../test-utils" }
cuprate-consensus-rules = {path = "./rules", features = ["proptest"]}
hex-literal = { workspace = true }
curve25519-dalek = { workspace = true }
tokio = { workspace = true, features = ["rt-multi-thread", "macros"]}
tokio-test = { workspace = true }
proptest = { workspace = true }
proptest-derive = { workspace = true }
proptest-derive = { workspace = true }
[lints]
workspace = true

View file

@ -9,19 +9,22 @@ name = "cuprate-fast-sync-create-hashes"
path = "src/create.rs"
[dependencies]
clap = { workspace = true, features = ["derive", "std"] }
cuprate-blockchain = { path = "../../storage/blockchain" }
cuprate-consensus = { path = ".." }
cuprate-blockchain = { path = "../../storage/blockchain" }
cuprate-consensus = { path = ".." }
cuprate-consensus-rules = { path = "../rules" }
cuprate-types = { path = "../../types" }
hex.workspace = true
hex-literal.workspace = true
monero-serai.workspace = true
rayon.workspace = true
sha3 = "0.10.8"
thiserror.workspace = true
tokio = { workspace = true, features = ["full"] }
tower.workspace = true
cuprate-types = { path = "../../types" }
cuprate-helper = { path = "../../helper", features = ["cast"] }
clap = { workspace = true, features = ["derive", "std"] }
hex = { workspace = true }
hex-literal = { workspace = true }
monero-serai = { workspace = true }
sha3 = { version = "0.10.8" }
thiserror = { workspace = true }
tokio = { workspace = true, features = ["full"] }
tower = { workspace = true }
[dev-dependencies]
tokio-test = "0.4.4"
[lints]
workspace = true

View file

@ -1,3 +1,8 @@
#![expect(
unused_crate_dependencies,
reason = "binary shares same Cargo.toml as library"
)]
use std::{fmt::Write, fs::write};
use clap::Parser;
@ -70,15 +75,12 @@ async fn main() {
let mut height = 0_usize;
while height < height_target {
match read_batch(&mut read_handle, height).await {
Ok(block_ids) => {
let hash = hash_of_hashes(block_ids.as_slice());
hashes_of_hashes.push(hash);
}
Err(_) => {
println!("Failed to read next batch from database");
break;
}
if let Ok(block_ids) = read_batch(&mut read_handle, height).await {
let hash = hash_of_hashes(block_ids.as_slice());
hashes_of_hashes.push(hash);
} else {
println!("Failed to read next batch from database");
break;
}
height += BATCH_SIZE;
}
@ -88,5 +90,5 @@ async fn main() {
let generated = generate_hex(&hashes_of_hashes);
write("src/data/hashes_of_hashes", generated).expect("Could not write file");
println!("Generated hashes up to block height {}", height);
println!("Generated hashes up to block height {height}");
}

View file

@ -1,12 +1,12 @@
[
hex!("1adffbaf832784406018009e07d3dc3a39da7edb6632523c119ed8acb32eb934"),
hex!("ae960265e3398d04f3cd4f949ed13c2689424887c71c1441a03d900a9d3a777f"),
hex!("938c72d267bbd3a17cdecbe02443d00012ee62d6e9f3524f5a914192110b1798"),
hex!("de0c82e51549b6514b42a591fd5440dddb5cc0118ec461459a99017bf06a0a0a"),
hex!("9a50f4586ec7e0fb58c6383048d3b334180235fd34bb714af20f1a3ebce4c911"),
hex!("5a3942f9bb318d65997bf57c40e045d62e7edbe35f3dae57499c2c5554896543"),
hex!("9dccee3b094cdd1b98e357c2c81bfcea798ea75efd94e67c6f5e86f428c5ec2c"),
hex!("620397540d44f21c3c57c20e9d47c6aaf0b1bf4302a4d43e75f2e33edd1a4032"),
hex!("ef6c612fb17bd70ac2ac69b2f85a421b138cc3a81daf622b077cb402dbf68377"),
hex!("6815ecb2bd73a3ba5f20558bfe1b714c30d6892b290e0d6f6cbf18237cedf75a"),
hex_literal::hex!("1adffbaf832784406018009e07d3dc3a39da7edb6632523c119ed8acb32eb934"),
hex_literal::hex!("ae960265e3398d04f3cd4f949ed13c2689424887c71c1441a03d900a9d3a777f"),
hex_literal::hex!("938c72d267bbd3a17cdecbe02443d00012ee62d6e9f3524f5a914192110b1798"),
hex_literal::hex!("de0c82e51549b6514b42a591fd5440dddb5cc0118ec461459a99017bf06a0a0a"),
hex_literal::hex!("9a50f4586ec7e0fb58c6383048d3b334180235fd34bb714af20f1a3ebce4c911"),
hex_literal::hex!("5a3942f9bb318d65997bf57c40e045d62e7edbe35f3dae57499c2c5554896543"),
hex_literal::hex!("9dccee3b094cdd1b98e357c2c81bfcea798ea75efd94e67c6f5e86f428c5ec2c"),
hex_literal::hex!("620397540d44f21c3c57c20e9d47c6aaf0b1bf4302a4d43e75f2e33edd1a4032"),
hex_literal::hex!("ef6c612fb17bd70ac2ac69b2f85a421b138cc3a81daf622b077cb402dbf68377"),
hex_literal::hex!("6815ecb2bd73a3ba5f20558bfe1b714c30d6892b290e0d6f6cbf18237cedf75a"),
]

View file

@ -6,8 +6,6 @@ use std::{
task::{Context, Poll},
};
#[allow(unused_imports)]
use hex_literal::hex;
use monero_serai::{
block::Block,
transaction::{Input, Transaction},
@ -19,6 +17,7 @@ use cuprate_consensus::{
transactions::new_tx_verification_data,
};
use cuprate_consensus_rules::{miner_tx::MinerTxError, ConsensusError};
use cuprate_helper::cast::u64_to_usize;
use cuprate_types::{VerifiedBlockInformation, VerifiedTransactionInformation};
use crate::{hash_of_hashes, BlockId, HashOfHashes};
@ -31,9 +30,9 @@ const BATCH_SIZE: usize = 512;
#[cfg(test)]
static HASHES_OF_HASHES: &[HashOfHashes] = &[
hex!("3fdc9032c16d440f6c96be209c36d3d0e1aed61a2531490fe0ca475eb615c40a"),
hex!("0102030405060708010203040506070801020304050607080102030405060708"),
hex!("0102030405060708010203040506070801020304050607080102030405060708"),
hex_literal::hex!("3fdc9032c16d440f6c96be209c36d3d0e1aed61a2531490fe0ca475eb615c40a"),
hex_literal::hex!("0102030405060708010203040506070801020304050607080102030405060708"),
hex_literal::hex!("0102030405060708010203040506070801020304050607080102030405060708"),
];
#[cfg(test)]
@ -44,14 +43,14 @@ fn max_height() -> u64 {
(HASHES_OF_HASHES.len() * BATCH_SIZE) as u64
}
#[derive(Debug, PartialEq)]
#[derive(Debug, PartialEq, Eq)]
pub struct ValidBlockId(BlockId);
fn valid_block_ids(block_ids: &[BlockId]) -> Vec<ValidBlockId> {
block_ids.iter().map(|b| ValidBlockId(*b)).collect()
}
#[allow(clippy::large_enum_variant)]
#[expect(clippy::large_enum_variant)]
pub enum FastSyncRequest {
ValidateHashes {
start_height: u64,
@ -64,8 +63,8 @@ pub enum FastSyncRequest {
},
}
#[allow(clippy::large_enum_variant)]
#[derive(Debug, PartialEq)]
#[expect(clippy::large_enum_variant)]
#[derive(Debug, PartialEq, Eq)]
pub enum FastSyncResponse {
ValidateHashes {
validated_hashes: Vec<ValidBlockId>,
@ -74,7 +73,7 @@ pub enum FastSyncResponse {
ValidateBlock(VerifiedBlockInformation),
}
#[derive(thiserror::Error, Debug, PartialEq)]
#[derive(thiserror::Error, Debug, PartialEq, Eq)]
pub enum FastSyncError {
#[error("Block does not match its expected hash")]
BlockHashMismatch,
@ -127,9 +126,9 @@ where
+ Send
+ 'static,
{
#[allow(dead_code)]
pub(crate) fn new(context_svc: C) -> FastSyncService<C> {
FastSyncService { context_svc }
#[expect(dead_code)]
pub(crate) const fn new(context_svc: C) -> Self {
Self { context_svc }
}
}
@ -161,7 +160,7 @@ where
FastSyncRequest::ValidateHashes {
start_height,
block_ids,
} => validate_hashes(start_height, &block_ids).await,
} => validate_hashes(start_height, &block_ids),
FastSyncRequest::ValidateBlock { block, txs, token } => {
validate_block(context_svc, block, txs, token).await
}
@ -170,11 +169,13 @@ where
}
}
async fn validate_hashes(
fn validate_hashes(
start_height: u64,
block_ids: &[BlockId],
) -> Result<FastSyncResponse, FastSyncError> {
if start_height as usize % BATCH_SIZE != 0 {
let start_height_usize = u64_to_usize(start_height);
if start_height_usize % BATCH_SIZE != 0 {
return Err(FastSyncError::InvalidStartHeight);
}
@ -182,9 +183,9 @@ async fn validate_hashes(
return Err(FastSyncError::OutOfRange);
}
let stop_height = start_height as usize + block_ids.len();
let stop_height = start_height_usize + block_ids.len();
let batch_from = start_height as usize / BATCH_SIZE;
let batch_from = start_height_usize / BATCH_SIZE;
let batch_to = cmp::min(stop_height / BATCH_SIZE, HASHES_OF_HASHES.len());
let n_batches = batch_to - batch_from;
@ -229,7 +230,7 @@ where
let BlockChainContextResponse::Context(checked_context) = context_svc
.ready()
.await?
.call(BlockChainContextRequest::GetContext)
.call(BlockChainContextRequest::Context)
.await?
else {
panic!("Context service returned wrong response!");
@ -285,7 +286,7 @@ where
block_blob,
txs: verified_txs,
block_hash,
pow_hash: [0u8; 32],
pow_hash: [0_u8; 32],
height: *height,
generated_coins,
weight,
@ -299,46 +300,36 @@ where
#[cfg(test)]
mod tests {
use super::*;
use tokio_test::block_on;
#[test]
fn test_validate_hashes_errors() {
let ids = [[1u8; 32], [2u8; 32], [3u8; 32], [4u8; 32], [5u8; 32]];
let ids = [[1_u8; 32], [2_u8; 32], [3_u8; 32], [4_u8; 32], [5_u8; 32]];
assert_eq!(
block_on(validate_hashes(3, &[])),
validate_hashes(3, &[]),
Err(FastSyncError::InvalidStartHeight)
);
assert_eq!(
block_on(validate_hashes(3, &ids)),
validate_hashes(3, &ids),
Err(FastSyncError::InvalidStartHeight)
);
assert_eq!(
block_on(validate_hashes(20, &[])),
Err(FastSyncError::OutOfRange)
);
assert_eq!(
block_on(validate_hashes(20, &ids)),
Err(FastSyncError::OutOfRange)
);
assert_eq!(validate_hashes(20, &[]), Err(FastSyncError::OutOfRange));
assert_eq!(validate_hashes(20, &ids), Err(FastSyncError::OutOfRange));
assert_eq!(validate_hashes(4, &[]), Err(FastSyncError::NothingToDo));
assert_eq!(
block_on(validate_hashes(4, &[])),
Err(FastSyncError::NothingToDo)
);
assert_eq!(
block_on(validate_hashes(4, &ids[..3])),
validate_hashes(4, &ids[..3]),
Err(FastSyncError::NothingToDo)
);
}
#[test]
fn test_validate_hashes_success() {
let ids = [[1u8; 32], [2u8; 32], [3u8; 32], [4u8; 32], [5u8; 32]];
let ids = [[1_u8; 32], [2_u8; 32], [3_u8; 32], [4_u8; 32], [5_u8; 32]];
let validated_hashes = valid_block_ids(&ids[0..4]);
let unknown_hashes = ids[4..].to_vec();
assert_eq!(
block_on(validate_hashes(0, &ids)),
validate_hashes(0, &ids),
Ok(FastSyncResponse::ValidateHashes {
validated_hashes,
unknown_hashes
@ -349,15 +340,10 @@ mod tests {
#[test]
fn test_validate_hashes_mismatch() {
let ids = [
[1u8; 32], [2u8; 32], [3u8; 32], [5u8; 32], [1u8; 32], [2u8; 32], [3u8; 32], [4u8; 32],
[1_u8; 32], [2_u8; 32], [3_u8; 32], [5_u8; 32], [1_u8; 32], [2_u8; 32], [3_u8; 32],
[4_u8; 32],
];
assert_eq!(
block_on(validate_hashes(0, &ids)),
Err(FastSyncError::Mismatch)
);
assert_eq!(
block_on(validate_hashes(4, &ids)),
Err(FastSyncError::Mismatch)
);
assert_eq!(validate_hashes(0, &ids), Err(FastSyncError::Mismatch));
assert_eq!(validate_hashes(4, &ids), Err(FastSyncError::Mismatch));
}
}

View file

@ -1,3 +1,9 @@
// Used in `create.rs`
use clap as _;
use cuprate_blockchain as _;
use hex as _;
use tokio as _;
pub mod fast_sync;
pub mod util;

View file

@ -7,11 +7,12 @@ authors = ["Boog900"]
[features]
default = []
proptest = ["dep:proptest", "dep:proptest-derive", "cuprate-types/proptest"]
proptest = ["cuprate-types/proptest"]
rayon = ["dep:rayon"]
[dependencies]
cuprate-helper = { path = "../../helper", default-features = false, features = ["std"] }
cuprate-constants = { path = "../../constants", default-features = false }
cuprate-helper = { path = "../../helper", default-features = false, features = ["std", "cast"] }
cuprate-types = { path = "../../types", default-features = false }
cuprate-cryptonight = {path = "../../cryptonight"}
@ -24,15 +25,16 @@ hex = { workspace = true, features = ["std"] }
hex-literal = { workspace = true }
crypto-bigint = { workspace = true }
cfg-if = { workspace = true }
tracing = { workspace = true, features = ["std"] }
thiserror = { workspace = true }
rayon = { workspace = true, optional = true }
proptest = {workspace = true, optional = true}
proptest-derive = {workspace = true, optional = true}
[dev-dependencies]
proptest = {workspace = true}
proptest-derive = {workspace = true}
tokio = {version = "1.35.0", features = ["rt-multi-thread", "macros"]}
proptest = { workspace = true }
proptest-derive = { workspace = true }
tokio = { version = "1.35.0", features = ["rt-multi-thread", "macros"] }
[lints]
workspace = true

View file

@ -44,22 +44,22 @@ pub enum BlockError {
MinerTxError(#[from] MinerTxError),
}
/// A trait to represent the RandomX VM.
/// A trait to represent the `RandomX` VM.
pub trait RandomX {
type Error;
fn calculate_hash(&self, buf: &[u8]) -> Result<[u8; 32], Self::Error>;
}
/// Returns if this height is a RandomX seed height.
pub fn is_randomx_seed_height(height: usize) -> bool {
/// Returns if this height is a `RandomX` seed height.
pub const fn is_randomx_seed_height(height: usize) -> bool {
height % RX_SEEDHASH_EPOCH_BLOCKS == 0
}
/// Returns the RandomX seed height for this block.
/// Returns the `RandomX` seed height for this block.
///
/// ref: <https://monero-book.cuprate.org/consensus_rules/blocks.html#randomx-seed>
pub fn randomx_seed_height(height: usize) -> usize {
pub const fn randomx_seed_height(height: usize) -> usize {
if height <= RX_SEEDHASH_EPOCH_BLOCKS + RX_SEEDHASH_EPOCH_LAG {
0
} else {
@ -122,10 +122,10 @@ pub fn check_block_pow(hash: &[u8; 32], difficulty: u128) -> Result<(), BlockErr
/// Returns the penalty free zone
///
/// <https://cuprate.github.io/monero-book/consensus_rules/blocks/weight_limit.html#penalty-free-zone>
- pub fn penalty_free_zone(hf: &HardFork) -> usize {
- if hf == &HardFork::V1 {
+ pub fn penalty_free_zone(hf: HardFork) -> usize {
+ if hf == HardFork::V1 {
PENALTY_FREE_ZONE_1
- } else if hf >= &HardFork::V2 && hf < &HardFork::V5 {
+ } else if hf >= HardFork::V2 && hf < HardFork::V5 {
PENALTY_FREE_ZONE_2
} else {
PENALTY_FREE_ZONE_5
@@ -135,7 +135,7 @@ pub fn penalty_free_zone(hf: &HardFork) {
/// Sanity check on the block blob size.
///
/// ref: <https://monero-book.cuprate.org/consensus_rules/blocks.html#block-weight-and-size>
- fn block_size_sanity_check(
+ const fn block_size_sanity_check(
block_blob_len: usize,
effective_median: usize,
) -> Result<(), BlockError> {
@@ -149,7 +149,7 @@ fn block_size_sanity_check(
/// Sanity check on the block weight.
///
/// ref: <https://monero-book.cuprate.org/consensus_rules/blocks.html#block-weight-and-size>
- pub fn check_block_weight(
+ pub const fn check_block_weight(
block_weight: usize,
median_for_block_reward: usize,
) -> Result<(), BlockError> {
@@ -163,7 +163,7 @@ pub fn check_block_weight(
/// Sanity check on number of txs in the block.
///
/// ref: <https://monero-book.cuprate.org/consensus_rules/blocks.html#amount-of-transactions>
- fn check_amount_txs(number_none_miner_txs: usize) -> Result<(), BlockError> {
+ const fn check_amount_txs(number_none_miner_txs: usize) -> Result<(), BlockError> {
if number_none_miner_txs + 1 > 0x10000000 {
Err(BlockError::TooManyTxs)
} else {
@@ -175,10 +175,10 @@ fn check_amount_txs(number_none_miner_txs: usize) -> Result<(), BlockError> {
///
/// ref: <https://monero-book.cuprate.org/consensus_rules/blocks.html#previous-id>
fn check_prev_id(block: &Block, top_hash: &[u8; 32]) -> Result<(), BlockError> {
- if &block.header.previous != top_hash {
- Err(BlockError::PreviousIDIncorrect)
- } else {
+ if &block.header.previous == top_hash {
Ok(())
+ } else {
+ Err(BlockError::PreviousIDIncorrect)
}
}
@@ -273,7 +273,7 @@ pub fn check_block(
block_weight,
block_chain_ctx.median_weight_for_block_reward,
block_chain_ctx.already_generated_coins,
- &block_chain_ctx.current_hf,
+ block_chain_ctx.current_hf,
)?;
Ok((vote, generated_coins))
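Most of the `fn` → `const fn` promotions in this file work because the bodies are pure integer arithmetic; the `RandomX` seed-height pair is the clearest case. A self-contained sketch of the epoch rounding, assuming Monero's usual constants (2048-block epochs, 64-block lag) and the standard else branch, which the truncated hunk above does not show:

// A sketch, not cuprate's source: the constants and the else branch follow
// Monero's rx_seedheight().
const RX_SEEDHASH_EPOCH_BLOCKS: usize = 2048;
const RX_SEEDHASH_EPOCH_LAG: usize = 64;

const fn randomx_seed_height(height: usize) -> usize {
    if height <= RX_SEEDHASH_EPOCH_BLOCKS + RX_SEEDHASH_EPOCH_LAG {
        0
    } else {
        // Round (height - lag - 1) down to a multiple of the 2048-block epoch.
        (height - RX_SEEDHASH_EPOCH_LAG - 1) & !(RX_SEEDHASH_EPOCH_BLOCKS - 1)
    }
}

fn main() {
    assert_eq!(randomx_seed_height(2112), 0); // 2112 = 2048 + 64, still seedless
    assert_eq!(randomx_seed_height(2113), 2048); // first height seeded by block 2048
    assert_eq!(randomx_seed_height(10_000), 8192); // 9935 rounded down to 4 * 2048
}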

View file

@@ -1,36 +1,27 @@
- use std::sync::OnceLock;
- /// Decomposed amount table.
- ///
- static DECOMPOSED_AMOUNTS: OnceLock<[u64; 172]> = OnceLock::new();
#[rustfmt::skip]
- pub fn decomposed_amounts() -> &'static [u64; 172] {
- DECOMPOSED_AMOUNTS.get_or_init(|| {
- [ /* the identical 172-entry table, shown once in the added static below */ ]
- })
- }
+ /// Decomposed amount table.
+ pub(crate) static DECOMPOSED_AMOUNTS: [u64; 172] = [
+ 1, 2, 3, 4, 5, 6, 7, 8, 9,
+ 10, 20, 30, 40, 50, 60, 70, 80, 90,
+ 100, 200, 300, 400, 500, 600, 700, 800, 900,
+ 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 9000,
+ 10000, 20000, 30000, 40000, 50000, 60000, 70000, 80000, 90000,
+ 100000, 200000, 300000, 400000, 500000, 600000, 700000, 800000, 900000,
+ 1000000, 2000000, 3000000, 4000000, 5000000, 6000000, 7000000, 8000000, 9000000,
+ 10000000, 20000000, 30000000, 40000000, 50000000, 60000000, 70000000, 80000000, 90000000,
+ 100000000, 200000000, 300000000, 400000000, 500000000, 600000000, 700000000, 800000000, 900000000,
+ 1000000000, 2000000000, 3000000000, 4000000000, 5000000000, 6000000000, 7000000000, 8000000000, 9000000000,
+ 10000000000, 20000000000, 30000000000, 40000000000, 50000000000, 60000000000, 70000000000, 80000000000, 90000000000,
+ 100000000000, 200000000000, 300000000000, 400000000000, 500000000000, 600000000000, 700000000000, 800000000000, 900000000000,
+ 1000000000000, 2000000000000, 3000000000000, 4000000000000, 5000000000000, 6000000000000, 7000000000000, 8000000000000, 9000000000000,
+ 10000000000000, 20000000000000, 30000000000000, 40000000000000, 50000000000000, 60000000000000, 70000000000000, 80000000000000, 90000000000000,
+ 100000000000000, 200000000000000, 300000000000000, 400000000000000, 500000000000000, 600000000000000, 700000000000000, 800000000000000, 900000000000000,
+ 1000000000000000, 2000000000000000, 3000000000000000, 4000000000000000, 5000000000000000, 6000000000000000, 7000000000000000, 8000000000000000, 9000000000000000,
+ 10000000000000000, 20000000000000000, 30000000000000000, 40000000000000000, 50000000000000000, 60000000000000000, 70000000000000000, 80000000000000000, 90000000000000000,
+ 100000000000000000, 200000000000000000, 300000000000000000, 400000000000000000, 500000000000000000, 600000000000000000, 700000000000000000, 800000000000000000, 900000000000000000,
+ 1000000000000000000, 2000000000000000000, 3000000000000000000, 4000000000000000000, 5000000000000000000, 6000000000000000000, 7000000000000000000, 8000000000000000000, 9000000000000000000,
+ 10000000000000000000
+ ];
/// Checks that an output amount is decomposed.
///
@@ -40,7 +31,7 @@ pub fn decomposed_amounts() -> &'static [u64; 172] {
/// ref: <https://monero-book.cuprate.org/consensus_rules/blocks/miner_tx.html#output-amounts>
#[inline]
pub fn is_decomposed_amount(amount: &u64) -> bool {
- decomposed_amounts().binary_search(amount).is_ok()
+ DECOMPOSED_AMOUNTS.binary_search(amount).is_ok()
}
#[cfg(test)]
@@ -49,8 +40,8 @@ mod tests {
#[test]
fn decomposed_amounts_return_decomposed() {
- for amount in decomposed_amounts() {
- assert!(is_decomposed_amount(amount))
+ for amount in &DECOMPOSED_AMOUNTS {
+ assert!(is_decomposed_amount(amount));
}
}
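Replacing the `OnceLock` lazy initializer with a plain `static` works because the table is a compile-time constant; `is_decomposed_amount` additionally relies on the array being strictly sorted, since `binary_search` assumes sorted input. A sketch of an extra guard test one could add next to the existing one:

#[test]
fn decomposed_amounts_sorted() {
    // `binary_search` is only correct on sorted input; strict ordering also
    // rules out duplicate entries in the table.
    assert!(DECOMPOSED_AMOUNTS.windows(2).all(|w| w[0] < w[1]));
}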

View file

@@ -8,7 +8,7 @@ use monero_serai::{
use cuprate_helper::network::Network;
- const fn genesis_nonce(network: &Network) -> u32 {
+ const fn genesis_nonce(network: Network) -> u32 {
match network {
Network::Mainnet => 10000,
Network::Testnet => 10001,
@@ -16,7 +16,7 @@ const fn genesis_nonce(network: &Network) -> u32 {
}
}
- fn genesis_miner_tx(network: &Network) -> Transaction {
+ fn genesis_miner_tx(network: Network) -> Transaction {
Transaction::read(&mut hex::decode(match network {
Network::Mainnet | Network::Testnet => "013c01ff0001ffffffffffff03029b2e4c0281c0b02e7c53291a94d1d0cbff8883f8024f5142ee494ffbbd08807121017767aafcde9be00dcfd098715ebcf7f410daebc582fda69d24a28e9d0bc890d1",
Network::Stagenet => "013c01ff0001ffffffffffff0302df5d56da0c7d643ddd1ce61901c7bdc5fb1738bfe39fbe69c28a3a7032729c0f2101168d0c4ca86fb55a4cf6a36d31431be1c53a3bd7411bb24e8832410289fa6f3b"
@@ -26,7 +26,7 @@ fn genesis_miner_tx(network: &Network) -> Transaction {
/// Generates the Monero genesis block.
///
/// ref: <https://monero-book.cuprate.org/consensus_rules/genesis_block.html>
- pub fn generate_genesis_block(network: &Network) -> Block {
+ pub fn generate_genesis_block(network: Network) -> Block {
Block {
header: BlockHeader {
hardfork_version: 1,
@@ -47,19 +47,19 @@ mod tests {
#[test]
fn generate_genesis_blocks() {
assert_eq!(
- &generate_genesis_block(&Network::Mainnet).hash(),
+ &generate_genesis_block(Network::Mainnet).hash(),
hex::decode("418015bb9ae982a1975da7d79277c2705727a56894ba0fb246adaabb1f4632e3")
.unwrap()
.as_slice()
);
assert_eq!(
- &generate_genesis_block(&Network::Testnet).hash(),
+ &generate_genesis_block(Network::Testnet).hash(),
hex::decode("48ca7cd3c8de5b6a4d53d2861fbdaedca141553559f9be9520068053cda8430b")
.unwrap()
.as_slice()
);
assert_eq!(
- &generate_genesis_block(&Network::Stagenet).hash(),
+ &generate_genesis_block(Network::Stagenet).hash(),
hex::decode("76ee3cc98646292206cd3e86f74d88b4dcc1d937088645e9b0cbca84b7ce74eb")
.unwrap()
.as_slice()
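The signature changes in this file all apply one rule, the one clippy's `trivially_copy_pass_by_ref` lint enforces: a small `Copy` enum is cheaper and cleaner by value than behind a reference. A minimal sketch; the `Network` definition and the `Stagenet` nonce are assumptions rather than lines from `cuprate-helper`:

#[derive(Clone, Copy)]
enum Network {
    Mainnet,
    Testnet,
    Stagenet,
}

// By value: a byte-sized Copy enum needs no pointer indirection and flows
// through const fn without reference comparisons like `hf == &HardFork::V1`.
const fn genesis_nonce(network: Network) -> u32 {
    match network {
        Network::Mainnet => 10000,
        Network::Testnet => 10001,
        Network::Stagenet => 10002, // assumed; the hunk above cuts off before this arm
    }
}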

View file

@@ -25,10 +25,10 @@ pub fn check_block_version_vote(
) -> Result<(), HardForkError> {
// self = current hf
if hf != version {
- Err(HardForkError::VersionIncorrect)?;
+ return Err(HardForkError::VersionIncorrect);
}
if hf > vote {
- Err(HardForkError::VoteTooLow)?;
+ return Err(HardForkError::VoteTooLow);
}
Ok(())
@@ -41,8 +41,8 @@ pub struct HFInfo {
threshold: usize,
}
impl HFInfo {
- pub const fn new(height: usize, threshold: usize) -> HFInfo {
- HFInfo { height, threshold }
+ pub const fn new(height: usize, threshold: usize) -> Self {
+ Self { height, threshold }
}
}
@@ -51,7 +51,7 @@ impl HFInfo {
pub struct HFsInfo([HFInfo; NUMB_OF_HARD_FORKS]);
impl HFsInfo {
- pub fn info_for_hf(&self, hf: &HardFork) -> HFInfo {
+ pub const fn info_for_hf(&self, hf: &HardFork) -> HFInfo {
self.0[*hf as usize - 1]
}
@@ -62,7 +62,7 @@ impl HFsInfo {
/// Returns the main-net hard-fork information.
///
/// ref: <https://monero-book.cuprate.org/consensus_rules/hardforks.html#Mainnet-Hard-Forks>
- pub const fn main_net() -> HFsInfo {
+ pub const fn main_net() -> Self {
Self([
HFInfo::new(0, 0),
HFInfo::new(1009827, 0),
@@ -86,7 +86,7 @@ impl HFsInfo {
/// Returns the test-net hard-fork information.
///
/// ref: <https://monero-book.cuprate.org/consensus_rules/hardforks.html#Testnet-Hard-Forks>
- pub const fn test_net() -> HFsInfo {
+ pub const fn test_net() -> Self {
Self([
HFInfo::new(0, 0),
HFInfo::new(624634, 0),
@@ -110,7 +110,7 @@ impl HFsInfo {
/// Returns the stage-net hard-fork information.
///
/// ref: <https://monero-book.cuprate.org/consensus_rules/hardforks.html#Stagenet-Hard-Forks>
- pub const fn stage_net() -> HFsInfo {
+ pub const fn stage_net() -> Self {
Self([
HFInfo::new(0, 0),
HFInfo::new(32000, 0),
@@ -165,8 +165,8 @@ impl Display for HFVotes {
}
impl HFVotes {
- pub fn new(window_size: usize) -> HFVotes {
- HFVotes {
+ pub fn new(window_size: usize) -> Self {
+ Self {
votes: [0; NUMB_OF_HARD_FORKS],
vote_list: VecDeque::with_capacity(window_size),
window_size,
@@ -251,6 +251,6 @@ impl HFVotes {
/// Returns the votes needed for a hard-fork.
///
/// ref: <https://monero-book.cuprate.org/consensus_rules/hardforks.html#accepting-a-fork>
- pub fn votes_needed(threshold: usize, window: usize) -> usize {
+ pub const fn votes_needed(threshold: usize, window: usize) -> usize {
(threshold * window).div_ceil(100)
}
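`votes_needed` also becomes `const fn`; its `div_ceil` matters because a fractional vote requirement must round up, never down. A short worked example (the 10080-block window is Monero's voting window, assumed here):

const fn votes_needed(threshold: usize, window: usize) -> usize {
    (threshold * window).div_ceil(100)
}

fn main() {
    // 90% of a 10080-block window: 907200 / 100 divides exactly.
    assert_eq!(votes_needed(90, 10080), 9072);
    // 1% of 150 blocks is 1.5 votes; div_ceil requires 2, where plain / would allow 1.
    assert_eq!(votes_needed(1, 150), 2);
}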

Some files were not shown because too many files have changed in this diff.