Compare commits

...

17 commits

Author SHA1 Message Date
hinto.janai
c0edf23ec0
merge fix
2024-09-20 20:28:15 -04:00
hinto.janai
bf62755e70
Merge branch 'main' into rpc-handler-json 2024-09-20 20:27:02 -04:00
hinto-janai
57af45e01d
epee-encoding: enable workspace lints (#294)
* epee-encoding: enable workspace lints

* fmt

* fixes

* fixes

* fmt
2024-09-20 15:13:55 +01:00
hinto-janai
5588671501
levin: enable workspace lints (#292)
* levin: enable workspace lints

* use `drop()`

* dep fixes
2024-09-20 15:11:27 +01:00
hinto-janai
19150df355
p2p/dandelion-tower: enable workspace lints (#287)
* dandelion-tower: add/fix workspace lints

* fmt

* fixes

* todos

* fixes

* fixes

* expect reason
2024-09-20 14:36:34 +01:00
Asurar
e7c6bba63d
Database: Split BlockBlobs table + Miscellaneous fixes (#290)
* Split `BlockBlobs` database table + misc fixes

- Split the `BlockBlobs` database table into two new tables: `BlockHeaderBlobs` and `BlockTxsHashes` (see the sketch after this entry).
- `add_block`, `pop_block`, and `get_block_extended_header` have been updated accordingly.
- `VerifiedBlockInformation` now has a `mining_tx_index: u64` field.
- Made `cuprate-helper`'s `thread` feature a dependency of the `service` feature.
- Reworked the service test's output mapping; it is now a full iterator.

* fix fmt

* Update storage/blockchain/src/types.rs

Co-authored-by: Boog900 <boog900@tutanota.com>

* Update storage/blockchain/src/ops/block.rs

Co-authored-by: Boog900 <boog900@tutanota.com>

* fix warning

---------

Co-authored-by: Boog900 <boog900@tutanota.com>
2024-09-19 20:05:41 +01:00
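To make the table split in #290 concrete, here is a minimal sketch with plain `HashMap`s standing in for the real `cuprate_database` tables; only the table names and the add/pop pairing come from the PR description, everything else is illustrative:

```rust
use std::collections::HashMap;

/// In-memory stand-ins for the two new tables; the real crate stores
/// these in `cuprate_database` tables, not `HashMap`s.
#[derive(Default)]
struct Tables {
    /// `BlockHeaderBlobs`: block height -> serialized block header bytes.
    block_header_blobs: HashMap<u64, Vec<u8>>,
    /// `BlockTxsHashes`: block height -> hashes of the block's transactions.
    block_txs_hashes: HashMap<u64, Vec<[u8; 32]>>,
}

impl Tables {
    /// Sketch of `add_block`: the header blob and the tx-hash list are now
    /// written to two separate tables instead of one combined `BlockBlobs` entry.
    fn add_block(&mut self, height: u64, header: Vec<u8>, tx_hashes: Vec<[u8; 32]>) {
        self.block_header_blobs.insert(height, header);
        self.block_txs_hashes.insert(height, tx_hashes);
    }

    /// Sketch of `pop_block`: both tables must now be popped together.
    fn pop_block(&mut self, height: u64) -> Option<(Vec<u8>, Vec<[u8; 32]>)> {
        let header = self.block_header_blobs.remove(&height)?;
        let hashes = self.block_txs_hashes.remove(&height)?;
        Some((header, hashes))
    }
}
```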
4169c45c58
Blockchain: add alt-block handling (#260)
* add new tables & types

* add function to fully add an alt block

* resolve current todo!s

* add new requests

* WIP: starting re-orgs

* add last service request

* commit Cargo.lock

* add test

* more docs + cleanup + alt blocks request

* clippy + fmt

* document types

* move tx_fee to helper

* more doc updates

* fmt

* fix imports

* fix fee

* Apply suggestions from code review

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* remove default features from `cuprate-helper`

* review fixes

* fix find_block

* add a test and fix some issues in chain history

* fix clippy

* fmt

* Apply suggestions from code review

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>

* add dev dep

* cargo update

* move `flush_alt_blocks`

* review fixes

* more review fixes

* fix clippy

* remove INVARIANT comments

---------

Co-authored-by: hinto-janai <hinto.janai@protonmail.com>
2024-09-19 16:55:28 +01:00
hinto-janai
e3a918bca5
wire: enable workspace lints (#291)
* wire: enable workspace lints

* revert match arm formatting
2024-09-18 23:19:32 +01:00
hinto-janai
a1267619ef
p2p/address-book: enable workspace lints (#286)
* address-book: enable workspace lints

* fix

* fixes
2024-09-18 23:18:31 +01:00
hinto-janai
2afc0e8373
test-utils: enable workspace lints (#283)
* test-utils: enable workspace lints + fix

* `allow` -> `expect`

* fixes
2024-09-18 23:14:31 +01:00
hinto-janai
b9842fcb18
fixed-bytes: enable workspace lints (#293) 2024-09-18 23:12:35 +01:00
hinto-janai
8b4b403c5c
pruning: enable workspace lints (#284)
pruning: enable/fix workspace lints
2024-09-18 22:44:23 +01:00
hinto-janai
6502729d8c
lints: replace allow with expect (#285)
* cargo.toml: add `allow_attributes` lint

* fix lints

* fixes

* fmt

* fix docs

* fix docs

* fix expect msg
2024-09-18 21:31:08 +01:00
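For readers unfamiliar with the change in #285: `#[expect]` (stabilized in Rust 1.81) suppresses a lint just like `#[allow]`, but the compiler emits `unfulfilled_lint_expectations` if the lint never actually fires, so stale suppressions surface by themselves. A standalone illustration, not code from this PR:

```rust
// With `allow`, the suppression stays silent even if the cast below is
// later removed and the attribute becomes stale.
#[allow(clippy::cast_possible_truncation)]
fn low_bits_allow(value: u128) -> u64 {
    value as u64
}

// With `expect`, Clippy warns if `cast_possible_truncation` stops firing
// here, and `reason` documents why the suppression exists.
#[expect(
    clippy::cast_possible_truncation,
    reason = "we only want the low 64 bits"
)]
fn low_bits_expect(value: u128) -> u64 {
    value as u64
}

fn main() {
    assert_eq!(low_bits_allow(u128::MAX), u64::MAX);
    assert_eq!(low_bits_expect((1u128 << 64) | 7), 7);
}
```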
Asurar
2291a96795
P2P: Add latest clearnet mainnet seed nodes. (#281)
Add monerod's latest clearnet mainnet seed nodes
2024-09-14 14:01:43 +01:00
90027143f0
consensus: misc fixes (#276)
* fix decoy checks + fee calculation

* fmt
2024-09-10 01:18:26 +01:00
49d1344aa1
Storage: use saturating_add for cumulative_generated_coins (#275)
* use `saturating_add` for `cumulative_generated_coins`

* cargo fmt
2024-09-10 01:15:04 +01:00
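For context on #275: `saturating_add` clamps at the integer's maximum instead of panicking (debug) or wrapping (release), so the running `cumulative_generated_coins` total can never abort the node on overflow. A toy demonstration, not code from this PR:

```rust
fn main() {
    let cumulative_coins: u64 = u64::MAX - 10;
    let block_reward: u64 = 100;

    // `checked_add` signals the overflow explicitly...
    assert_eq!(cumulative_coins.checked_add(block_reward), None);

    // ...while `saturating_add` clamps to `u64::MAX`, so the running
    // total can never panic (debug) or silently wrap (release).
    assert_eq!(cumulative_coins.saturating_add(block_reward), u64::MAX);
}
```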
Asurar
967537fae1
P2P: Implement incoming ping request handling over maximum inbound limit (#277)
Implement incoming ping request handling over maximum inbound limit

- If the maximum inbound connection semaphore has reached its limit, the `inbound_server` fn
opens a tokio task to check whether the node wanted to ping us. If so, we respond; otherwise
the connection is dropped (see the sketch after this entry).
- Added some documentation to the `inbound_server` fn.
2024-09-09 23:12:06 +01:00
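A rough sketch of the flow described in #277, using tokio primitives; the function body and the ping helper below are illustrative, not the crate's actual code:

```rust
use std::sync::Arc;
use tokio::{net::TcpListener, sync::Semaphore};

/// Illustrative only: an accept loop that still answers pings once the
/// inbound-connection semaphore is exhausted.
async fn inbound_server(listener: TcpListener, sem: Arc<Semaphore>) -> std::io::Result<()> {
    loop {
        let (stream, _addr) = listener.accept().await?;
        match sem.clone().try_acquire_owned() {
            // Below the inbound limit: handle the peer normally.
            Ok(permit) => {
                tokio::spawn(async move {
                    let _permit = permit; // held for the connection's lifetime
                    // ... full handshake and peer handling would go here ...
                    drop(stream);
                });
            }
            // At the limit: spawn a cheap task that only checks for a ping
            // request, replies if there is one, then drops the connection.
            Err(_) => {
                tokio::spawn(async move {
                    // Hypothetical helper: read one message; if it is a
                    // ping, write the pong response before closing.
                    // respond_to_ping_then_close(stream).await;
                    drop(stream);
                });
            }
        }
    }
}
```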
117 changed files with 2652 additions and 1330 deletions

Cargo.lock (generated, 843 changes)

File diff suppressed because it is too large.

@ -211,7 +211,6 @@ unseparated_literal_suffix = "deny"
unnecessary_safety_doc = "deny"
unnecessary_safety_comment = "deny"
unnecessary_self_imports = "deny"
tests_outside_test_module = "deny"
string_to_string = "deny"
rest_pat_in_fully_bound_structs = "deny"
redundant_type_annotations = "deny"
@ -265,6 +264,7 @@ empty_enum_variants_with_brackets = "deny"
empty_drop = "deny"
clone_on_ref_ptr = "deny"
upper_case_acronyms = "deny"
allow_attributes = "deny"
# Hot
# inline_always = "deny"


@ -393,6 +393,11 @@ async fn verify_transactions_decoy_info<D>(
where
D: Database + Clone + Sync + Send + 'static,
{
// Decoy info is not validated for V1 txs.
if hf == HardFork::V1 || txs.is_empty() {
return Ok(());
}
batch_get_decoy_info(&txs, hf, database)
.await?
.try_for_each(|decoy_info| decoy_info.and_then(|di| Ok(check_decoy_info(&di, &hf)?)))?;


@ -78,7 +78,8 @@ pub fn tx_fee(tx: &Transaction) -> Result<u64, TransactionError> {
}
for output in &prefix.outputs {
fee.checked_sub(output.amount.unwrap_or(0))
fee = fee
.checked_sub(output.amount.unwrap_or(0))
.ok_or(TransactionError::OutputsTooHigh)?;
}
}


@ -9,8 +9,8 @@ repository = "https://github.com/Cuprate/cuprate/tree/main/consensus"
[features]
# All features on by default.
default = ["std", "atomic", "asynch", "cast", "fs", "num", "map", "time", "thread", "constants"]
# All features off by default.
default = []
std = []
atomic = ["dep:crossbeam"]
asynch = ["dep:futures", "dep:rayon"]
@ -21,6 +21,7 @@ num = []
map = ["cast", "dep:monero-serai"]
time = ["dep:chrono", "std"]
thread = ["std", "dep:target_os_lib"]
tx = ["dep:monero-serai"]
[dependencies]
crossbeam = { workspace = true, optional = true }
@ -39,7 +40,8 @@ target_os_lib = { package = "windows", version = ">=0.51", features = ["Win32_Sy
target_os_lib = { package = "libc", version = "0.2.151", optional = true }
[dev-dependencies]
tokio = { workspace = true, features = ["full"] }
tokio = { workspace = true, features = ["full"] }
curve25519-dalek = { workspace = true }
[lints]
workspace = true


@ -5,9 +5,6 @@
//---------------------------------------------------------------------------------------------------- Use
use crossbeam::atomic::AtomicCell;
#[allow(unused_imports)] // docs
use std::sync::atomic::{Ordering, Ordering::Acquire, Ordering::Release};
//---------------------------------------------------------------------------------------------------- Atomic Float
/// Compile-time assertion that our floats are
/// lock-free for the target we're building for.
@ -31,9 +28,13 @@ const _: () = {
/// This is an alias for
/// [`crossbeam::atomic::AtomicCell<f32>`](https://docs.rs/crossbeam/latest/crossbeam/atomic/struct.AtomicCell.html).
///
/// Note that there are no [`Ordering`] parameters,
/// atomic loads use [`Acquire`],
/// and atomic stores use [`Release`].
/// Note that there are no [Ordering] parameters,
/// atomic loads use [Acquire],
/// and atomic stores use [Release].
///
/// [Ordering]: std::sync::atomic::Ordering
/// [Acquire]: std::sync::atomic::Ordering::Acquire
/// [Release]: std::sync::atomic::Ordering::Release
pub type AtomicF32 = AtomicCell<f32>;
/// An atomic [`f64`].
@ -41,9 +42,13 @@ pub type AtomicF32 = AtomicCell<f32>;
/// This is an alias for
/// [`crossbeam::atomic::AtomicCell<f64>`](https://docs.rs/crossbeam/latest/crossbeam/atomic/struct.AtomicCell.html).
///
/// Note that there are no [`Ordering`] parameters,
/// atomic loads use [`Acquire`],
/// and atomic stores use [`Release`].
/// Note that there are no [Ordering] parameters,
/// atomic loads use [Acquire],
/// and atomic stores use [Release].
///
/// [Ordering]: std::sync::atomic::Ordering
/// [Acquire]: std::sync::atomic::Ordering::Acquire
/// [Release]: std::sync::atomic::Ordering::Release
pub type AtomicF64 = AtomicCell<f64>;
//---------------------------------------------------------------------------------------------------- TESTS


@ -31,6 +31,8 @@ pub mod thread;
#[cfg(feature = "time")]
pub mod time;
#[cfg(feature = "tx")]
pub mod tx;
//---------------------------------------------------------------------------------------------------- Private Usage
//----------------------------------------------------------------------------------------------------


@ -29,7 +29,7 @@ use crate::cast::{u64_to_usize, usize_to_u64};
/// ```
#[inline]
pub const fn split_u128_into_low_high_bits(value: u128) -> (u64, u64) {
#[allow(clippy::cast_possible_truncation)]
#[expect(clippy::cast_possible_truncation)]
(value as u64, (value >> 64) as u64)
}


@ -91,7 +91,7 @@ where
///
/// # Invariant
/// If not sorted the output will be invalid.
#[allow(clippy::debug_assert_with_mut_call)]
#[expect(clippy::debug_assert_with_mut_call)]
pub fn median<T>(array: impl AsRef<[T]>) -> T
where
T: Add<Output = T>


@ -6,7 +6,6 @@
use std::{cmp::max, num::NonZeroUsize};
//---------------------------------------------------------------------------------------------------- Thread Count & Percent
#[allow(non_snake_case)]
/// Get the total amount of system threads.
///
/// ```rust
@ -28,10 +27,15 @@ macro_rules! impl_thread_percent {
$(
$(#[$doc])*
pub fn $fn_name() -> NonZeroUsize {
// unwrap here is okay because:
// - THREADS().get() is always non-zero
// - max() guards against 0
#[allow(clippy::cast_possible_truncation, clippy::cast_sign_loss, clippy::cast_precision_loss)]
// unwrap here is okay because:
// - THREADS().get() is always non-zero
// - max() guards against 0
#[expect(
clippy::cast_possible_truncation,
clippy::cast_sign_loss,
clippy::cast_precision_loss,
reason = "we need to round integers"
)]
NonZeroUsize::new(max(1, (threads().get() as f64 * $percent).floor() as usize)).unwrap()
}
)*


@ -129,7 +129,7 @@ pub const fn secs_to_clock(seconds: u32) -> (u8, u8, u8) {
debug_assert!(m < 60);
debug_assert!(s < 60);
#[allow(clippy::cast_possible_truncation)] // checked above
#[expect(clippy::cast_possible_truncation, reason = "checked above")]
(h as u8, m, s)
}
@ -154,7 +154,7 @@ pub fn time() -> u32 {
///
/// This is guaranteed to return a value between `0..=86399`
pub fn time_utc() -> u32 {
#[allow(clippy::cast_sign_loss)] // checked in function calls
#[expect(clippy::cast_sign_loss, reason = "checked in function calls")]
unix_clock(chrono::offset::Local::now().timestamp() as u64)
}

helper/src/tx.rs (new file, 70 lines)

@ -0,0 +1,70 @@
//! Utils for working with [`Transaction`]
use monero_serai::transaction::{Input, Transaction};
/// Calculates the fee of the [`Transaction`].
///
/// # Panics
/// This will panic if the inputs overflow or the transaction outputs too much, so should only
/// be used on known to be valid txs.
pub fn tx_fee(tx: &Transaction) -> u64 {
let mut fee = 0_u64;
match &tx {
Transaction::V1 { prefix, .. } => {
for input in &prefix.inputs {
match input {
Input::Gen(_) => return 0,
Input::ToKey { amount, .. } => {
fee = fee.checked_add(amount.unwrap_or(0)).unwrap();
}
}
}
for output in &prefix.outputs {
fee = fee.checked_sub(output.amount.unwrap_or(0)).unwrap();
}
}
Transaction::V2 { proofs, .. } => {
fee = proofs.as_ref().unwrap().base.fee;
}
};
fee
}
#[cfg(test)]
mod test {
use curve25519_dalek::{edwards::CompressedEdwardsY, EdwardsPoint};
use monero_serai::transaction::{NotPruned, Output, Timelock, TransactionPrefix};
use super::*;
#[test]
#[should_panic(expected = "called `Option::unwrap()` on a `None` value")]
fn tx_fee_panic() {
let input = Input::ToKey {
amount: Some(u64::MAX),
key_offsets: vec![],
key_image: EdwardsPoint::default(),
};
let output = Output {
amount: Some(u64::MAX),
key: CompressedEdwardsY::default(),
view_tag: None,
};
let tx = Transaction::<NotPruned>::V1 {
prefix: TransactionPrefix {
additional_timelock: Timelock::None,
inputs: vec![input; 2],
outputs: vec![output],
extra: vec![],
},
signatures: vec![],
};
tx_fee(&tx);
}
}


@ -25,3 +25,6 @@ thiserror = { workspace = true, optional = true}
[dev-dependencies]
hex = { workspace = true, features = ["default"] }
[lints]
workspace = true


@ -9,7 +9,7 @@ pub struct ContainerAsBlob<T: Containerable + EpeeValue>(Vec<T>);
impl<T: Containerable + EpeeValue> From<Vec<T>> for ContainerAsBlob<T> {
fn from(value: Vec<T>) -> Self {
ContainerAsBlob(value)
Self(value)
}
}
@ -36,9 +36,7 @@ impl<T: Containerable + EpeeValue> EpeeValue for ContainerAsBlob<T> {
));
}
Ok(ContainerAsBlob(
bytes.chunks(T::SIZE).map(T::from_bytes).collect(),
))
Ok(Self(bytes.chunks(T::SIZE).map(T::from_bytes).collect()))
}
fn should_write(&self) -> bool {
@ -46,10 +44,10 @@ impl<T: Containerable + EpeeValue> EpeeValue for ContainerAsBlob<T> {
}
fn epee_default_value() -> Option<Self> {
Some(ContainerAsBlob(vec![]))
Some(Self(vec![]))
}
fn write<B: BufMut>(self, w: &mut B) -> crate::Result<()> {
fn write<B: BufMut>(self, w: &mut B) -> Result<()> {
let mut buf = BytesMut::with_capacity(self.0.len() * T::SIZE);
self.0.iter().for_each(|tt| tt.push_bytes(&mut buf));
buf.write(w)


@ -7,6 +7,7 @@ use core::{
pub type Result<T> = core::result::Result<T, Error>;
#[cfg_attr(feature = "std", derive(thiserror::Error))]
#[expect(clippy::error_impl_error, reason = "FIXME: rename this type")]
pub enum Error {
#[cfg_attr(feature = "std", error("IO error: {0}"))]
IO(&'static str),
@ -17,19 +18,18 @@ pub enum Error {
}
impl Error {
fn field_name(&self) -> &'static str {
const fn field_name(&self) -> &'static str {
match self {
Error::IO(_) => "io",
Error::Format(_) => "format",
Error::Value(_) => "value",
Self::IO(_) => "io",
Self::Format(_) => "format",
Self::Value(_) => "value",
}
}
fn field_data(&self) -> &str {
match self {
Error::IO(data) => data,
Error::Format(data) => data,
Error::Value(data) => data,
Self::IO(data) | Self::Format(data) => data,
Self::Value(data) => data,
}
}
}
@ -44,12 +44,12 @@ impl Debug for Error {
impl From<TryFromIntError> for Error {
fn from(_: TryFromIntError) -> Self {
Error::Value("Int is too large".to_string())
Self::Value("Int is too large".to_string())
}
}
impl From<Utf8Error> for Error {
fn from(_: Utf8Error) -> Self {
Error::Value("Invalid utf8 str".to_string())
Self::Value("Invalid utf8 str".to_string())
}
}


@ -3,7 +3,7 @@ use bytes::{Buf, BufMut};
use crate::error::*;
#[inline]
pub fn checked_read_primitive<B: Buf, R: Sized>(
pub(crate) fn checked_read_primitive<B: Buf, R: Sized>(
b: &mut B,
read: impl Fn(&mut B) -> R,
) -> Result<R> {
@ -11,16 +11,20 @@ pub fn checked_read_primitive<B: Buf, R: Sized>(
}
#[inline]
pub fn checked_read<B: Buf, R>(b: &mut B, read: impl Fn(&mut B) -> R, size: usize) -> Result<R> {
pub(crate) fn checked_read<B: Buf, R>(
b: &mut B,
read: impl Fn(&mut B) -> R,
size: usize,
) -> Result<R> {
if b.remaining() < size {
Err(Error::IO("Not enough bytes in buffer to build object."))?;
Err(Error::IO("Not enough bytes in buffer to build object."))
} else {
Ok(read(b))
}
Ok(read(b))
}
#[inline]
pub fn checked_write_primitive<B: BufMut, T: Sized>(
pub(crate) fn checked_write_primitive<B: BufMut, T: Sized>(
b: &mut B,
write: impl Fn(&mut B, T),
t: T,
@ -29,16 +33,16 @@ pub fn checked_write_primitive<B: BufMut, T: Sized>(
}
#[inline]
pub fn checked_write<B: BufMut, T>(
pub(crate) fn checked_write<B: BufMut, T>(
b: &mut B,
write: impl Fn(&mut B, T),
t: T,
size: usize,
) -> Result<()> {
if b.remaining_mut() < size {
Err(Error::IO("Not enough capacity to write object."))?;
Err(Error::IO("Not enough capacity to write object."))
} else {
write(b, t);
Ok(())
}
write(b, t);
Ok(())
}


@ -59,9 +59,12 @@
//!
//! ```
#[cfg(test)]
use hex as _;
extern crate alloc;
use core::{ops::Deref, str::from_utf8 as str_from_utf8};
use core::str::from_utf8 as str_from_utf8;
use bytes::{Buf, BufMut, Bytes, BytesMut};
@ -130,7 +133,7 @@ pub fn to_bytes<T: EpeeObject>(val: T) -> Result<BytesMut> {
fn read_header<B: Buf>(r: &mut B) -> Result<()> {
let buf = checked_read(r, |b: &mut B| b.copy_to_bytes(HEADER.len()), HEADER.len())?;
if buf.deref() != HEADER {
if &*buf != HEADER {
return Err(Error::Format("Data does not contain header"));
}
Ok(())
@ -185,7 +188,7 @@ fn read_object<T: EpeeObject, B: Buf>(r: &mut B, skipped_objects: &mut u8) -> Re
for _ in 0..number_o_field {
let field_name_bytes = read_field_name_bytes(r)?;
let field_name = str_from_utf8(field_name_bytes.deref())?;
let field_name = str_from_utf8(&field_name_bytes)?;
if !object_builder.add_field(field_name, r)? {
skip_epee_value(r, skipped_objects)?;
@ -289,7 +292,7 @@ where
B: BufMut,
{
write_varint(usize_to_u64(iterator.len()), w)?;
for item in iterator.into_iter() {
for item in iterator {
item.write(w)?;
}
Ok(())
@ -329,10 +332,7 @@ impl EpeeObject for SkipObject {
fn skip_epee_value<B: Buf>(r: &mut B, skipped_objects: &mut u8) -> Result<()> {
let marker = read_marker(r)?;
let mut len = 1;
if marker.is_seq {
len = read_varint(r)?;
}
let len = if marker.is_seq { read_varint(r)? } else { 1 };
if let Some(size) = marker.inner_marker.size() {
let bytes_to_skip = size


@ -19,13 +19,13 @@ pub enum InnerMarker {
}
impl InnerMarker {
pub fn size(&self) -> Option<usize> {
pub const fn size(&self) -> Option<usize> {
Some(match self {
InnerMarker::I64 | InnerMarker::U64 | InnerMarker::F64 => 8,
InnerMarker::I32 | InnerMarker::U32 => 4,
InnerMarker::I16 | InnerMarker::U16 => 2,
InnerMarker::I8 | InnerMarker::U8 | InnerMarker::Bool => 1,
InnerMarker::String | InnerMarker::Object => return None,
Self::I64 | Self::U64 | Self::F64 => 8,
Self::I32 | Self::U32 => 4,
Self::I16 | Self::U16 => 2,
Self::I8 | Self::U8 | Self::Bool => 1,
Self::String | Self::Object => return None,
})
}
}
@ -40,23 +40,23 @@ pub struct Marker {
impl Marker {
pub(crate) const fn new(inner_marker: InnerMarker) -> Self {
Marker {
Self {
inner_marker,
is_seq: false,
}
}
#[must_use]
pub const fn into_seq(self) -> Self {
if self.is_seq {
panic!("Sequence of sequence not allowed!");
}
assert!(!self.is_seq, "Sequence of sequence not allowed!");
if matches!(self.inner_marker, InnerMarker::U8) {
return Marker {
return Self {
inner_marker: InnerMarker::String,
is_seq: false,
};
}
Marker {
Self {
inner_marker: self.inner_marker,
is_seq: true,
}
@ -112,7 +112,7 @@ impl TryFrom<u8> for Marker {
_ => return Err(Error::Format("Unknown value Marker")),
};
Ok(Marker {
Ok(Self {
inner_marker,
is_seq,
})


@ -71,7 +71,7 @@ impl<T: EpeeObject> EpeeValue for Vec<T> {
let individual_marker = Marker::new(marker.inner_marker);
let mut res = Vec::with_capacity(len);
let mut res = Self::with_capacity(len);
for _ in 0..len {
res.push(T::read(r, &individual_marker)?);
}
@ -83,7 +83,7 @@ impl<T: EpeeObject> EpeeValue for Vec<T> {
}
fn epee_default_value() -> Option<Self> {
Some(Vec::new())
Some(Self::new())
}
fn write<B: BufMut>(self, w: &mut B) -> Result<()> {
@ -181,7 +181,7 @@ impl EpeeValue for Vec<u8> {
}
fn epee_default_value() -> Option<Self> {
Some(Vec::new())
Some(Self::new())
}
fn should_write(&self) -> bool {
@ -216,7 +216,7 @@ impl EpeeValue for Bytes {
}
fn epee_default_value() -> Option<Self> {
Some(Bytes::new())
Some(Self::new())
}
fn should_write(&self) -> bool {
@ -247,14 +247,14 @@ impl EpeeValue for BytesMut {
return Err(Error::IO("Not enough bytes to fill object"));
}
let mut bytes = BytesMut::zeroed(len);
let mut bytes = Self::zeroed(len);
r.copy_to_slice(&mut bytes);
Ok(bytes)
}
fn epee_default_value() -> Option<Self> {
Some(BytesMut::new())
Some(Self::new())
}
fn should_write(&self) -> bool {
@ -285,12 +285,11 @@ impl<const N: usize> EpeeValue for ByteArrayVec<N> {
return Err(Error::IO("Not enough bytes to fill object"));
}
ByteArrayVec::try_from(r.copy_to_bytes(len))
.map_err(|_| Error::Format("Field has invalid length"))
Self::try_from(r.copy_to_bytes(len)).map_err(|_| Error::Format("Field has invalid length"))
}
fn epee_default_value() -> Option<Self> {
Some(ByteArrayVec::try_from(Bytes::new()).unwrap())
Some(Self::try_from(Bytes::new()).unwrap())
}
fn should_write(&self) -> bool {
@ -320,8 +319,7 @@ impl<const N: usize> EpeeValue for ByteArray<N> {
return Err(Error::IO("Not enough bytes to fill object"));
}
ByteArray::try_from(r.copy_to_bytes(N))
.map_err(|_| Error::Format("Field has invalid length"))
Self::try_from(r.copy_to_bytes(N)).map_err(|_| Error::Format("Field has invalid length"))
}
fn write<B: BufMut>(self, w: &mut B) -> Result<()> {
@ -335,7 +333,7 @@ impl EpeeValue for String {
fn read<B: Buf>(r: &mut B, marker: &Marker) -> Result<Self> {
let bytes = Vec::<u8>::read(r, marker)?;
String::from_utf8(bytes).map_err(|_| Error::Format("Invalid string"))
Self::from_utf8(bytes).map_err(|_| Error::Format("Invalid string"))
}
fn should_write(&self) -> bool {
@ -343,7 +341,7 @@ impl EpeeValue for String {
}
fn epee_default_value() -> Option<Self> {
Some(String::new())
Some(Self::new())
}
fn write<B: BufMut>(self, w: &mut B) -> Result<()> {
@ -383,7 +381,7 @@ impl<const N: usize> EpeeValue for Vec<[u8; N]> {
let individual_marker = Marker::new(marker.inner_marker);
let mut res = Vec::with_capacity(len);
let mut res = Self::with_capacity(len);
for _ in 0..len {
res.push(<[u8; N]>::read(r, &individual_marker)?);
}
@ -395,7 +393,7 @@ impl<const N: usize> EpeeValue for Vec<[u8; N]> {
}
fn epee_default_value() -> Option<Self> {
Some(Vec::new())
Some(Self::new())
}
fn write<B: BufMut>(self, w: &mut B) -> Result<()> {


@ -21,14 +21,14 @@ const FITS_IN_FOUR_BYTES: u64 = 2_u64.pow(32 - SIZE_OF_SIZE_MARKER) - 1;
/// ```
pub fn read_varint<B: Buf>(r: &mut B) -> Result<u64> {
if !r.has_remaining() {
Err(Error::IO("Not enough bytes to build VarInt"))?
return Err(Error::IO("Not enough bytes to build VarInt"));
}
let vi_start = r.get_u8();
let len = 1 << (vi_start & 0b11);
if r.remaining() < len - 1 {
Err(Error::IO("Not enough bytes to build VarInt"))?
return Err(Error::IO("Not enough bytes to build VarInt"));
}
let mut vi = u64::from(vi_start >> 2);
@ -67,12 +67,15 @@ pub fn write_varint<B: BufMut>(number: u64, w: &mut B) -> Result<()> {
};
if w.remaining_mut() < 1 << size_marker {
Err(Error::IO("Not enough capacity to write VarInt"))?;
return Err(Error::IO("Not enough capacity to write VarInt"));
}
let number = (number << 2) | size_marker;
// Although `as` is unsafe we just checked the length.
#[expect(
clippy::cast_possible_truncation,
reason = "Although `as` is unsafe we just checked the length."
)]
match size_marker {
0 => w.put_u8(number as u8),
1 => w.put_u16_le(number as u16),


@ -1,3 +1,5 @@
#![expect(unused_crate_dependencies, reason = "outer test module")]
use cuprate_epee_encoding::{epee_object, from_bytes, to_bytes};
struct AltName {


@ -1,3 +1,5 @@
#![expect(unused_crate_dependencies, reason = "outer test module")]
use cuprate_epee_encoding::{epee_object, from_bytes};
struct T {


@ -1,3 +1,5 @@
#![expect(unused_crate_dependencies, reason = "outer test module")]
use cuprate_epee_encoding::{epee_object, from_bytes, to_bytes};
pub struct Optional {
@ -58,7 +60,7 @@ fn epee_non_default_does_encode() {
let val: Optional = from_bytes(&mut bytes).unwrap();
assert_eq!(val.optional_val, -3);
assert_eq!(val.val, 8)
assert_eq!(val.val, 8);
}
#[test]
@ -70,5 +72,5 @@ fn epee_value_not_present_with_default() {
let val: Optional = from_bytes(&mut bytes).unwrap();
assert_eq!(val.optional_val, -4);
assert_eq!(val.val, 76)
assert_eq!(val.val, 76);
}


@ -1,3 +1,5 @@
#![expect(unused_crate_dependencies, reason = "outer test module")]
use cuprate_epee_encoding::{epee_object, from_bytes, to_bytes};
struct Child {
@ -37,6 +39,7 @@ epee_object!(
);
#[test]
#[expect(clippy::float_cmp)]
fn epee_flatten() {
let val2 = ParentChild {
h: 38.9,


@ -1,5 +1,6 @@
#![expect(unused_crate_dependencies, reason = "outer test module")]
use cuprate_epee_encoding::{epee_object, from_bytes, to_bytes};
use std::ops::Deref;
#[derive(Clone)]
struct T {
@ -28,6 +29,6 @@ fn optional_val_in_data() {
];
let t: T = from_bytes(&mut &bytes[..]).unwrap();
let bytes2 = to_bytes(t.clone()).unwrap();
assert_eq!(bytes.as_slice(), bytes2.deref());
assert_eq!(bytes.as_slice(), &*bytes2);
assert_eq!(t.val.unwrap(), 21);
}


@ -1,3 +1,5 @@
#![expect(unused_crate_dependencies, reason = "outer test module")]
use cuprate_epee_encoding::{epee_object, from_bytes, to_bytes};
#[derive(Eq, PartialEq, Debug, Clone)]
@ -5,7 +7,7 @@ pub struct SupportFlags(u32);
impl From<u32> for SupportFlags {
fn from(value: u32) -> Self {
SupportFlags(value)
Self(value)
}
}


@ -1,3 +1,5 @@
#![expect(unused_crate_dependencies, reason = "outer test module")]
use cuprate_epee_encoding::{epee_object, from_bytes, to_bytes};
#[derive(Clone, Debug, PartialEq)]


@ -1,3 +1,5 @@
#![expect(unused_crate_dependencies, reason = "outer test module")]
use cuprate_epee_encoding::{epee_object, from_bytes};
struct ObjSeq {


@ -1,3 +1,5 @@
#![expect(unused_crate_dependencies, reason = "outer test module")]
use cuprate_epee_encoding::{epee_object, from_bytes};
struct D {
@ -737,5 +739,5 @@ fn stack_overflow() {
let obj: Result<Q, _> = from_bytes(&mut bytes.as_slice());
assert!(obj.is_err())
assert!(obj.is_err());
}


@ -17,3 +17,6 @@ serde = { workspace = true, features = ["derive"], optional = true }
[dev-dependencies]
serde_json = { workspace = true, features = ["std"] }
[lints]
workspace = true


@ -22,17 +22,15 @@ pub enum FixedByteError {
}
impl FixedByteError {
fn field_name(&self) -> &'static str {
const fn field_name(&self) -> &'static str {
match self {
FixedByteError::InvalidLength => "input",
Self::InvalidLength => "input",
}
}
fn field_data(&self) -> &'static str {
const fn field_data(&self) -> &'static str {
match self {
FixedByteError::InvalidLength => {
"Cannot create fix byte array, input has invalid length."
}
Self::InvalidLength => "Cannot create fix byte array, input has invalid length.",
}
}
}
@ -82,7 +80,7 @@ impl<const N: usize> ByteArray<N> {
impl<const N: usize> From<[u8; N]> for ByteArray<N> {
fn from(value: [u8; N]) -> Self {
ByteArray(Bytes::copy_from_slice(&value))
Self(Bytes::copy_from_slice(&value))
}
}
@ -101,7 +99,7 @@ impl<const N: usize> TryFrom<Bytes> for ByteArray<N> {
if value.len() != N {
return Err(FixedByteError::InvalidLength);
}
Ok(ByteArray(value))
Ok(Self(value))
}
}
@ -112,7 +110,7 @@ impl<const N: usize> TryFrom<Vec<u8>> for ByteArray<N> {
if value.len() != N {
return Err(FixedByteError::InvalidLength);
}
Ok(ByteArray(Bytes::from(value)))
Ok(Self(Bytes::from(value)))
}
}
@ -142,11 +140,11 @@ impl<'de, const N: usize> Deserialize<'de> for ByteArrayVec<N> {
}
impl<const N: usize> ByteArrayVec<N> {
pub fn len(&self) -> usize {
pub const fn len(&self) -> usize {
self.0.len() / N
}
pub fn is_empty(&self) -> bool {
pub const fn is_empty(&self) -> bool {
self.len() == 0
}
@ -182,6 +180,7 @@ impl<const N: usize> ByteArrayVec<N> {
///
/// # Panics
/// Panics if at > len.
#[must_use]
pub fn split_off(&mut self, at: usize) -> Self {
Self(self.0.split_off(at * N))
}
@ -189,9 +188,9 @@ impl<const N: usize> ByteArrayVec<N> {
impl<const N: usize> From<&ByteArrayVec<N>> for Vec<[u8; N]> {
fn from(value: &ByteArrayVec<N>) -> Self {
let mut out = Vec::with_capacity(value.len());
let mut out = Self::with_capacity(value.len());
for i in 0..value.len() {
out.push(value[i])
out.push(value[i]);
}
out
@ -201,11 +200,11 @@ impl<const N: usize> From<&ByteArrayVec<N>> for Vec<[u8; N]> {
impl<const N: usize> From<Vec<[u8; N]>> for ByteArrayVec<N> {
fn from(value: Vec<[u8; N]>) -> Self {
let mut bytes = BytesMut::with_capacity(N * value.len());
for i in value.into_iter() {
bytes.extend_from_slice(&i)
for i in value {
bytes.extend_from_slice(&i);
}
ByteArrayVec(bytes.freeze())
Self(bytes.freeze())
}
}
@ -217,13 +216,13 @@ impl<const N: usize> TryFrom<Bytes> for ByteArrayVec<N> {
return Err(FixedByteError::InvalidLength);
}
Ok(ByteArrayVec(value))
Ok(Self(value))
}
}
impl<const N: usize> From<[u8; N]> for ByteArrayVec<N> {
fn from(value: [u8; N]) -> Self {
ByteArrayVec(Bytes::copy_from_slice(value.as_slice()))
Self(Bytes::copy_from_slice(value.as_slice()))
}
}
@ -231,11 +230,11 @@ impl<const N: usize, const LEN: usize> From<[[u8; N]; LEN]> for ByteArrayVec<N>
fn from(value: [[u8; N]; LEN]) -> Self {
let mut bytes = BytesMut::with_capacity(N * LEN);
for val in value.into_iter() {
for val in value {
bytes.put_slice(val.as_slice());
}
ByteArrayVec(bytes.freeze())
Self(bytes.freeze())
}
}
@ -247,7 +246,7 @@ impl<const N: usize> TryFrom<Vec<u8>> for ByteArrayVec<N> {
return Err(FixedByteError::InvalidLength);
}
Ok(ByteArrayVec(Bytes::from(value)))
Ok(Self(Bytes::from(value)))
}
}
@ -255,9 +254,12 @@ impl<const N: usize> Index<usize> for ByteArrayVec<N> {
type Output = [u8; N];
fn index(&self, index: usize) -> &Self::Output {
if (index + 1) * N > self.0.len() {
panic!("Index out of range, idx: {}, length: {}", index, self.len());
}
assert!(
(index + 1) * N <= self.0.len(),
"Index out of range, idx: {}, length: {}",
index,
self.len()
);
self.0[index * N..(index + 1) * N]
.as_ref()


@ -14,6 +14,7 @@ tracing = ["dep:tracing", "tokio-util/tracing"]
[dependencies]
cuprate-helper = { path = "../../helper", default-features = false, features = ["cast"] }
cfg-if = { workspace = true }
thiserror = { workspace = true }
bytes = { workspace = true, features = ["std"] }
bitflags = { workspace = true }
@ -26,4 +27,7 @@ proptest = { workspace = true }
rand = { workspace = true, features = ["std", "std_rng"] }
tokio-util = { workspace = true, features = ["io-util"]}
tokio = { workspace = true, features = ["full"] }
futures = { workspace = true, features = ["std"] }
futures = { workspace = true, features = ["std"] }
[lints]
workspace = true


@ -47,7 +47,7 @@ pub struct LevinBucketCodec<C> {
impl<C> Default for LevinBucketCodec<C> {
fn default() -> Self {
LevinBucketCodec {
Self {
state: LevinBucketState::WaitingForHeader,
protocol: Protocol::default(),
handshake_message_seen: false,
@ -56,8 +56,8 @@ impl<C> Default for LevinBucketCodec<C> {
}
impl<C> LevinBucketCodec<C> {
pub fn new(protocol: Protocol) -> Self {
LevinBucketCodec {
pub const fn new(protocol: Protocol) -> Self {
Self {
state: LevinBucketState::WaitingForHeader,
protocol,
handshake_message_seen: false,
@ -112,8 +112,10 @@ impl<C: LevinCommand + Debug> Decoder for LevinBucketCodec<C> {
}
}
let _ =
std::mem::replace(&mut self.state, LevinBucketState::WaitingForBody(head));
drop(std::mem::replace(
&mut self.state,
LevinBucketState::WaitingForBody(head),
));
}
LevinBucketState::WaitingForBody(head) => {
let body_len = u64_to_usize(head.size);
@ -145,7 +147,7 @@ impl<C: LevinCommand> Encoder<Bucket<C>> for LevinBucketCodec<C> {
type Error = BucketError;
fn encode(&mut self, item: Bucket<C>, dst: &mut BytesMut) -> Result<(), Self::Error> {
if let Some(additional) = (HEADER_SIZE + item.body.len()).checked_sub(dst.capacity()) {
dst.reserve(additional)
dst.reserve(additional);
}
item.header.write_bytes_into(dst);


@ -13,7 +13,7 @@
// copies or substantial portions of the Software.
//
//! This module provides a struct BucketHead for the header of a levin protocol
//! This module provides a struct `BucketHead` for the header of a levin protocol
//! message.
use bitflags::bitflags;
@ -62,7 +62,7 @@ bitflags! {
impl From<u32> for Flags {
fn from(value: u32) -> Self {
Flags(value)
Self(value)
}
}
@ -99,9 +99,9 @@ impl<C: LevinCommand> BucketHead<C> {
///
/// # Panics
/// This function will panic if there aren't enough bytes to fill the header.
/// Currently [HEADER_SIZE]
pub fn from_bytes(buf: &mut BytesMut) -> BucketHead<C> {
BucketHead {
/// Currently [`HEADER_SIZE`]
pub fn from_bytes(buf: &mut BytesMut) -> Self {
Self {
signature: buf.get_u64_le(),
size: buf.get_u64_le(),
have_to_return_data: buf.get_u8() != 0,


@ -33,6 +33,16 @@
#![deny(unused_mut)]
//#![deny(missing_docs)]
cfg_if::cfg_if! {
// Used in `tests/`.
if #[cfg(test)] {
use futures as _;
use proptest as _;
use rand as _;
use tokio as _;
}
}
use std::fmt::Debug;
use bytes::{Buf, Bytes};
@ -99,7 +109,7 @@ pub struct Protocol {
impl Default for Protocol {
fn default() -> Self {
Protocol {
Self {
version: MONERO_PROTOCOL_VERSION,
signature: MONERO_LEVIN_SIGNATURE,
max_packet_size_before_handshake: MONERO_MAX_PACKET_SIZE_BEFORE_HANDSHAKE,
@ -130,22 +140,22 @@ pub enum MessageType {
impl MessageType {
/// Returns if the message requires a response
pub fn have_to_return_data(&self) -> bool {
pub const fn have_to_return_data(&self) -> bool {
match self {
MessageType::Request => true,
MessageType::Response | MessageType::Notification => false,
Self::Request => true,
Self::Response | Self::Notification => false,
}
}
/// Returns the `MessageType` given the flags and have_to_return_data fields
pub fn from_flags_and_have_to_return(
/// Returns the `MessageType` given the flags and `have_to_return_data` fields
pub const fn from_flags_and_have_to_return(
flags: Flags,
have_to_return: bool,
) -> Result<Self, BucketError> {
Ok(match (flags, have_to_return) {
(Flags::REQUEST, true) => MessageType::Request,
(Flags::REQUEST, false) => MessageType::Notification,
(Flags::RESPONSE, false) => MessageType::Response,
(Flags::REQUEST, true) => Self::Request,
(Flags::REQUEST, false) => Self::Notification,
(Flags::RESPONSE, false) => Self::Response,
_ => {
return Err(BucketError::InvalidHeaderFlags(
"Unable to assign a message type to this bucket",
@ -154,10 +164,10 @@ impl MessageType {
})
}
pub fn as_flags(&self) -> header::Flags {
pub const fn as_flags(&self) -> Flags {
match self {
MessageType::Request | MessageType::Notification => header::Flags::REQUEST,
MessageType::Response => header::Flags::RESPONSE,
Self::Request | Self::Notification => Flags::REQUEST,
Self::Response => Flags::RESPONSE,
}
}
}
@ -173,7 +183,7 @@ pub struct BucketBuilder<C> {
}
impl<C: LevinCommand> BucketBuilder<C> {
pub fn new(protocol: &Protocol) -> Self {
pub const fn new(protocol: &Protocol) -> Self {
Self {
signature: Some(protocol.signature),
ty: None,
@ -185,27 +195,27 @@ impl<C: LevinCommand> BucketBuilder<C> {
}
pub fn set_signature(&mut self, sig: u64) {
self.signature = Some(sig)
self.signature = Some(sig);
}
pub fn set_message_type(&mut self, ty: MessageType) {
self.ty = Some(ty)
self.ty = Some(ty);
}
pub fn set_command(&mut self, command: C) {
self.command = Some(command)
self.command = Some(command);
}
pub fn set_return_code(&mut self, code: i32) {
self.return_code = Some(code)
self.return_code = Some(code);
}
pub fn set_protocol_version(&mut self, version: u32) {
self.protocol_version = Some(version)
self.protocol_version = Some(version);
}
pub fn set_body(&mut self, body: Bytes) {
self.body = Some(body)
self.body = Some(body);
}
pub fn finish(self) -> Bucket<C> {


@ -33,13 +33,13 @@ pub enum LevinMessage<T: LevinBody> {
impl<T: LevinBody> From<T> for LevinMessage<T> {
fn from(value: T) -> Self {
LevinMessage::Body(value)
Self::Body(value)
}
}
impl<T: LevinBody> From<Bucket<T::Command>> for LevinMessage<T> {
fn from(value: Bucket<T::Command>) -> Self {
LevinMessage::Bucket(value)
Self::Bucket(value)
}
}
@ -58,7 +58,7 @@ pub struct Dummy(pub usize);
impl<T: LevinBody> From<Dummy> for LevinMessage<T> {
fn from(value: Dummy) -> Self {
LevinMessage::Dummy(value.0)
Self::Dummy(value.0)
}
}
@ -76,12 +76,11 @@ pub fn make_fragmented_messages<T: LevinBody>(
fragment_size: usize,
message: T,
) -> Result<Vec<Bucket<T::Command>>, BucketError> {
if fragment_size * 2 < HEADER_SIZE {
panic!(
"Fragment size: {fragment_size}, is too small, must be at least {}",
2 * HEADER_SIZE
);
}
assert!(
fragment_size * 2 >= HEADER_SIZE,
"Fragment size: {fragment_size}, is too small, must be at least {}",
2 * HEADER_SIZE
);
let mut builder = BucketBuilder::new(protocol);
message.encode(&mut builder)?;


@ -1,3 +1,9 @@
#![expect(
clippy::tests_outside_test_module,
unused_crate_dependencies,
reason = "outer test module"
)]
use bytes::{Buf, BufMut, Bytes, BytesMut};
use futures::{SinkExt, StreamExt};
use proptest::{prelude::any_with, prop_assert_eq, proptest, sample::size_range};
@ -58,12 +64,12 @@ impl LevinBody for TestBody {
) -> Result<Self, BucketError> {
let size = u64_to_usize(body.get_u64_le());
// bucket
Ok(TestBody::Bytes(size, body.copy_to_bytes(size)))
Ok(Self::Bytes(size, body.copy_to_bytes(size)))
}
fn encode(self, builder: &mut BucketBuilder<Self::Command>) -> Result<(), BucketError> {
match self {
TestBody::Bytes(len, bytes) => {
Self::Bytes(len, bytes) => {
let mut buf = BytesMut::new();
buf.put_u64_le(len as u64);
buf.extend_from_slice(bytes.as_ref());
@ -141,12 +147,12 @@ proptest! {
message2.extend_from_slice(&fragments[0].body[(33 + 8)..]);
for frag in fragments.iter().skip(1) {
message2.extend_from_slice(frag.body.as_ref())
message2.extend_from_slice(frag.body.as_ref());
}
prop_assert_eq!(message.as_slice(), &message2[0..message.len()], "numb_fragments: {}", fragments.len());
for byte in message2[message.len()..].iter(){
for byte in &message2[message.len()..]{
prop_assert_eq!(*byte, 0);
}
}


@ -15,7 +15,7 @@ cuprate-levin = { path = "../levin" }
cuprate-epee-encoding = { path = "../epee-encoding" }
cuprate-fixed-bytes = { path = "../fixed-bytes" }
cuprate-types = { path = "../../types", default-features = false, features = ["epee"] }
cuprate-helper = { path = "../../helper", default-features = false, features = ["cast"] }
cuprate-helper = { path = "../../helper", default-features = false, features = ["map"] }
bitflags = { workspace = true, features = ["std"] }
bytes = { workspace = true, features = ["std"] }
@ -24,3 +24,5 @@ thiserror = { workspace = true }
[dev-dependencies]
hex = { workspace = true, features = ["std"]}
[lints]
workspace = true


@ -51,38 +51,38 @@ impl EpeeObject for NetworkAddress {
}
impl NetworkAddress {
pub fn get_zone(&self) -> NetZone {
pub const fn get_zone(&self) -> NetZone {
match self {
NetworkAddress::Clear(_) => NetZone::Public,
Self::Clear(_) => NetZone::Public,
}
}
pub fn is_loopback(&self) -> bool {
pub const fn is_loopback(&self) -> bool {
// TODO
false
}
pub fn is_local(&self) -> bool {
pub const fn is_local(&self) -> bool {
// TODO
false
}
pub fn port(&self) -> u16 {
pub const fn port(&self) -> u16 {
match self {
NetworkAddress::Clear(ip) => ip.port(),
Self::Clear(ip) => ip.port(),
}
}
}
impl From<net::SocketAddrV4> for NetworkAddress {
fn from(value: net::SocketAddrV4) -> Self {
NetworkAddress::Clear(value.into())
Self::Clear(value.into())
}
}
impl From<net::SocketAddrV6> for NetworkAddress {
fn from(value: net::SocketAddrV6) -> Self {
NetworkAddress::Clear(value.into())
Self::Clear(value.into())
}
}


@ -74,7 +74,7 @@ impl From<NetworkAddress> for TaggedNetworkAddress {
fn from(value: NetworkAddress) -> Self {
match value {
NetworkAddress::Clear(addr) => match addr {
SocketAddr::V4(addr) => TaggedNetworkAddress {
SocketAddr::V4(addr) => Self {
ty: Some(1),
addr: Some(AllFieldsNetworkAddress {
m_ip: Some(u32::from_be_bytes(addr.ip().octets())),
@ -82,7 +82,7 @@ impl From<NetworkAddress> for TaggedNetworkAddress {
addr: None,
}),
},
SocketAddr::V6(addr) => TaggedNetworkAddress {
SocketAddr::V6(addr) => Self {
ty: Some(2),
addr: Some(AllFieldsNetworkAddress {
addr: Some(addr.ip().octets()),


@ -55,27 +55,27 @@ pub enum LevinCommand {
impl std::fmt::Display for LevinCommand {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
if let LevinCommand::Unknown(id) = self {
return f.write_str(&format!("unknown id: {}", id));
if let Self::Unknown(id) = self {
return f.write_str(&format!("unknown id: {id}"));
}
f.write_str(match self {
LevinCommand::Handshake => "handshake",
LevinCommand::TimedSync => "timed sync",
LevinCommand::Ping => "ping",
LevinCommand::SupportFlags => "support flags",
Self::Handshake => "handshake",
Self::TimedSync => "timed sync",
Self::Ping => "ping",
Self::SupportFlags => "support flags",
LevinCommand::NewBlock => "new block",
LevinCommand::NewTransactions => "new transactions",
LevinCommand::GetObjectsRequest => "get objects request",
LevinCommand::GetObjectsResponse => "get objects response",
LevinCommand::ChainRequest => "chain request",
LevinCommand::ChainResponse => "chain response",
LevinCommand::NewFluffyBlock => "new fluffy block",
LevinCommand::FluffyMissingTxsRequest => "fluffy missing transaction request",
LevinCommand::GetTxPoolCompliment => "get transaction pool compliment",
Self::NewBlock => "new block",
Self::NewTransactions => "new transactions",
Self::GetObjectsRequest => "get objects request",
Self::GetObjectsResponse => "get objects response",
Self::ChainRequest => "chain request",
Self::ChainResponse => "chain response",
Self::NewFluffyBlock => "new fluffy block",
Self::FluffyMissingTxsRequest => "fluffy missing transaction request",
Self::GetTxPoolCompliment => "get transaction pool compliment",
LevinCommand::Unknown(_) => unreachable!(),
Self::Unknown(_) => unreachable!(),
})
}
}
@ -83,50 +83,51 @@ impl std::fmt::Display for LevinCommand {
impl LevinCommandTrait for LevinCommand {
fn bucket_size_limit(&self) -> u64 {
// https://github.com/monero-project/monero/blob/00fd416a99686f0956361d1cd0337fe56e58d4a7/src/cryptonote_basic/connection_context.cpp#L37
#[expect(clippy::match_same_arms, reason = "formatting is more clear")]
match self {
LevinCommand::Handshake => 65536,
LevinCommand::TimedSync => 65536,
LevinCommand::Ping => 4096,
LevinCommand::SupportFlags => 4096,
Self::Handshake => 65536,
Self::TimedSync => 65536,
Self::Ping => 4096,
Self::SupportFlags => 4096,
LevinCommand::NewBlock => 1024 * 1024 * 128, // 128 MB (max packet is a bit less than 100 MB though)
LevinCommand::NewTransactions => 1024 * 1024 * 128, // 128 MB (max packet is a bit less than 100 MB though)
LevinCommand::GetObjectsRequest => 1024 * 1024 * 2, // 2 MB
LevinCommand::GetObjectsResponse => 1024 * 1024 * 128, // 128 MB (max packet is a bit less than 100 MB though)
LevinCommand::ChainRequest => 512 * 1024, // 512 kB
LevinCommand::ChainResponse => 1024 * 1024 * 4, // 4 MB
LevinCommand::NewFluffyBlock => 1024 * 1024 * 4, // 4 MB
LevinCommand::FluffyMissingTxsRequest => 1024 * 1024, // 1 MB
LevinCommand::GetTxPoolCompliment => 1024 * 1024 * 4, // 4 MB
Self::NewBlock => 1024 * 1024 * 128, // 128 MB (max packet is a bit less than 100 MB though)
Self::NewTransactions => 1024 * 1024 * 128, // 128 MB (max packet is a bit less than 100 MB though)
Self::GetObjectsRequest => 1024 * 1024 * 2, // 2 MB
Self::GetObjectsResponse => 1024 * 1024 * 128, // 128 MB (max packet is a bit less than 100 MB though)
Self::ChainRequest => 512 * 1024, // 512 kB
Self::ChainResponse => 1024 * 1024 * 4, // 4 MB
Self::NewFluffyBlock => 1024 * 1024 * 4, // 4 MB
Self::FluffyMissingTxsRequest => 1024 * 1024, // 1 MB
Self::GetTxPoolCompliment => 1024 * 1024 * 4, // 4 MB
LevinCommand::Unknown(_) => u64::MAX,
Self::Unknown(_) => u64::MAX,
}
}
fn is_handshake(&self) -> bool {
matches!(self, LevinCommand::Handshake)
matches!(self, Self::Handshake)
}
}
impl From<u32> for LevinCommand {
fn from(value: u32) -> Self {
match value {
1001 => LevinCommand::Handshake,
1002 => LevinCommand::TimedSync,
1003 => LevinCommand::Ping,
1007 => LevinCommand::SupportFlags,
1001 => Self::Handshake,
1002 => Self::TimedSync,
1003 => Self::Ping,
1007 => Self::SupportFlags,
2001 => LevinCommand::NewBlock,
2002 => LevinCommand::NewTransactions,
2003 => LevinCommand::GetObjectsRequest,
2004 => LevinCommand::GetObjectsResponse,
2006 => LevinCommand::ChainRequest,
2007 => LevinCommand::ChainResponse,
2008 => LevinCommand::NewFluffyBlock,
2009 => LevinCommand::FluffyMissingTxsRequest,
2010 => LevinCommand::GetTxPoolCompliment,
2001 => Self::NewBlock,
2002 => Self::NewTransactions,
2003 => Self::GetObjectsRequest,
2004 => Self::GetObjectsResponse,
2006 => Self::ChainRequest,
2007 => Self::ChainResponse,
2008 => Self::NewFluffyBlock,
2009 => Self::FluffyMissingTxsRequest,
2010 => Self::GetTxPoolCompliment,
x => LevinCommand::Unknown(x),
x => Self::Unknown(x),
}
}
}
@ -191,19 +192,19 @@ pub enum ProtocolMessage {
}
impl ProtocolMessage {
pub fn command(&self) -> LevinCommand {
pub const fn command(&self) -> LevinCommand {
use LevinCommand as C;
match self {
ProtocolMessage::NewBlock(_) => C::NewBlock,
ProtocolMessage::NewFluffyBlock(_) => C::NewFluffyBlock,
ProtocolMessage::GetObjectsRequest(_) => C::GetObjectsRequest,
ProtocolMessage::GetObjectsResponse(_) => C::GetObjectsResponse,
ProtocolMessage::ChainRequest(_) => C::ChainRequest,
ProtocolMessage::ChainEntryResponse(_) => C::ChainResponse,
ProtocolMessage::NewTransactions(_) => C::NewTransactions,
ProtocolMessage::FluffyMissingTransactionsRequest(_) => C::FluffyMissingTxsRequest,
ProtocolMessage::GetTxPoolCompliment(_) => C::GetTxPoolCompliment,
Self::NewBlock(_) => C::NewBlock,
Self::NewFluffyBlock(_) => C::NewFluffyBlock,
Self::GetObjectsRequest(_) => C::GetObjectsRequest,
Self::GetObjectsResponse(_) => C::GetObjectsResponse,
Self::ChainRequest(_) => C::ChainRequest,
Self::ChainEntryResponse(_) => C::ChainResponse,
Self::NewTransactions(_) => C::NewTransactions,
Self::FluffyMissingTransactionsRequest(_) => C::FluffyMissingTxsRequest,
Self::GetTxPoolCompliment(_) => C::GetTxPoolCompliment,
}
}
@ -230,26 +231,26 @@ impl ProtocolMessage {
use LevinCommand as C;
match self {
ProtocolMessage::NewBlock(val) => build_message(C::NewBlock, val, builder)?,
ProtocolMessage::NewTransactions(val) => {
build_message(C::NewTransactions, val, builder)?
Self::NewBlock(val) => build_message(C::NewBlock, val, builder)?,
Self::NewTransactions(val) => {
build_message(C::NewTransactions, val, builder)?;
}
ProtocolMessage::GetObjectsRequest(val) => {
build_message(C::GetObjectsRequest, val, builder)?
Self::GetObjectsRequest(val) => {
build_message(C::GetObjectsRequest, val, builder)?;
}
ProtocolMessage::GetObjectsResponse(val) => {
build_message(C::GetObjectsResponse, val, builder)?
Self::GetObjectsResponse(val) => {
build_message(C::GetObjectsResponse, val, builder)?;
}
ProtocolMessage::ChainRequest(val) => build_message(C::ChainRequest, val, builder)?,
ProtocolMessage::ChainEntryResponse(val) => {
build_message(C::ChainResponse, val, builder)?
Self::ChainRequest(val) => build_message(C::ChainRequest, val, builder)?,
Self::ChainEntryResponse(val) => {
build_message(C::ChainResponse, val, builder)?;
}
ProtocolMessage::NewFluffyBlock(val) => build_message(C::NewFluffyBlock, val, builder)?,
ProtocolMessage::FluffyMissingTransactionsRequest(val) => {
build_message(C::FluffyMissingTxsRequest, val, builder)?
Self::NewFluffyBlock(val) => build_message(C::NewFluffyBlock, val, builder)?,
Self::FluffyMissingTransactionsRequest(val) => {
build_message(C::FluffyMissingTxsRequest, val, builder)?;
}
ProtocolMessage::GetTxPoolCompliment(val) => {
build_message(C::GetTxPoolCompliment, val, builder)?
Self::GetTxPoolCompliment(val) => {
build_message(C::GetTxPoolCompliment, val, builder)?;
}
}
Ok(())
@ -265,14 +266,14 @@ pub enum AdminRequestMessage {
}
impl AdminRequestMessage {
pub fn command(&self) -> LevinCommand {
pub const fn command(&self) -> LevinCommand {
use LevinCommand as C;
match self {
AdminRequestMessage::Handshake(_) => C::Handshake,
AdminRequestMessage::Ping => C::Ping,
AdminRequestMessage::SupportFlags => C::SupportFlags,
AdminRequestMessage::TimedSync(_) => C::TimedSync,
Self::Handshake(_) => C::Handshake,
Self::Ping => C::Ping,
Self::SupportFlags => C::SupportFlags,
Self::TimedSync(_) => C::TimedSync,
}
}
@ -286,13 +287,13 @@ impl AdminRequestMessage {
cuprate_epee_encoding::from_bytes::<EmptyMessage, _>(buf)
.map_err(|e| BucketError::BodyDecodingError(e.into()))?;
AdminRequestMessage::Ping
Self::Ping
}
C::SupportFlags => {
cuprate_epee_encoding::from_bytes::<EmptyMessage, _>(buf)
.map_err(|e| BucketError::BodyDecodingError(e.into()))?;
AdminRequestMessage::SupportFlags
Self::SupportFlags
}
_ => return Err(BucketError::UnknownCommand),
})
@ -302,11 +303,11 @@ impl AdminRequestMessage {
use LevinCommand as C;
match self {
AdminRequestMessage::Handshake(val) => build_message(C::Handshake, val, builder)?,
AdminRequestMessage::TimedSync(val) => build_message(C::TimedSync, val, builder)?,
AdminRequestMessage::Ping => build_message(C::Ping, EmptyMessage, builder)?,
AdminRequestMessage::SupportFlags => {
build_message(C::SupportFlags, EmptyMessage, builder)?
Self::Handshake(val) => build_message(C::Handshake, val, builder)?,
Self::TimedSync(val) => build_message(C::TimedSync, val, builder)?,
Self::Ping => build_message(C::Ping, EmptyMessage, builder)?,
Self::SupportFlags => {
build_message(C::SupportFlags, EmptyMessage, builder)?;
}
}
Ok(())
@ -322,14 +323,14 @@ pub enum AdminResponseMessage {
}
impl AdminResponseMessage {
pub fn command(&self) -> LevinCommand {
pub const fn command(&self) -> LevinCommand {
use LevinCommand as C;
match self {
AdminResponseMessage::Handshake(_) => C::Handshake,
AdminResponseMessage::Ping(_) => C::Ping,
AdminResponseMessage::SupportFlags(_) => C::SupportFlags,
AdminResponseMessage::TimedSync(_) => C::TimedSync,
Self::Handshake(_) => C::Handshake,
Self::Ping(_) => C::Ping,
Self::SupportFlags(_) => C::SupportFlags,
Self::TimedSync(_) => C::TimedSync,
}
}
@ -349,11 +350,11 @@ impl AdminResponseMessage {
use LevinCommand as C;
match self {
AdminResponseMessage::Handshake(val) => build_message(C::Handshake, val, builder)?,
AdminResponseMessage::TimedSync(val) => build_message(C::TimedSync, val, builder)?,
AdminResponseMessage::Ping(val) => build_message(C::Ping, val, builder)?,
AdminResponseMessage::SupportFlags(val) => {
build_message(C::SupportFlags, val, builder)?
Self::Handshake(val) => build_message(C::Handshake, val, builder)?,
Self::TimedSync(val) => build_message(C::TimedSync, val, builder)?,
Self::Ping(val) => build_message(C::Ping, val, builder)?,
Self::SupportFlags(val) => {
build_message(C::SupportFlags, val, builder)?;
}
}
Ok(())
@ -368,23 +369,23 @@ pub enum Message {
}
impl Message {
pub fn is_request(&self) -> bool {
matches!(self, Message::Request(_))
pub const fn is_request(&self) -> bool {
matches!(self, Self::Request(_))
}
pub fn is_response(&self) -> bool {
matches!(self, Message::Response(_))
pub const fn is_response(&self) -> bool {
matches!(self, Self::Response(_))
}
pub fn is_protocol(&self) -> bool {
matches!(self, Message::Protocol(_))
pub const fn is_protocol(&self) -> bool {
matches!(self, Self::Protocol(_))
}
pub fn command(&self) -> LevinCommand {
pub const fn command(&self) -> LevinCommand {
match self {
Message::Request(mes) => mes.command(),
Message::Response(mes) => mes.command(),
Message::Protocol(mes) => mes.command(),
Self::Request(mes) => mes.command(),
Self::Response(mes) => mes.command(),
Self::Protocol(mes) => mes.command(),
}
}
}
@ -398,27 +399,25 @@ impl LevinBody for Message {
command: LevinCommand,
) -> Result<Self, BucketError> {
Ok(match typ {
MessageType::Request => Message::Request(AdminRequestMessage::decode(body, command)?),
MessageType::Response => {
Message::Response(AdminResponseMessage::decode(body, command)?)
}
MessageType::Notification => Message::Protocol(ProtocolMessage::decode(body, command)?),
MessageType::Request => Self::Request(AdminRequestMessage::decode(body, command)?),
MessageType::Response => Self::Response(AdminResponseMessage::decode(body, command)?),
MessageType::Notification => Self::Protocol(ProtocolMessage::decode(body, command)?),
})
}
fn encode(self, builder: &mut BucketBuilder<LevinCommand>) -> Result<(), BucketError> {
match self {
Message::Protocol(pro) => {
Self::Protocol(pro) => {
builder.set_message_type(MessageType::Notification);
builder.set_return_code(0);
pro.build(builder)
}
Message::Request(req) => {
Self::Request(req) => {
builder.set_message_type(MessageType::Request);
builder.set_return_code(0);
req.build(builder)
}
Message::Response(res) => {
Self::Response(res) => {
builder.set_message_type(MessageType::Response);
builder.set_return_code(1);
res.build(builder)


@ -45,7 +45,7 @@ pub struct HandshakeResponse {
pub node_data: BasicNodeData,
/// Core Sync Data
pub payload_data: CoreSyncData,
/// PeerList
/// `PeerList`
pub local_peerlist_new: Vec<PeerListEntryBase>,
}
@ -56,7 +56,7 @@ epee_object!(
local_peerlist_new: Vec<PeerListEntryBase>,
);
/// A TimedSync Request
/// A `TimedSync` Request
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct TimedSyncRequest {
/// Core Sync Data
@ -68,12 +68,12 @@ epee_object!(
payload_data: CoreSyncData,
);
/// A TimedSync Response
/// A `TimedSync` Response
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct TimedSyncResponse {
/// Core Sync Data
pub payload_data: CoreSyncData,
/// PeerList
/// `PeerList`
pub local_peerlist_new: Vec<PeerListEntryBase>,
}


@ -18,6 +18,7 @@
use bitflags::bitflags;
use cuprate_epee_encoding::epee_object;
use cuprate_helper::map::split_u128_into_low_high_bits;
pub use cuprate_types::{BlockCompleteEntry, PrunedTxBlobEntry, TransactionBlobs};
use crate::NetworkAddress;
@ -34,7 +35,7 @@ bitflags! {
impl From<u32> for PeerSupportFlags {
fn from(value: u32) -> Self {
PeerSupportFlags(value)
Self(value)
}
}
@ -113,16 +114,17 @@ epee_object! {
}
impl CoreSyncData {
pub fn new(
pub const fn new(
cumulative_difficulty_128: u128,
current_height: u64,
pruning_seed: u32,
top_id: [u8; 32],
top_version: u8,
) -> CoreSyncData {
let cumulative_difficulty = cumulative_difficulty_128 as u64;
let cumulative_difficulty_top64 = (cumulative_difficulty_128 >> 64) as u64;
CoreSyncData {
) -> Self {
let (cumulative_difficulty, cumulative_difficulty_top64) =
split_u128_into_low_high_bits(cumulative_difficulty_128);
Self {
cumulative_difficulty,
cumulative_difficulty_top64,
current_height,
@ -139,7 +141,7 @@ impl CoreSyncData {
}
}
/// PeerListEntryBase, information kept on a peer which will be entered
/// `PeerListEntryBase`, information kept on a peer which will be entered
/// in a peer list/store.
#[derive(Clone, Copy, Debug, Eq, PartialEq)]
pub struct PeerListEntryBase {


@ -127,7 +127,7 @@ pub struct ChainResponse {
impl ChainResponse {
#[inline]
pub fn cumulative_difficulty(&self) -> u128 {
pub const fn cumulative_difficulty(&self) -> u128 {
let cumulative_difficulty = self.cumulative_difficulty_top64 as u128;
cumulative_difficulty << 64 | self.cumulative_difficulty_low64 as u128
}
@ -159,7 +159,7 @@ epee_object!(
current_blockchain_height: u64,
);
/// A request for Txs we are missing from our TxPool
/// A request for Txs we are missing from our `TxPool`
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct FluffyMissingTransactionsRequest {
/// The Block we are missing the Txs in
@ -177,7 +177,7 @@ epee_object!(
missing_tx_indices: Vec<u64> as ContainerAsBlob<u64>,
);
/// TxPoolCompliment
/// `TxPoolCompliment`
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct GetTxPoolCompliment {
/// Tx Hashes


@ -8,7 +8,6 @@ authors = ["Boog900"]
[dependencies]
cuprate-pruning = { path = "../../pruning" }
cuprate-wire = { path= "../../net/wire" }
cuprate-p2p-core = { path = "../p2p-core" }
tower = { workspace = true, features = ["util"] }
@ -29,3 +28,6 @@ borsh = { workspace = true, features = ["derive", "std"]}
cuprate-test-utils = {path = "../../test-utils"}
tokio = { workspace = true, features = ["rt-multi-thread", "macros"]}
[lints]
workspace = true

View file

@ -36,7 +36,7 @@ use crate::{
mod tests;
/// An entry in the connected list.
pub struct ConnectionPeerEntry<Z: NetworkZone> {
pub(crate) struct ConnectionPeerEntry<Z: NetworkZone> {
addr: Option<Z::Addr>,
id: u64,
handle: ConnectionHandle,
@ -109,14 +109,14 @@ impl<Z: BorshNetworkZone> AddressBook<Z> {
match handle.poll_unpin(cx) {
Poll::Pending => return,
Poll::Ready(Ok(Err(e))) => {
tracing::error!("Could not save peer list to disk, got error: {}", e)
tracing::error!("Could not save peer list to disk, got error: {e}");
}
Poll::Ready(Err(e)) => {
if e.is_panic() {
panic::resume_unwind(e.into_panic())
}
}
_ => (),
Poll::Ready(_) => (),
}
}
// the task is finished.
@ -144,6 +144,7 @@ impl<Z: BorshNetworkZone> AddressBook<Z> {
let mut internal_addr_disconnected = Vec::new();
let mut addrs_to_ban = Vec::new();
#[expect(clippy::iter_over_hash_type, reason = "ordering doesn't matter here")]
for (internal_addr, peer) in &mut self.connected_peers {
if let Some(time) = peer.handle.check_should_ban() {
match internal_addr {
@ -158,7 +159,7 @@ impl<Z: BorshNetworkZone> AddressBook<Z> {
}
}
for (addr, time) in addrs_to_ban.into_iter() {
for (addr, time) in addrs_to_ban {
self.ban_peer(addr, time);
}
@ -172,12 +173,7 @@ impl<Z: BorshNetworkZone> AddressBook<Z> {
.remove(&addr);
// If the number of peers with this ban id is 0, remove the whole set.
if self
.connected_peers_ban_id
.get(&addr.ban_id())
.unwrap()
.is_empty()
{
if self.connected_peers_ban_id[&addr.ban_id()].is_empty() {
self.connected_peers_ban_id.remove(&addr.ban_id());
}
// remove the peer from the anchor list.
@ -188,7 +184,7 @@ impl<Z: BorshNetworkZone> AddressBook<Z> {
fn ban_peer(&mut self, addr: Z::Addr, time: Duration) {
if self.banned_peers.contains_key(&addr.ban_id()) {
tracing::error!("Tried to ban peer twice, this shouldn't happen.")
tracing::error!("Tried to ban peer twice, this shouldn't happen.");
}
if let Some(connected_peers_with_ban_id) = self.connected_peers_ban_id.get(&addr.ban_id()) {
@ -242,10 +238,10 @@ impl<Z: BorshNetworkZone> AddressBook<Z> {
peer_list.retain_mut(|peer| {
peer.adr.make_canonical();
if !peer.adr.should_add_to_peer_list() {
false
} else {
if peer.adr.should_add_to_peer_list() {
!self.is_peer_banned(&peer.adr)
} else {
false
}
// TODO: check that rpc/p2p ports are not the same
});
@ -391,7 +387,7 @@ impl<Z: BorshNetworkZone> Service<AddressBookRequest<Z>> for AddressBook<Z> {
rpc_credits_per_hash,
},
)
.map(|_| AddressBookResponse::Ok),
.map(|()| AddressBookResponse::Ok),
AddressBookRequest::IncomingPeerList(peer_list) => {
self.handle_incoming_peer_list(peer_list);
Ok(AddressBookResponse::Ok)

View file

@ -109,7 +109,7 @@ async fn add_new_peer_already_connected() {
},
),
Err(AddressBookError::PeerAlreadyConnected)
)
);
}
#[tokio::test]
@ -143,5 +143,5 @@ async fn banned_peer_removed_from_peer_lists() {
.unwrap()
.into_inner(),
TestNetZoneAddr(1)
)
);
}

View file

@ -7,31 +7,31 @@ use cuprate_p2p_core::{services::ZoneSpecificPeerListEntryBase, NetZoneAddress,
use cuprate_pruning::{PruningSeed, CRYPTONOTE_MAX_BLOCK_HEIGHT};
#[cfg(test)]
pub mod tests;
pub(crate) mod tests;
/// A Peer list in the address book.
///
/// This could either be the white list or gray list.
#[derive(Debug)]
pub struct PeerList<Z: NetworkZone> {
pub(crate) struct PeerList<Z: NetworkZone> {
/// The peers with their peer data.
pub peers: IndexMap<Z::Addr, ZoneSpecificPeerListEntryBase<Z::Addr>>,
/// An index of pruning seed to address, so we can quickly grab peers with the blocks
/// we want.
///
/// Pruning seeds are sorted by first their log_stripes and then their stripe.
/// Pruning seeds are sorted first by their `log_stripes` and then by their stripe.
/// This means the first peers in this list will store more blocks than peers
/// later on. So when we need a peer with a certain block we look at the peers
/// storing more blocks first, then work our way to the peers storing less.
///
pruning_seeds: BTreeMap<PruningSeed, Vec<Z::Addr>>,
/// A hashmap linking ban_ids to addresses.
/// A hashmap linking `ban_ids` to addresses.
ban_ids: HashMap<<Z::Addr as NetZoneAddress>::BanID, Vec<Z::Addr>>,
}
impl<Z: NetworkZone> PeerList<Z> {
/// Creates a new peer list.
pub fn new(list: Vec<ZoneSpecificPeerListEntryBase<Z::Addr>>) -> PeerList<Z> {
pub(crate) fn new(list: Vec<ZoneSpecificPeerListEntryBase<Z::Addr>>) -> Self {
let mut peers = IndexMap::with_capacity(list.len());
let mut pruning_seeds = BTreeMap::new();
let mut ban_ids = HashMap::with_capacity(list.len());
@ -49,7 +49,7 @@ impl<Z: NetworkZone> PeerList<Z> {
peers.insert(peer.adr, peer);
}
PeerList {
Self {
peers,
pruning_seeds,
ban_ids,
@ -57,21 +57,20 @@ impl<Z: NetworkZone> PeerList<Z> {
}
/// Gets the length of the peer list
pub fn len(&self) -> usize {
pub(crate) fn len(&self) -> usize {
self.peers.len()
}
/// Adds a new peer to the peer list
pub fn add_new_peer(&mut self, peer: ZoneSpecificPeerListEntryBase<Z::Addr>) {
pub(crate) fn add_new_peer(&mut self, peer: ZoneSpecificPeerListEntryBase<Z::Addr>) {
if self.peers.insert(peer.adr, peer).is_none() {
// It's more clear with this
#[allow(clippy::unwrap_or_default)]
#[expect(clippy::unwrap_or_default, reason = "It's more clear with this")]
self.pruning_seeds
.entry(peer.pruning_seed)
.or_insert_with(Vec::new)
.push(peer.adr);
#[allow(clippy::unwrap_or_default)]
#[expect(clippy::unwrap_or_default)]
self.ban_ids
.entry(peer.adr.ban_id())
.or_insert_with(Vec::new)
@ -85,7 +84,7 @@ impl<Z: NetworkZone> PeerList<Z> {
/// list.
///
/// The given peer will be removed from the peer list.
pub fn take_random_peer<R: Rng>(
pub(crate) fn take_random_peer<R: Rng>(
&mut self,
r: &mut R,
block_needed: Option<usize>,
@ -127,7 +126,7 @@ impl<Z: NetworkZone> PeerList<Z> {
None
}
pub fn get_random_peers<R: Rng>(
pub(crate) fn get_random_peers<R: Rng>(
&self,
r: &mut R,
len: usize,
@ -142,7 +141,7 @@ impl<Z: NetworkZone> PeerList<Z> {
}
/// Returns a mutable reference to a peer.
pub fn get_peer_mut(
pub(crate) fn get_peer_mut(
&mut self,
peer: &Z::Addr,
) -> Option<&mut ZoneSpecificPeerListEntryBase<Z::Addr>> {
@ -150,7 +149,7 @@ impl<Z: NetworkZone> PeerList<Z> {
}
/// Returns true if the list contains this peer.
pub fn contains_peer(&self, peer: &Z::Addr) -> bool {
pub(crate) fn contains_peer(&self, peer: &Z::Addr) -> bool {
self.peers.contains_key(peer)
}
@ -189,11 +188,11 @@ impl<Z: NetworkZone> PeerList<Z> {
/// MUST NOT BE USED ALONE
fn remove_peer_from_all_idxs(&mut self, peer: &ZoneSpecificPeerListEntryBase<Z::Addr>) {
self.remove_peer_pruning_idx(peer);
self.remove_peer_ban_idx(peer)
self.remove_peer_ban_idx(peer);
}
/// Removes a peer from the peer list
pub fn remove_peer(
pub(crate) fn remove_peer(
&mut self,
peer: &Z::Addr,
) -> Option<ZoneSpecificPeerListEntryBase<Z::Addr>> {
@ -203,7 +202,7 @@ impl<Z: NetworkZone> PeerList<Z> {
}
/// Removes all peers with a specific ban id.
pub fn remove_peers_with_ban_id(&mut self, ban_id: &<Z::Addr as NetZoneAddress>::BanID) {
pub(crate) fn remove_peers_with_ban_id(&mut self, ban_id: &<Z::Addr as NetZoneAddress>::BanID) {
let Some(addresses) = self.ban_ids.get(ban_id) else {
// No peers to ban
return;
@ -217,8 +216,8 @@ impl<Z: NetworkZone> PeerList<Z> {
/// Tries to reduce the peer list to `new_len`.
///
/// This function could keep the list bigger than `new_len` if `must_keep_peers`'s length
/// is larger than new_len, in that case we will remove as much as we can.
pub fn reduce_list(&mut self, must_keep_peers: &HashSet<Z::Addr>, new_len: usize) {
/// is larger than `new_len`; in that case we will remove as much as we can.
pub(crate) fn reduce_list(&mut self, must_keep_peers: &HashSet<Z::Addr>, new_len: usize) {
if new_len >= self.len() {
return;
}

View file

@ -14,7 +14,7 @@ fn make_fake_peer(
) -> ZoneSpecificPeerListEntryBase<TestNetZoneAddr> {
ZoneSpecificPeerListEntryBase {
adr: TestNetZoneAddr(id),
id: id as u64,
id: u64::from(id),
last_seen: 0,
pruning_seed: PruningSeed::decompress(pruning_seed.unwrap_or(0)).unwrap(),
rpc_port: 0,
@ -22,14 +22,14 @@ fn make_fake_peer(
}
}
pub fn make_fake_peer_list(
pub(crate) fn make_fake_peer_list(
start_idx: u32,
numb_o_peers: u32,
) -> PeerList<TestNetZone<true, true, true>> {
let mut peer_list = Vec::with_capacity(numb_o_peers as usize);
for idx in start_idx..(start_idx + numb_o_peers) {
peer_list.push(make_fake_peer(idx, None))
peer_list.push(make_fake_peer(idx, None));
}
PeerList::new(peer_list)
@ -50,7 +50,7 @@ fn make_fake_peer_list_with_random_pruning_seeds(
} else {
r.gen_range(384..=391)
}),
))
));
}
PeerList::new(peer_list)
}
@ -70,7 +70,7 @@ fn peer_list_reduce_length() {
#[test]
fn peer_list_reduce_length_with_peers_we_need() {
let mut peer_list = make_fake_peer_list(0, 500);
let must_keep_peers = HashSet::from_iter(peer_list.peers.keys().copied());
let must_keep_peers = peer_list.peers.keys().copied().collect::<HashSet<_>>();
let target_len = 49;
@ -92,7 +92,7 @@ fn peer_list_remove_specific_peer() {
let peers = peer_list.peers;
for (_, addrs) in pruning_idxs {
addrs.iter().for_each(|adr| assert_ne!(adr, &peer.adr))
addrs.iter().for_each(|adr| assert_ne!(adr, &peer.adr));
}
assert!(!peers.contains_key(&peer.adr));
@ -104,13 +104,13 @@ fn peer_list_pruning_idxs_are_correct() {
let mut total_len = 0;
for (seed, list) in peer_list.pruning_seeds {
for peer in list.iter() {
for peer in &list {
assert_eq!(peer_list.peers.get(peer).unwrap().pruning_seed, seed);
total_len += 1;
}
}
assert_eq!(total_len, peer_list.peers.len())
assert_eq!(total_len, peer_list.peers.len());
}
#[test]
@ -122,11 +122,7 @@ fn peer_list_add_new_peer() {
assert_eq!(peer_list.len(), 11);
assert_eq!(peer_list.peers.get(&new_peer.adr), Some(&new_peer));
assert!(peer_list
.pruning_seeds
.get(&new_peer.pruning_seed)
.unwrap()
.contains(&new_peer.adr));
assert!(peer_list.pruning_seeds[&new_peer.pruning_seed].contains(&new_peer.adr));
}
#[test]
@ -164,7 +160,7 @@ fn peer_list_get_peer_with_block() {
assert!(peer
.pruning_seed
.get_next_unpruned_block(1, 1_000_000)
.is_ok())
.is_ok());
}
#[test]

View file

@ -1,3 +1,8 @@
#![expect(
single_use_lifetimes,
reason = "false positive on generated derive code on `SerPeerDataV1`"
)]
use std::fs;
use borsh::{from_slice, to_vec, BorshDeserialize, BorshSerialize};
@ -21,7 +26,7 @@ struct DeserPeerDataV1<A: NetZoneAddress> {
gray_list: Vec<ZoneSpecificPeerListEntryBase<A>>,
}
pub fn save_peers_to_disk<Z: BorshNetworkZone>(
pub(crate) fn save_peers_to_disk<Z: BorshNetworkZone>(
cfg: &AddressBookConfig,
white_list: &PeerList<Z>,
gray_list: &PeerList<Z>,
@ -38,7 +43,7 @@ pub fn save_peers_to_disk<Z: BorshNetworkZone>(
spawn_blocking(move || fs::write(&file, &data))
}
pub async fn read_peers_from_disk<Z: BorshNetworkZone>(
pub(crate) async fn read_peers_from_disk<Z: BorshNetworkZone>(
cfg: &AddressBookConfig,
) -> Result<
(

View file

@ -24,4 +24,7 @@ thiserror = { workspace = true }
[dev-dependencies]
tokio = { workspace = true, features = ["rt-multi-thread", "macros", "sync"] }
proptest = { workspace = true, features = ["default"] }
proptest = { workspace = true, features = ["default"] }
[lints]
workspace = true

View file

@ -8,7 +8,7 @@ use std::{
/// `(1 - ep)` is the probability that a transaction travels for `k` hops before a node's embargo timeout fires; this constant is `(1 - ep)`.
const EMBARGO_FULL_TRAVEL_PROBABILITY: f64 = 0.90;
/// The graph type to use for dandelion routing, the dandelion paper recommends [Graph::FourRegular].
/// The graph type to use for dandelion routing, the dandelion paper recommends [`Graph::FourRegular`].
///
/// The decision between line graphs and 4-regular graphs depends on the priorities of the system: if
/// linkability of transactions is a first-order concern then line graphs may be better; however, 4-regular graphs
@ -66,7 +66,7 @@ impl DandelionConfig {
/// Returns the number of outbound peers to use to stem transactions.
///
/// This value depends on the [`Graph`] chosen.
pub fn number_of_stems(&self) -> usize {
pub const fn number_of_stems(&self) -> usize {
match self.graph {
Graph::Line => 1,
Graph::FourRegular => 2,
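
A hedged sketch of how these two pieces fit together; the `DandelionConfig` field names other than `graph` are assumptions about the rest of the crate, not something shown in this diff:

```rust
use std::time::Duration;

use cuprate_dandelion_tower::{DandelionConfig, Graph};

fn main() {
    // Hypothetical configuration; only `graph` and `number_of_stems` appear in this diff.
    let config = DandelionConfig {
        time_between_hop: Duration::from_millis(175),
        epoch_duration: Duration::from_secs(600),
        fluff_probability: 0.12,
        graph: Graph::FourRegular,
    };

    // A 4-regular graph stems over two outbound peers, a line graph over one.
    assert_eq!(config.number_of_stems(), 2);
}
```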

View file

@ -26,7 +26,7 @@
//! The diffuse service should have a request of [`DiffuseRequest`](traits::DiffuseRequest) and its error
//! should be [`tower::BoxError`].
//!
//! ## Outbound Peer TryStream
//! ## Outbound Peer `TryStream`
//!
//! The outbound peer [`TryStream`](futures::TryStream) should provide a stream of randomly selected outbound
//! peers; these peers will then be used to route stem txs to.
@ -37,7 +37,7 @@
//! ## Peer Service
//!
//! This service represents a connection to an individual peer; this should be returned from the Outbound Peer
//! TryStream. This should immediately send the transaction to the peer when requested, it should _not_ set
//! `TryStream`. This should immediately send the transaction to the peer when requested; it should _not_ set
//! a timer.
//!
//! The peer service should have a request of [`StemRequest`](traits::StemRequest) and its error
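
As a rough illustration of the shape these docs describe, here is a stand-in diffuse service built with `tower::service_fn`; the `Tx` and `DiffuseRequest` types are local stand-ins for the real ones in this crate's `traits` module:

```rust
use tower::{service_fn, BoxError, Service, ServiceExt};

// Local stand-ins; the real request type is `traits::DiffuseRequest`.
struct Tx(&'static str);
struct DiffuseRequest<T>(T);

#[tokio::main]
async fn main() -> Result<(), BoxError> {
    // A diffuse service: takes a `DiffuseRequest`, errors with `tower::BoxError`.
    let mut diffuse = service_fn(|DiffuseRequest(tx): DiffuseRequest<Tx>| async move {
        println!("fluffing tx {}", tx.0);
        Ok::<(), BoxError>(())
    });

    diffuse.ready().await?.call(DiffuseRequest(Tx("tx blob"))).await?;
    Ok(())
}
```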

View file

@ -30,7 +30,7 @@ pub struct IncomingTxBuilder<const RS: bool, const DBS: bool, Tx, TxId, PeerId>
impl<Tx, TxId, PeerId> IncomingTxBuilder<false, false, Tx, TxId, PeerId> {
/// Creates a new [`IncomingTxBuilder`].
pub fn new(tx: Tx, tx_id: TxId) -> Self {
pub const fn new(tx: Tx, tx_id: TxId) -> Self {
Self {
tx,
tx_id,

View file

@ -88,9 +88,7 @@ where
.insert(peer.clone());
}
let state = from
.map(|from| TxState::Stem { from })
.unwrap_or(TxState::Local);
let state = from.map_or(TxState::Local, |from| TxState::Stem { from });
let fut = self
.dandelion_router
@ -280,13 +278,15 @@ where
};
if let Err(e) = self.handle_incoming_tx(tx, routing_state, tx_id).await {
#[expect(clippy::let_underscore_must_use, reason = "dropped receivers can be ignored")]
let _ = res_tx.send(());
tracing::error!("Error handling transaction in dandelion pool: {e}");
return;
}
let _ = res_tx.send(());
#[expect(clippy::let_underscore_must_use)]
let _ = res_tx.send(());
}
}
}

View file

@ -140,7 +140,7 @@ where
State::Stem
};
DandelionRouter {
Self {
outbound_peer_discover: Box::pin(outbound_peer_discover),
broadcast_svc,
current_state,
@ -198,7 +198,7 @@ where
fn stem_tx(
&mut self,
tx: Tx,
from: Id,
from: &Id,
) -> BoxFuture<'static, Result<State, DandelionRouterError>> {
if self.stem_peers.is_empty() {
tracing::debug!("Stem peers are empty, fluffing stem transaction.");
@ -216,7 +216,7 @@ where
});
let Some(peer) = self.stem_peers.get_mut(stem_route) else {
self.stem_routes.remove(&from);
self.stem_routes.remove(from);
continue;
};
@ -302,7 +302,7 @@ where
tracing::debug!(
parent: span,
"Peer returned an error on `poll_ready`: {e}, removing from router.",
)
);
})
.is_ok(),
Poll::Pending => {
@ -341,7 +341,7 @@ where
State::Stem => {
tracing::trace!(parent: &self.span, "Steming transaction");
self.stem_tx(req.tx, from)
self.stem_tx(req.tx, &from)
}
},
TxState::Local => {

View file

@ -12,7 +12,7 @@ use crate::{
OutboundPeer, State,
};
pub fn mock_discover_svc<Req: Send + 'static>() -> (
pub(crate) fn mock_discover_svc<Req: Send + 'static>() -> (
impl Stream<
Item = Result<
OutboundPeer<
@ -49,7 +49,7 @@ pub fn mock_discover_svc<Req: Send + 'static>() -> (
(discover, rx)
}
pub fn mock_broadcast_svc<Req: Send + 'static>() -> (
pub(crate) fn mock_broadcast_svc<Req: Send + 'static>() -> (
impl Service<
Req,
Future = impl Future<Output = Result<(), tower::BoxError>> + Send + 'static,
@ -70,8 +70,8 @@ pub fn mock_broadcast_svc<Req: Send + 'static>() -> (
)
}
#[allow(clippy::type_complexity)] // just test code.
pub fn mock_in_memory_backing_pool<
#[expect(clippy::type_complexity, reason = "just test code.")]
pub(crate) fn mock_in_memory_backing_pool<
Tx: Clone + Send + 'static,
TxID: Clone + Hash + Eq + Send + 'static,
>() -> (
@ -85,11 +85,11 @@ pub fn mock_in_memory_backing_pool<
Arc<std::sync::Mutex<HashMap<TxID, (Tx, State)>>>,
) {
let txs = Arc::new(std::sync::Mutex::new(HashMap::new()));
let txs_2 = txs.clone();
let txs_2 = Arc::clone(&txs);
(
service_fn(move |req: TxStoreRequest<TxID>| {
let txs = txs.clone();
let txs = Arc::clone(&txs);
async move {
match req {
TxStoreRequest::Get(tx_id) => {

View file

@ -39,5 +39,5 @@ async fn basic_functionality() {
// TODO: the DandelionPoolManager doesn't handle adding txs to the pool, add more tests here to test
// all functionality.
//assert!(pool.lock().unwrap().contains_key(&1));
assert!(broadcast_rx.try_recv().is_ok())
assert!(broadcast_rx.try_recv().is_ok());
}

View file

@ -54,8 +54,13 @@ impl NetworkZone for ClearNet {
const NAME: &'static str = "ClearNet";
const SEEDS: &'static [Self::Addr] = &[
ip_v4(37, 187, 74, 171, 18080),
ip_v4(176, 9, 0, 187, 18080),
ip_v4(88, 198, 163, 90, 18080),
ip_v4(66, 85, 74, 134, 18080),
ip_v4(51, 79, 173, 165, 18080),
ip_v4(192, 99, 8, 110, 18080),
ip_v4(37, 187, 74, 171, 18080),
ip_v4(77, 172, 183, 193, 18080),
];
const ALLOW_SYNC: bool = true;

View file

@ -3,6 +3,12 @@ use std::time::Duration;
/// The timeout we set on handshakes.
pub(crate) const HANDSHAKE_TIMEOUT: Duration = Duration::from_secs(20);
/// The timeout we set on receiving ping requests
pub(crate) const PING_REQUEST_TIMEOUT: Duration = Duration::from_secs(5);
/// The concurrency limit (maximum number of simultaneous tasks) we allow for handling ping requests
pub(crate) const PING_REQUEST_CONCURRENCY: usize = 2;
/// The maximum number of connections to make to seed nodes when we need peers.
pub(crate) const MAX_SEED_CONNECTIONS: usize = 3;

View file

@ -4,9 +4,10 @@
//! them to the handshaker service and then adds them to the client pool.
use std::{pin::pin, sync::Arc};
use futures::StreamExt;
use futures::{SinkExt, StreamExt};
use tokio::{
sync::Semaphore,
task::JoinSet,
time::{sleep, timeout},
};
use tower::{Service, ServiceExt};
@ -17,14 +18,22 @@ use cuprate_p2p_core::{
services::{AddressBookRequest, AddressBookResponse},
AddressBook, ConnectionDirection, NetworkZone,
};
use cuprate_wire::{
admin::{PingResponse, PING_OK_RESPONSE_STATUS_TEXT},
AdminRequestMessage, AdminResponseMessage, Message,
};
use crate::{
client_pool::ClientPool,
constants::{HANDSHAKE_TIMEOUT, INBOUND_CONNECTION_COOL_DOWN},
constants::{
HANDSHAKE_TIMEOUT, INBOUND_CONNECTION_COOL_DOWN, PING_REQUEST_CONCURRENCY,
PING_REQUEST_TIMEOUT,
},
P2PConfig,
};
/// Starts the inbound server.
/// Starts the inbound server. This function will listen to all incoming connections
/// and initiate a handshake if needed, after verifying the address isn't banned.
#[instrument(level = "warn", skip_all)]
pub async fn inbound_server<N, HS, A>(
client_pool: Arc<ClientPool<N>>,
@ -40,6 +49,10 @@ where
HS::Future: Send + 'static,
A: AddressBook<N>,
{
// Copy the `peer_id` for ping responses before `config` is borrowed (avoids a `clone()`).
let our_peer_id = config.basic_node_data().peer_id;
// Mandatory. Extract the server config from `P2PConfig`.
let Some(server_config) = config.server_config else {
tracing::warn!("No inbound server config provided, not listening for inbound connections.");
return Ok(());
@ -53,13 +66,18 @@ where
let mut listener = pin!(listener);
// Create the semaphore for limiting us to the maximum number of inbound connections.
let semaphore = Arc::new(Semaphore::new(config.max_inbound_connections));
// Create ping request handling JoinSet
let mut ping_join_set = JoinSet::new();
// Listen to incoming connections and extract necessary information.
while let Some(connection) = listener.next().await {
let Ok((addr, peer_stream, peer_sink)) = connection else {
let Ok((addr, mut peer_stream, mut peer_sink)) = connection else {
continue;
};
// If peer is banned, drop connection
if let Some(addr) = &addr {
let AddressBookResponse::IsPeerBanned(banned) = address_book
.ready()
@ -75,11 +93,13 @@ where
}
}
// Create a new internal id for new peers
let addr = match addr {
Some(addr) => InternalPeerID::KnownAddr(addr),
None => InternalPeerID::Unknown(rand::random()),
};
// If we're still below our maximum connection limit, initiate a handshake.
if let Ok(permit) = semaphore.clone().try_acquire_owned() {
tracing::debug!("Permit free for incoming connection, attempting handshake.");
@ -102,8 +122,39 @@ where
.instrument(Span::current()),
);
} else {
// Otherwise check if the node is simply pinging us.
tracing::debug!("No permit free for incoming connection.");
// TODO: listen for if the peer is just trying to ping us to see if we are reachable.
// We only handle 2 ping requests concurrently; otherwise we drop the connection immediately.
if ping_join_set.len() < PING_REQUEST_CONCURRENCY {
ping_join_set.spawn(
async move {
// Await the first message from the node. If it is a ping request we respond, otherwise we drop the connection.
let fut = timeout(PING_REQUEST_TIMEOUT, peer_stream.next());
// Ok if the timeout did not elapse -> Some if there is a message -> Ok if it has been decoded
if let Ok(Some(Ok(Message::Request(AdminRequestMessage::Ping)))) = fut.await
{
let response = peer_sink
.send(
Message::Response(AdminResponseMessage::Ping(PingResponse {
status: PING_OK_RESPONSE_STATUS_TEXT,
peer_id: our_peer_id,
}))
.into(),
)
.await;
if let Err(err) = response {
tracing::debug!(
"Unable to respond to ping request from peer ({addr}): {err}"
)
}
}
}
.instrument(Span::current()),
);
}
}
sleep(INBOUND_CONNECTION_COOL_DOWN).await;
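
For reference, the client half of the ping exchange handled above would look roughly like this; the sink/stream bounds and the equality check on `status` are assumptions based only on the types visible in this diff:

```rust
use futures::{Sink, SinkExt, Stream, StreamExt};

use cuprate_wire::{
    admin::{PingResponse, PING_OK_RESPONSE_STATUS_TEXT},
    AdminRequestMessage, AdminResponseMessage, Message,
};

/// Send a ping over an already-connected sink/stream pair and check that
/// the peer answers with the `PING OK` status.
async fn ping_peer<Snk, Strm, E>(mut peer_sink: Snk, mut peer_stream: Strm) -> bool
where
    Snk: Sink<Message> + Unpin,
    Strm: Stream<Item = Result<Message, E>> + Unpin,
{
    if peer_sink
        .send(Message::Request(AdminRequestMessage::Ping))
        .await
        .is_err()
    {
        return false;
    }

    match peer_stream.next().await {
        Some(Ok(Message::Response(AdminResponseMessage::Ping(PingResponse {
            status, ..
        })))) => status == PING_OK_RESPONSE_STATUS_TEXT,
        _ => false,
    }
}
```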

View file

@ -13,3 +13,6 @@ borsh = ["dep:borsh"]
thiserror = { workspace = true }
borsh = { workspace = true, features = ["derive", "std"], optional = true }
[lints]
workspace = true

View file

@ -71,7 +71,7 @@ impl PruningSeed {
///
/// See: [`DecompressedPruningSeed::new`]
pub fn new_pruned(stripe: u32, log_stripes: u32) -> Result<Self, PruningError> {
Ok(PruningSeed::Pruned(DecompressedPruningSeed::new(
Ok(Self::Pruned(DecompressedPruningSeed::new(
stripe,
log_stripes,
)?))
@ -81,9 +81,7 @@ impl PruningSeed {
///
/// An error means the pruning seed was invalid.
pub fn decompress(seed: u32) -> Result<Self, PruningError> {
Ok(DecompressedPruningSeed::decompress(seed)?
.map(PruningSeed::Pruned)
.unwrap_or(PruningSeed::NotPruned))
Ok(DecompressedPruningSeed::decompress(seed)?.map_or(Self::NotPruned, Self::Pruned))
}
/// Decompresses the seed, performing the same checks as [`PruningSeed::decompress`] and some more according to
@ -103,34 +101,34 @@ impl PruningSeed {
}
/// Compresses this pruning seed to a u32.
pub fn compress(&self) -> u32 {
pub const fn compress(&self) -> u32 {
match self {
PruningSeed::NotPruned => 0,
PruningSeed::Pruned(seed) => seed.compress(),
Self::NotPruned => 0,
Self::Pruned(seed) => seed.compress(),
}
}
/// Returns the `log_stripes` for this seed, if this seed is pruned otherwise [`None`] is returned.
pub fn get_log_stripes(&self) -> Option<u32> {
pub const fn get_log_stripes(&self) -> Option<u32> {
match self {
PruningSeed::NotPruned => None,
PruningSeed::Pruned(seed) => Some(seed.log_stripes),
Self::NotPruned => None,
Self::Pruned(seed) => Some(seed.log_stripes),
}
}
/// Returns the `stripe` for this seed, if this seed is pruned otherwise [`None`] is returned.
pub fn get_stripe(&self) -> Option<u32> {
pub const fn get_stripe(&self) -> Option<u32> {
match self {
PruningSeed::NotPruned => None,
PruningSeed::Pruned(seed) => Some(seed.stripe),
Self::NotPruned => None,
Self::Pruned(seed) => Some(seed.stripe),
}
}
/// Returns `true` if a peer with this pruning seed should have a non-pruned version of a block.
pub fn has_full_block(&self, height: usize, blockchain_height: usize) -> bool {
pub const fn has_full_block(&self, height: usize, blockchain_height: usize) -> bool {
match self {
PruningSeed::NotPruned => true,
PruningSeed::Pruned(seed) => seed.has_full_block(height, blockchain_height),
Self::NotPruned => true,
Self::Pruned(seed) => seed.has_full_block(height, blockchain_height),
}
}
@ -155,10 +153,8 @@ impl PruningSeed {
blockchain_height: usize,
) -> Result<Option<usize>, PruningError> {
Ok(match self {
PruningSeed::NotPruned => None,
PruningSeed::Pruned(seed) => {
seed.get_next_pruned_block(block_height, blockchain_height)?
}
Self::NotPruned => None,
Self::Pruned(seed) => seed.get_next_pruned_block(block_height, blockchain_height)?,
})
}
@ -181,10 +177,8 @@ impl PruningSeed {
blockchain_height: usize,
) -> Result<usize, PruningError> {
Ok(match self {
PruningSeed::NotPruned => block_height,
PruningSeed::Pruned(seed) => {
seed.get_next_unpruned_block(block_height, blockchain_height)?
}
Self::NotPruned => block_height,
Self::Pruned(seed) => seed.get_next_unpruned_block(block_height, blockchain_height)?,
})
}
}
@ -199,11 +193,11 @@ impl Ord for PruningSeed {
fn cmp(&self, other: &Self) -> Ordering {
match (self, other) {
// Make sure pruning seeds storing more blocks are greater.
(PruningSeed::NotPruned, PruningSeed::NotPruned) => Ordering::Equal,
(PruningSeed::NotPruned, PruningSeed::Pruned(_)) => Ordering::Greater,
(PruningSeed::Pruned(_), PruningSeed::NotPruned) => Ordering::Less,
(Self::NotPruned, Self::NotPruned) => Ordering::Equal,
(Self::NotPruned, Self::Pruned(_)) => Ordering::Greater,
(Self::Pruned(_), Self::NotPruned) => Ordering::Less,
(PruningSeed::Pruned(seed1), PruningSeed::Pruned(seed2)) => seed1.cmp(seed2),
(Self::Pruned(seed1), Self::Pruned(seed2)) => seed1.cmp(seed2),
}
}
}
@ -222,7 +216,7 @@ pub struct DecompressedPruningSeed {
log_stripes: u32,
/// The specific portion this peer keeps.
///
/// *MUST* be between 1..=2^log_stripes
/// *MUST* be between `1..=2^log_stripes`
stripe: u32,
}
@ -268,13 +262,13 @@ impl DecompressedPruningSeed {
/// a valid seed you currently MUST pass in a number 1 to 8 for `stripe`
/// and 3 for `log_stripes`.*
///
pub fn new(stripe: u32, log_stripes: u32) -> Result<Self, PruningError> {
pub const fn new(stripe: u32, log_stripes: u32) -> Result<Self, PruningError> {
if log_stripes > PRUNING_SEED_LOG_STRIPES_MASK {
Err(PruningError::LogStripesOutOfRange)
} else if !(stripe > 0 && stripe <= (1 << log_stripes)) {
Err(PruningError::StripeOutOfRange)
} else {
Ok(DecompressedPruningSeed {
Ok(Self {
log_stripes,
stripe,
})
@ -286,7 +280,7 @@ impl DecompressedPruningSeed {
/// Will return Ok(None) if the pruning seed means no pruning.
///
/// An error means the pruning seed was invalid.
pub fn decompress(seed: u32) -> Result<Option<Self>, PruningError> {
pub const fn decompress(seed: u32) -> Result<Option<Self>, PruningError> {
if seed == 0 {
// No pruning.
return Ok(None);
@ -299,20 +293,20 @@ impl DecompressedPruningSeed {
return Err(PruningError::StripeOutOfRange);
}
Ok(Some(DecompressedPruningSeed {
Ok(Some(Self {
log_stripes,
stripe,
}))
}
/// Compresses the pruning seed into a u32.
pub fn compress(&self) -> u32 {
pub const fn compress(&self) -> u32 {
(self.log_stripes << PRUNING_SEED_LOG_STRIPES_SHIFT)
| ((self.stripe - 1) << PRUNING_SEED_STRIPE_SHIFT)
}
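
A worked round-trip for the bit-packing above, assuming the usual Monero constants (`log_stripes` stored at shift 7, `stripe - 1` in the low bits); this also explains why the tests below sample seeds from `384..=391`:

```rust
use cuprate_pruning::{DecompressedPruningSeed, PruningSeed};

fn main() {
    // stripe = 1, log_stripes = 3  =>  (3 << 7) | (1 - 1) = 384.
    let seed = DecompressedPruningSeed::new(1, 3).unwrap();
    assert_eq!(seed.compress(), 384);

    // All eight valid stripes for log_stripes = 3 compress to 384..=391.
    for stripe in 1..=8 {
        let compressed = DecompressedPruningSeed::new(stripe, 3).unwrap().compress();
        assert_eq!(compressed, 384 + (stripe - 1));
    }

    // A compressed seed of `0` always means "not pruned".
    assert!(matches!(PruningSeed::decompress(0), Ok(PruningSeed::NotPruned)));
}
```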
/// Returns `true` if a peer with this pruning seed should have a non-pruned version of a block.
pub fn has_full_block(&self, height: usize, blockchain_height: usize) -> bool {
pub const fn has_full_block(&self, height: usize, blockchain_height: usize) -> bool {
match get_block_pruning_stripe(height, blockchain_height, self.log_stripes) {
Some(block_stripe) => self.stripe == block_stripe,
None => true,
@ -419,7 +413,7 @@ impl DecompressedPruningSeed {
// We can get the end of our "non-pruning" cycle by getting the next stripe's first un-pruned block height.
// So we calculate the next un-pruned block for the next stripe and return it as our next pruned block
let next_stripe = 1 + (self.stripe & ((1 << self.log_stripes) - 1));
let seed = DecompressedPruningSeed::new(next_stripe, self.log_stripes)
let seed = Self::new(next_stripe, self.log_stripes)
.expect("We just made sure this stripe is in range for this log_stripe");
let calculated_height = seed.get_next_unpruned_block(block_height, blockchain_height)?;
@ -433,7 +427,7 @@ impl DecompressedPruningSeed {
}
}
fn get_block_pruning_stripe(
const fn get_block_pruning_stripe(
block_height: usize,
blockchain_height: usize,
log_stripe: u32,
@ -441,9 +435,14 @@ fn get_block_pruning_stripe(
if block_height + CRYPTONOTE_PRUNING_TIP_BLOCKS >= blockchain_height {
None
} else {
#[expect(
clippy::cast_possible_truncation,
clippy::cast_sign_loss,
reason = "it's trivial to prove it's ok to us `as` here"
)]
Some(
(((block_height / CRYPTONOTE_PRUNING_STRIPE_SIZE) & ((1 << log_stripe) as usize - 1))
+ 1) as u32, // it's trivial to prove it's ok to us `as` here
+ 1) as u32,
)
}
}
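
Read concretely (assuming Monero's `CRYPTONOTE_PRUNING_STRIPE_SIZE` of 4096 and the 3-bit `log_stripes` used in the tests below), the stripe assignment cycles every 8 × 4096 blocks:

```rust
// A standalone restatement of the formula above with the constants inlined:
// stripe size 4096, log_stripes = 3.
fn stripe_of(block_height: usize) -> u32 {
    (((block_height / 4096) & ((1 << 3) - 1)) + 1) as u32
}

fn main() {
    assert_eq!(stripe_of(0), 1); // blocks 0..4096      -> stripe 1
    assert_eq!(stripe_of(4096), 2); // blocks 4096..8192   -> stripe 2
    assert_eq!(stripe_of(28_672), 8); // blocks 28672..32768 -> stripe 8
    assert_eq!(stripe_of(32_768), 1); // then the cycle repeats
}
```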
@ -483,16 +482,17 @@ mod tests {
#[test]
fn get_pruning_log_stripe() {
let all_valid_seeds = make_all_pruning_seeds();
for seed in all_valid_seeds.iter() {
assert_eq!(seed.get_log_stripes().unwrap(), 3)
for seed in &all_valid_seeds {
assert_eq!(seed.get_log_stripes().unwrap(), 3);
}
}
#[test]
fn get_pruning_stripe() {
let all_valid_seeds = make_all_pruning_seeds();
#[expect(clippy::cast_possible_truncation)]
for (i, seed) in all_valid_seeds.iter().enumerate() {
assert_eq!(seed.get_stripe().unwrap(), i as u32 + 1)
assert_eq!(seed.get_stripe().unwrap(), i as u32 + 1);
}
}
@ -554,7 +554,7 @@ mod tests {
assert_eq!(
seed.get_next_unpruned_block(0, blockchain_height).unwrap(),
i * 4096
)
);
}
for (i, seed) in all_valid_seeds.iter().enumerate() {
@ -562,7 +562,7 @@ mod tests {
seed.get_next_unpruned_block((i + 1) * 4096, blockchain_height)
.unwrap(),
i * 4096 + 32768
)
);
}
for (i, seed) in all_valid_seeds.iter().enumerate() {
@ -570,15 +570,15 @@ mod tests {
seed.get_next_unpruned_block((i + 8) * 4096, blockchain_height)
.unwrap(),
i * 4096 + 32768
)
);
}
for seed in all_valid_seeds.iter() {
for seed in &all_valid_seeds {
assert_eq!(
seed.get_next_unpruned_block(76437863 - 1, blockchain_height)
.unwrap(),
76437863 - 1
)
);
}
let zero_seed = PruningSeed::NotPruned;
@ -591,7 +591,7 @@ mod tests {
let seed = PruningSeed::decompress(384).unwrap();
// the next unpruned block is the first tip block
assert_eq!(seed.get_next_unpruned_block(5000, 11000).unwrap(), 5500)
assert_eq!(seed.get_next_unpruned_block(5000, 11000).unwrap(), 5500);
}
#[test]
@ -605,7 +605,7 @@ mod tests {
.unwrap()
.unwrap(),
0
)
);
}
for (i, seed) in all_valid_seeds.iter().enumerate() {
@ -614,7 +614,7 @@ mod tests {
.unwrap()
.unwrap(),
(i + 1) * 4096
)
);
}
for (i, seed) in all_valid_seeds.iter().enumerate() {
@ -623,15 +623,15 @@ mod tests {
.unwrap()
.unwrap(),
(i + 9) * 4096
)
);
}
for seed in all_valid_seeds.iter() {
for seed in &all_valid_seeds {
assert_eq!(
seed.get_next_pruned_block(76437863 - 1, blockchain_height)
.unwrap(),
None
)
);
}
let zero_seed = PruningSeed::NotPruned;
@ -644,6 +644,6 @@ mod tests {
let seed = PruningSeed::decompress(384).unwrap();
// there is no next pruned block
assert_eq!(seed.get_next_pruned_block(5000, 10000).unwrap(), None)
assert_eq!(seed.get_next_pruned_block(5000, 10000).unwrap(), None);
}
}

View file

@ -28,7 +28,6 @@ macro_rules! generate_endpoints_with_input {
),*) => { paste::paste! {
$(
/// TODO
#[allow(unused_mut)]
pub(crate) async fn $endpoint<H: RpcHandler>(
State(handler): State<H>,
mut request: Bytes,
@ -55,7 +54,6 @@ macro_rules! generate_endpoints_with_no_input {
),*) => { paste::paste! {
$(
/// TODO
#[allow(unused_mut)]
pub(crate) async fn $endpoint<H: RpcHandler>(
State(handler): State<H>,
) -> Result<Bytes, StatusCode> {

View file

@ -69,7 +69,6 @@ macro_rules! generate_router_builder {
/// .all()
/// .build();
/// ```
#[allow(clippy::struct_excessive_bools)]
#[derive(Clone)]
pub struct RouterBuilder<H: RpcHandler> {
router: Router<H>,

View file

@ -57,7 +57,7 @@ impl Service<JsonRpcRequest> for RpcHandlerDummy {
use cuprate_rpc_types::json::JsonRpcRequest as Req;
use cuprate_rpc_types::json::JsonRpcResponse as Resp;
#[allow(clippy::default_trait_access)]
#[expect(clippy::default_trait_access)]
let resp = match req {
Req::GetBlockCount(_) => Resp::GetBlockCount(Default::default()),
Req::OnGetBlockHash(_) => Resp::OnGetBlockHash(Default::default()),
@ -112,7 +112,7 @@ impl Service<BinRequest> for RpcHandlerDummy {
use cuprate_rpc_types::bin::BinRequest as Req;
use cuprate_rpc_types::bin::BinResponse as Resp;
#[allow(clippy::default_trait_access)]
#[expect(clippy::default_trait_access)]
let resp = match req {
Req::GetBlocks(_) => Resp::GetBlocks(Default::default()),
Req::GetBlocksByHeight(_) => Resp::GetBlocksByHeight(Default::default()),
@ -142,7 +142,7 @@ impl Service<OtherRequest> for RpcHandlerDummy {
use cuprate_rpc_types::other::OtherRequest as Req;
use cuprate_rpc_types::other::OtherResponse as Resp;
#[allow(clippy::default_trait_access)]
#[expect(clippy::default_trait_access)]
let resp = match req {
Req::GetHeight(_) => Resp::GetHeight(Default::default()),
Req::GetTransactions(_) => Resp::GetTransactions(Default::default()),

View file

@ -52,7 +52,7 @@ where
}
/// Tests an input JSON string matches an expected type `T`.
#[allow(clippy::needless_pass_by_value)] // serde signature
#[expect(clippy::needless_pass_by_value, reason = "serde signature")]
fn assert_de<T>(json: &'static str, expected: T)
where
T: DeserializeOwned + std::fmt::Debug + Clone + PartialEq,

View file

@ -138,7 +138,6 @@ define_request! {
)]
///
/// This response's variant depends upon [`PoolInfoExtent`].
#[allow(dead_code, missing_docs)]
#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub enum GetBlocksResponse {
@ -157,7 +156,6 @@ impl Default for GetBlocksResponse {
}
/// Data within [`GetBlocksResponse::PoolInfoNone`].
#[allow(dead_code, missing_docs)]
#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
#[derive(Clone, Default, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct GetBlocksResponsePoolInfoNone {
@ -183,7 +181,6 @@ epee_object! {
}
/// Data within [`GetBlocksResponse::PoolInfoIncremental`].
#[allow(dead_code, missing_docs)]
#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
#[derive(Clone, Default, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct GetBlocksResponsePoolInfoIncremental {
@ -215,7 +212,6 @@ epee_object! {
}
/// Data within [`GetBlocksResponse::PoolInfoFull`].
#[allow(dead_code, missing_docs)]
#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
#[derive(Clone, Default, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct GetBlocksResponsePoolInfoFull {
@ -248,7 +244,6 @@ epee_object! {
/// [`EpeeObjectBuilder`] for [`GetBlocksResponse`].
///
/// Not for public usage.
#[allow(dead_code, missing_docs)]
#[derive(Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash)]
#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
pub struct __GetBlocksResponseEpeeBuilder {
@ -354,7 +349,6 @@ impl EpeeObjectBuilder<GetBlocksResponse> for __GetBlocksResponseEpeeBuilder {
}
#[cfg(feature = "epee")]
#[allow(clippy::cognitive_complexity)]
impl EpeeObject for GetBlocksResponse {
type Builder = __GetBlocksResponseEpeeBuilder;
@ -397,7 +391,6 @@ impl EpeeObject for GetBlocksResponse {
/// See also: [`BinResponse`].
#[cfg_attr(feature = "serde", derive(Deserialize, Serialize))]
#[cfg_attr(feature = "serde", serde(untagged))]
#[allow(missing_docs)]
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub enum BinRequest {
GetBlocks(GetBlocksRequest),
@ -444,7 +437,6 @@ impl RpcCallValue for BinRequest {
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
#[cfg_attr(feature = "serde", derive(Deserialize, Serialize))]
#[cfg_attr(feature = "serde", serde(untagged))]
#[allow(missing_docs)]
pub enum BinResponse {
GetBlocks(GetBlocksResponse),
GetBlocksByHeight(GetBlocksByHeightResponse),

View file

@ -5,16 +5,16 @@
/// Returns `true` if the input `u` is equal to `0`.
#[inline]
#[allow(clippy::trivially_copy_pass_by_ref)] // serde needs `&`
#[allow(dead_code)] // TODO: see if needed after handlers.
#[expect(clippy::trivially_copy_pass_by_ref, reason = "serde signature")]
#[expect(dead_code, reason = "TODO: see if needed after handlers.")]
pub(crate) const fn is_zero(u: &u64) -> bool {
*u == 0
}
/// Returns `true` if the input `u` is equal to `1`.
#[inline]
#[allow(clippy::trivially_copy_pass_by_ref)] // serde needs `&`
#[allow(dead_code)] // TODO: see if needed after handlers.
#[expect(clippy::trivially_copy_pass_by_ref, reason = "serde signature")]
#[expect(dead_code, reason = "TODO: see if needed after handlers.")]
pub(crate) const fn is_one(u: &u64) -> bool {
*u == 1
}
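
These read like the usual `serde(skip_serializing_if = ...)` hooks, which require the `&u64` signature the lints above complain about; the wiring below is an assumption, since the handlers are not part of this diff:

```rust
use serde::Serialize;

// Hypothetical usage; `count` is omitted from the JSON when it is `0`.
#[derive(Serialize)]
struct Example {
    #[serde(skip_serializing_if = "is_zero")]
    count: u64,
}

// Redefined locally because the real helper is `pub(crate)`.
const fn is_zero(u: &u64) -> bool {
    *u == 0
}

fn main() {
    let json = serde_json::to_string(&Example { count: 0 }).unwrap();
    assert_eq!(json, "{}");
}
```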

View file

@ -1590,7 +1590,6 @@ define_request_and_response! {
feature = "serde",
serde(rename_all = "snake_case", tag = "method", content = "params")
)]
#[allow(missing_docs)]
pub enum JsonRpcRequest {
GetBlockCount(GetBlockCountRequest),
OnGetBlockHash(OnGetBlockHashRequest),
@ -1723,7 +1722,6 @@ impl RpcCallValue for JsonRpcRequest {
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
#[cfg_attr(feature = "serde", derive(Deserialize, Serialize))]
#[cfg_attr(feature = "serde", serde(untagged, rename_all = "snake_case"))]
#[allow(missing_docs)]
pub enum JsonRpcResponse {
GetBlockCount(GetBlockCountResponse),
OnGetBlockHash(OnGetBlockHashResponse),

View file

@ -11,6 +11,10 @@
unreachable_code,
reason = "TODO: remove after cuprated RpcHandler impl"
)]
#![allow(
clippy::allow_attributes,
reason = "macros (internal + serde) make this lint hard to satisfy"
)]
mod constants;
mod defaults;

View file

@ -94,6 +94,7 @@ macro_rules! define_request_and_response {
}
) => { paste::paste! {
$crate::macros::define_request! {
#[allow(dead_code, missing_docs, reason = "inside a macro")]
#[doc = $crate::macros::define_request_and_response_doc!(
"response" => [<$type_name Response>],
$monero_daemon_rpc_doc_link,
@ -118,8 +119,7 @@ macro_rules! define_request_and_response {
}
$crate::macros::define_response! {
#[allow(dead_code)]
#[allow(missing_docs)]
#[allow(dead_code, missing_docs, reason = "inside a macro")]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
#[derive(Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash)]
#[doc = $crate::macros::define_request_and_response_doc!(
@ -236,7 +236,7 @@ macro_rules! define_request {
)*
}
) => {
#[allow(dead_code, missing_docs)]
#[allow(dead_code, missing_docs, reason = "inside a macro")]
#[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
#[derive(Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash)]
$( #[$attr] )*

View file

@ -76,7 +76,6 @@ impl Default for Distribution {
}
/// Data within [`Distribution::Uncompressed`].
#[allow(dead_code, missing_docs)]
#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
#[derive(Clone, Default, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct DistributionUncompressed {
@ -99,7 +98,6 @@ epee_object! {
}
/// Data within [`Distribution::CompressedBinary`].
#[allow(dead_code, missing_docs)]
#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
#[derive(Clone, Default, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct DistributionCompressedBinary {
@ -132,7 +130,7 @@ epee_object! {
/// 1. Compresses the distribution array
/// 2. Serializes the compressed data
#[cfg(feature = "serde")]
#[allow(clippy::ptr_arg)]
#[expect(clippy::ptr_arg)]
fn serialize_distribution_as_compressed_data<S>(v: &Vec<u64>, s: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
@ -162,7 +160,6 @@ where
/// [`EpeeObjectBuilder`] for [`Distribution`].
///
/// Not for public usage.
#[allow(dead_code, missing_docs)]
#[derive(Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash)]
#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
pub struct __DistributionEpeeBuilder {

View file

@ -15,7 +15,7 @@
mod binary_string;
mod distribution;
mod key_image_spent_status;
#[allow(clippy::module_inception)]
#[expect(clippy::module_inception)]
mod misc;
mod pool_info_extent;
mod requested_info;

View file

@ -973,7 +973,6 @@ define_request_and_response! {
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
#[cfg_attr(feature = "serde", derive(Deserialize, Serialize))]
#[cfg_attr(feature = "serde", serde(untagged))]
#[allow(missing_docs)]
pub enum OtherRequest {
GetHeight(GetHeightRequest),
GetTransactions(GetTransactionsRequest),
@ -1092,7 +1091,6 @@ impl RpcCallValue for OtherRequest {
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
#[cfg_attr(feature = "serde", derive(Deserialize, Serialize))]
#[cfg_attr(feature = "serde", serde(untagged))]
#[allow(missing_docs)]
pub enum OtherResponse {
GetHeight(GetHeightResponse),
GetTransactions(GetTransactionsResponse),

View file

@ -15,21 +15,19 @@ default = ["heed", "service"]
heed = ["cuprate-database/heed"]
redb = ["cuprate-database/redb"]
redb-memory = ["cuprate-database/redb-memory"]
service = ["dep:thread_local", "dep:rayon"]
service = ["dep:thread_local", "dep:rayon", "cuprate-helper/thread"]
[dependencies]
# FIXME:
# We only need the `thread` feature if `service` is enabled.
# Figure out how to enable features of an already pulled in dependency conditionally.
cuprate-database = { path = "../database" }
cuprate-database-service = { path = "../service" }
cuprate-helper = { path = "../../helper", features = ["fs", "thread", "map"] }
cuprate-helper = { path = "../../helper", features = ["fs", "map"] }
cuprate-types = { path = "../../types", features = ["blockchain"] }
cuprate-pruning = { path = "../../pruning" }
bitflags = { workspace = true, features = ["std", "serde", "bytemuck"] }
bytemuck = { workspace = true, features = ["must_cast", "derive", "min_const_generics", "extern_crate_alloc"] }
curve25519-dalek = { workspace = true }
cuprate-pruning = { path = "../../pruning" }
rand = { workspace = true }
monero-serai = { workspace = true, features = ["std"] }
serde = { workspace = true, optional = true }

View file

@ -0,0 +1,337 @@
use bytemuck::TransparentWrapper;
use monero_serai::block::{Block, BlockHeader};
use cuprate_database::{DatabaseRo, DatabaseRw, RuntimeError, StorableVec};
use cuprate_helper::map::{combine_low_high_bits_to_u128, split_u128_into_low_high_bits};
use cuprate_types::{AltBlockInformation, Chain, ChainId, ExtendedBlockHeader, HardFork};
use crate::{
ops::{
alt_block::{add_alt_transaction_blob, get_alt_transaction, update_alt_chain_info},
block::get_block_info,
macros::doc_error,
},
tables::{Tables, TablesMut},
types::{AltBlockHeight, BlockHash, BlockHeight, CompactAltBlockInfo},
};
/// Flush all alt-block data from all the alt-block tables.
///
/// This function completely empties the alt block tables.
pub fn flush_alt_blocks<'a, E: cuprate_database::EnvInner<'a>>(
env_inner: &E,
tx_rw: &mut E::Rw<'_>,
) -> Result<(), RuntimeError> {
use crate::tables::{
AltBlockBlobs, AltBlockHeights, AltBlocksInfo, AltChainInfos, AltTransactionBlobs,
AltTransactionInfos,
};
env_inner.clear_db::<AltChainInfos>(tx_rw)?;
env_inner.clear_db::<AltBlockHeights>(tx_rw)?;
env_inner.clear_db::<AltBlocksInfo>(tx_rw)?;
env_inner.clear_db::<AltBlockBlobs>(tx_rw)?;
env_inner.clear_db::<AltTransactionBlobs>(tx_rw)?;
env_inner.clear_db::<AltTransactionInfos>(tx_rw)
}
/// Add a [`AltBlockInformation`] to the database.
///
/// This extracts all the data from the input block and
/// maps/adds them to the appropriate database tables.
///
#[doc = doc_error!()]
///
/// # Panics
/// This function will panic if:
/// - `alt_block.height` == `0`
/// - `alt_block.txs.len()` != `alt_block.block.transactions.len()`
///
pub fn add_alt_block(
alt_block: &AltBlockInformation,
tables: &mut impl TablesMut,
) -> Result<(), RuntimeError> {
let alt_block_height = AltBlockHeight {
chain_id: alt_block.chain_id.into(),
height: alt_block.height,
};
tables
.alt_block_heights_mut()
.put(&alt_block.block_hash, &alt_block_height)?;
update_alt_chain_info(&alt_block_height, &alt_block.block.header.previous, tables)?;
let (cumulative_difficulty_low, cumulative_difficulty_high) =
split_u128_into_low_high_bits(alt_block.cumulative_difficulty);
let alt_block_info = CompactAltBlockInfo {
block_hash: alt_block.block_hash,
pow_hash: alt_block.pow_hash,
height: alt_block.height,
weight: alt_block.weight,
long_term_weight: alt_block.long_term_weight,
cumulative_difficulty_low,
cumulative_difficulty_high,
};
tables
.alt_blocks_info_mut()
.put(&alt_block_height, &alt_block_info)?;
tables.alt_block_blobs_mut().put(
&alt_block_height,
StorableVec::wrap_ref(&alt_block.block_blob),
)?;
assert_eq!(alt_block.txs.len(), alt_block.block.transactions.len());
for tx in &alt_block.txs {
add_alt_transaction_blob(tx, tables)?;
}
Ok(())
}
/// Retrieves an [`AltBlockInformation`] from the database.
///
/// This function will only look at the blocks with the given [`AltBlockHeight::chain_id`], no others,
/// even if they are technically part of this chain.
#[doc = doc_error!()]
pub fn get_alt_block(
alt_block_height: &AltBlockHeight,
tables: &impl Tables,
) -> Result<AltBlockInformation, RuntimeError> {
let block_info = tables.alt_blocks_info().get(alt_block_height)?;
let block_blob = tables.alt_block_blobs().get(alt_block_height)?.0;
let block = Block::read(&mut block_blob.as_slice())?;
let txs = block
.transactions
.iter()
.map(|tx_hash| get_alt_transaction(tx_hash, tables))
.collect::<Result<_, RuntimeError>>()?;
Ok(AltBlockInformation {
block,
block_blob,
txs,
block_hash: block_info.block_hash,
pow_hash: block_info.pow_hash,
height: block_info.height,
weight: block_info.weight,
long_term_weight: block_info.long_term_weight,
cumulative_difficulty: combine_low_high_bits_to_u128(
block_info.cumulative_difficulty_low,
block_info.cumulative_difficulty_high,
),
chain_id: alt_block_height.chain_id.into(),
})
}
/// Retrieves the hash of the block at the given `block_height` on the alt chain with
/// the given [`ChainId`].
///
/// This function will get blocks from the whole chain: for example, if you were to ask for height
/// `0` with any [`ChainId`] (as long as that chain actually exists) you will get the main chain genesis.
///
#[doc = doc_error!()]
pub fn get_alt_block_hash(
block_height: &BlockHeight,
alt_chain: ChainId,
tables: &impl Tables,
) -> Result<BlockHash, RuntimeError> {
let alt_chains = tables.alt_chain_infos();
// First find what [`ChainId`] this block would be stored under.
let original_chain = {
let mut chain = alt_chain.into();
loop {
let chain_info = alt_chains.get(&chain)?;
if chain_info.common_ancestor_height < *block_height {
break Chain::Alt(chain.into());
}
match chain_info.parent_chain.into() {
Chain::Main => break Chain::Main,
Chain::Alt(alt_chain_id) => {
chain = alt_chain_id.into();
continue;
}
}
}
};
// Get the block hash.
match original_chain {
Chain::Main => {
get_block_info(block_height, tables.block_infos()).map(|info| info.block_hash)
}
Chain::Alt(chain_id) => tables
.alt_blocks_info()
.get(&AltBlockHeight {
chain_id: chain_id.into(),
height: *block_height,
})
.map(|info| info.block_hash),
}
}
/// Retrieves the [`ExtendedBlockHeader`] of the alt-block with an exact [`AltBlockHeight`].
///
/// This function will only look at the blocks with the given [`AltBlockHeight::chain_id`], no others,
/// even if they are technically part of this chain.
///
#[doc = doc_error!()]
pub fn get_alt_block_extended_header_from_height(
height: &AltBlockHeight,
table: &impl Tables,
) -> Result<ExtendedBlockHeader, RuntimeError> {
let block_info = table.alt_blocks_info().get(height)?;
let block_blob = table.alt_block_blobs().get(height)?.0;
let block_header = BlockHeader::read(&mut block_blob.as_slice())?;
Ok(ExtendedBlockHeader {
version: HardFork::from_version(block_header.hardfork_version)
.expect("Block in DB must have correct version"),
vote: block_header.hardfork_version,
timestamp: block_header.timestamp,
cumulative_difficulty: combine_low_high_bits_to_u128(
block_info.cumulative_difficulty_low,
block_info.cumulative_difficulty_high,
),
block_weight: block_info.weight,
long_term_weight: block_info.long_term_weight,
})
}
#[cfg(test)]
mod tests {
use std::num::NonZero;
use cuprate_database::{Env, EnvInner, TxRw};
use cuprate_test_utils::data::{BLOCK_V16_TX0, BLOCK_V1_TX2, BLOCK_V9_TX3};
use cuprate_types::{Chain, ChainId};
use crate::{
ops::{
alt_block::{
add_alt_block, flush_alt_blocks, get_alt_block,
get_alt_block_extended_header_from_height, get_alt_block_hash,
get_alt_chain_history_ranges,
},
block::{add_block, pop_block},
},
tables::{OpenTables, Tables},
tests::{assert_all_tables_are_empty, map_verified_block_to_alt, tmp_concrete_env},
types::AltBlockHeight,
};
#[expect(clippy::range_plus_one)]
#[test]
fn all_alt_blocks() {
let (env, _tmp) = tmp_concrete_env();
let env_inner = env.env_inner();
assert_all_tables_are_empty(&env);
let chain_id = ChainId(NonZero::new(1).unwrap());
// Add initial block.
{
let tx_rw = env_inner.tx_rw().unwrap();
let mut tables = env_inner.open_tables_mut(&tx_rw).unwrap();
let mut initial_block = BLOCK_V1_TX2.clone();
initial_block.height = 0;
add_block(&initial_block, &mut tables).unwrap();
drop(tables);
TxRw::commit(tx_rw).unwrap();
}
let alt_blocks = [
map_verified_block_to_alt(BLOCK_V9_TX3.clone(), chain_id),
map_verified_block_to_alt(BLOCK_V16_TX0.clone(), chain_id),
];
// Add alt-blocks
{
let tx_rw = env_inner.tx_rw().unwrap();
let mut tables = env_inner.open_tables_mut(&tx_rw).unwrap();
let mut prev_hash = BLOCK_V1_TX2.block_hash;
for (i, mut alt_block) in alt_blocks.into_iter().enumerate() {
let height = i + 1;
alt_block.height = height;
alt_block.block.header.previous = prev_hash;
alt_block.block_blob = alt_block.block.serialize();
add_alt_block(&alt_block, &mut tables).unwrap();
let alt_height = AltBlockHeight {
chain_id: chain_id.into(),
height,
};
let alt_block_2 = get_alt_block(&alt_height, &tables).unwrap();
assert_eq!(alt_block.block, alt_block_2.block);
let headers = get_alt_chain_history_ranges(
0..(height + 1),
chain_id,
tables.alt_chain_infos(),
)
.unwrap();
assert_eq!(headers.len(), 2);
assert_eq!(headers[1], (Chain::Main, 0..1));
assert_eq!(headers[0], (Chain::Alt(chain_id), 1..(height + 1)));
prev_hash = alt_block.block_hash;
let header =
get_alt_block_extended_header_from_height(&alt_height, &tables).unwrap();
assert_eq!(header.timestamp, alt_block.block.header.timestamp);
assert_eq!(header.block_weight, alt_block.weight);
assert_eq!(header.long_term_weight, alt_block.long_term_weight);
assert_eq!(
header.cumulative_difficulty,
alt_block.cumulative_difficulty
);
assert_eq!(
header.version.as_u8(),
alt_block.block.header.hardfork_version
);
assert_eq!(header.vote, alt_block.block.header.hardfork_signal);
let block_hash = get_alt_block_hash(&height, chain_id, &tables).unwrap();
assert_eq!(block_hash, alt_block.block_hash);
}
drop(tables);
TxRw::commit(tx_rw).unwrap();
}
{
let mut tx_rw = env_inner.tx_rw().unwrap();
flush_alt_blocks(&env_inner, &mut tx_rw).unwrap();
let mut tables = env_inner.open_tables_mut(&tx_rw).unwrap();
pop_block(None, &mut tables).unwrap();
drop(tables);
TxRw::commit(tx_rw).unwrap();
}
assert_all_tables_are_empty(&env);
}
}

View file

@ -0,0 +1,117 @@
use std::cmp::{max, min};
use cuprate_database::{DatabaseRo, DatabaseRw, RuntimeError};
use cuprate_types::{Chain, ChainId};
use crate::{
ops::macros::{doc_add_alt_block_inner_invariant, doc_error},
tables::{AltChainInfos, TablesMut},
types::{AltBlockHeight, AltChainInfo, BlockHash, BlockHeight},
};
/// Updates the [`AltChainInfo`] with information on a new alt-block.
///
#[doc = doc_add_alt_block_inner_invariant!()]
#[doc = doc_error!()]
///
/// # Panics
///
/// This will panic if [`AltBlockHeight::height`] == `0`.
pub fn update_alt_chain_info(
alt_block_height: &AltBlockHeight,
prev_hash: &BlockHash,
tables: &mut impl TablesMut,
) -> Result<(), RuntimeError> {
let parent_chain = match tables.alt_block_heights().get(prev_hash) {
Ok(alt_parent_height) => Chain::Alt(alt_parent_height.chain_id.into()),
Err(RuntimeError::KeyNotFound) => Chain::Main,
Err(e) => return Err(e),
};
// Try to update the info if one exists for this chain.
let update = tables
.alt_chain_infos_mut()
.update(&alt_block_height.chain_id, |mut info| {
if info.chain_height < alt_block_height.height + 1 {
// If the chain height is increasing we only need to update the chain height.
info.chain_height = alt_block_height.height + 1;
} else {
// If the chain height is not increasing we are popping blocks and need to update the
// split point.
info.common_ancestor_height = alt_block_height.height.checked_sub(1).unwrap();
info.parent_chain = parent_chain.into();
}
info.chain_height = alt_block_height.height + 1;
Some(info)
});
match update {
Ok(()) => return Ok(()),
Err(RuntimeError::KeyNotFound) => (),
Err(e) => return Err(e),
}
// If one doesn't already exist, add it.
tables.alt_chain_infos_mut().put(
&alt_block_height.chain_id,
&AltChainInfo {
parent_chain: parent_chain.into(),
common_ancestor_height: alt_block_height.height.checked_sub(1).unwrap(),
chain_height: alt_block_height.height + 1,
},
)
}
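
Concretely, with hypothetical heights: if `ChainID(X)` currently has `chain_height = 11` and this is called for a new alt-block at height 11, the first branch runs and only `chain_height` moves to 12. If instead it is called for height 5, blocks must have been popped since the stored tip, so the split point is rewritten: `common_ancestor_height` becomes 4, `parent_chain` is refreshed, and the unconditional assignment leaves `chain_height` at 6.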
/// Get the height history of an alt-chain in reverse chronological order.
///
/// Height history is a list of height ranges with the corresponding [`Chain`] they are stored under.
/// For example, if your range goes from height `0`, the last entry in the list will be [`Chain::Main`]
/// up to the height where the first split occurs.
#[doc = doc_error!()]
pub fn get_alt_chain_history_ranges(
range: std::ops::Range<BlockHeight>,
alt_chain: ChainId,
alt_chain_infos: &impl DatabaseRo<AltChainInfos>,
) -> Result<Vec<(Chain, std::ops::Range<BlockHeight>)>, RuntimeError> {
let mut ranges = Vec::with_capacity(5);
let mut i = range.end;
let mut current_chain_id = alt_chain.into();
while i > range.start {
let chain_info = alt_chain_infos.get(&current_chain_id)?;
let start_height = max(range.start, chain_info.common_ancestor_height + 1);
let end_height = min(i, chain_info.chain_height);
ranges.push((
Chain::Alt(current_chain_id.into()),
start_height..end_height,
));
i = chain_info.common_ancestor_height + 1;
match chain_info.parent_chain.into() {
Chain::Main => {
ranges.push((Chain::Main, range.start..i));
break;
}
Chain::Alt(alt_chain_id) => {
let alt_chain_id = alt_chain_id.into();
// This shouldn't be possible to hit; however, in a test with custom (invalid) block data
// this caused an infinite loop.
if alt_chain_id == current_chain_id {
return Err(RuntimeError::Io(std::io::Error::other(
"Loop detected in ChainIDs, invalid alt chain.",
)));
}
current_chain_id = alt_chain_id;
continue;
}
}
}
Ok(ranges)
}

View file

@ -0,0 +1,58 @@
//! Alternative Block/Chain Ops
//!
//! Alternative chains are chains that potentially have more proof-of-work than the main-chain;
//! we track them so that we can potentially re-org to them.
//!
//! Cuprate uses an ID system for alt-chains. When a split is made from the main-chain we generate
//! a random [`ChainID`](cuprate_types::ChainId) and assign it to the chain:
//!
//! ```text
//! |
//! |
//! | split
//! |-------------
//! | |
//! | |
//! \|/ \|/
//! main-chain ChainID(X)
//! ```
//!
//! In that example, if we receive an alt-block which immediately follows the top block of `ChainID(X)`,
//! then that block will also be stored under `ChainID(X)`. However, if it follows any other block of `ChainID(X)`,
//! we will split into a chain with a different ID:
//!
//! ```text
//! |
//! |
//! | split
//! |-------------
//! | | split
//! | |-------------|
//! | | |
//! | | |
//! | | |
//! \|/ \|/ \|/
//! main-chain ChainID(X) ChainID(Z)
//! ```
//!
//! As you can see, getting all the alt-blocks in `ChainID(Z)` now includes some blocks from `ChainID(X)` as well.
//! [`get_alt_chain_history_ranges`] covers this: it is the method that returns the ranges of heights needed from each [`ChainID`](cuprate_types::ChainId)
//! to collect all the alt-blocks in a given [`ChainID`](cuprate_types::ChainId).
//!
//! Although this should be kept in mind as a possibility, because Cuprate's block downloader will only track a single chain it is
//! unlikely that we will be tracking [`ChainID`](cuprate_types::ChainId)s that don't immediately connect to the main-chain.
//!
//! ## Why not use the block's `previous` field?
//!
//! Although that would be easier, it makes getting a range of blocks extremely slow: to verify blocks we have to
//! build the weight cache, which needs roughly 100,000 block headers, and this cost is too high.
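
A hedged usage sketch mirroring the `all_alt_blocks` test earlier in this diff (crate-external paths assumed): with alt-blocks at heights 1 and 2 on one `ChainId` on top of the main-chain genesis, the history of `0..3` splits into an alt range plus the main-chain prefix.

```rust
use cuprate_blockchain::{ops::alt_block::get_alt_chain_history_ranges, tables::Tables};
use cuprate_database::RuntimeError;
use cuprate_types::{Chain, ChainId};

fn history_example(tables: &impl Tables, chain_id: ChainId) -> Result<(), RuntimeError> {
    let ranges = get_alt_chain_history_ranges(0..3, chain_id, tables.alt_chain_infos())?;

    // Reverse chronological order: the alt-chain first, the main-chain prefix last.
    assert_eq!(ranges[0], (Chain::Alt(chain_id), 1..3));
    assert_eq!(ranges[1], (Chain::Main, 0..1));
    Ok(())
}
```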
mod block;
mod chain;
mod tx;
pub use block::{
add_alt_block, flush_alt_blocks, get_alt_block, get_alt_block_extended_header_from_height,
get_alt_block_hash,
};
pub use chain::{get_alt_chain_history_ranges, update_alt_chain_info};
pub use tx::{add_alt_transaction_blob, get_alt_transaction};

View file

@ -0,0 +1,76 @@
use bytemuck::TransparentWrapper;
use monero_serai::transaction::Transaction;
use cuprate_database::{DatabaseRo, DatabaseRw, RuntimeError, StorableVec};
use cuprate_types::VerifiedTransactionInformation;
use crate::{
ops::macros::{doc_add_alt_block_inner_invariant, doc_error},
tables::{Tables, TablesMut},
types::{AltTransactionInfo, TxHash},
};
/// Adds a [`VerifiedTransactionInformation`] from an alt-block
/// if it is not already in the DB.
///
/// If the transaction is in the main-chain this function will still fill in the
/// [`AltTransactionInfos`](crate::tables::AltTransactionInfos) table, as that
/// table holds data which we don't keep around for main-chain txs.
///
#[doc = doc_add_alt_block_inner_invariant!()]
#[doc = doc_error!()]
pub fn add_alt_transaction_blob(
tx: &VerifiedTransactionInformation,
tables: &mut impl TablesMut,
) -> Result<(), RuntimeError> {
tables.alt_transaction_infos_mut().put(
&tx.tx_hash,
&AltTransactionInfo {
tx_weight: tx.tx_weight,
fee: tx.fee,
tx_hash: tx.tx_hash,
},
)?;
if tables.tx_ids().get(&tx.tx_hash).is_ok()
|| tables.alt_transaction_blobs().get(&tx.tx_hash).is_ok()
{
return Ok(());
}
tables
.alt_transaction_blobs_mut()
.put(&tx.tx_hash, StorableVec::wrap_ref(&tx.tx_blob))?;
Ok(())
}
/// Retrieve a [`VerifiedTransactionInformation`] from the database.
///
#[doc = doc_error!()]
pub fn get_alt_transaction(
tx_hash: &TxHash,
tables: &impl Tables,
) -> Result<VerifiedTransactionInformation, RuntimeError> {
let tx_info = tables.alt_transaction_infos().get(tx_hash)?;
let tx_blob = match tables.alt_transaction_blobs().get(tx_hash) {
Ok(blob) => blob.0,
Err(RuntimeError::KeyNotFound) => {
let tx_id = tables.tx_ids().get(tx_hash)?;
let blob = tables.tx_blobs().get(&tx_id)?;
blob.0
}
Err(e) => return Err(e),
};
Ok(VerifiedTransactionInformation {
tx: Transaction::read(&mut tx_blob.as_slice()).unwrap(),
tx_blob,
tx_weight: tx_info.tx_weight,
fee: tx_info.fee,
tx_hash: tx_info.tx_hash,
})
}
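
Because of the de-duplication above (an `AltTransactionInfos` row for every alt transaction, a blob only when the transaction is absent from the main-chain), reads need the fallback path seen in `get_alt_transaction`. A small in-memory model of just that lookup, with hypothetical maps standing in for the `AltTransactionBlobs`, `TxIds`, and `TxBlobs` tables:

```rust
use std::collections::HashMap;

type TxHash = [u8; 32];
type TxId = u64;

// Hypothetical in-memory stand-ins for the DB tables; illustration only.
fn lookup_tx_blob<'a>(
    hash: &TxHash,
    alt_blobs: &'a HashMap<TxHash, Vec<u8>>, // AltTransactionBlobs
    tx_ids: &HashMap<TxHash, TxId>,          // TxIds (main-chain)
    tx_blobs: &'a HashMap<TxId, Vec<u8>>,    // TxBlobs (main-chain)
) -> Option<&'a Vec<u8>> {
    // Prefer the alt-specific blob; fall back to the main-chain copy,
    // mirroring the `KeyNotFound` branch in `get_alt_transaction`.
    alt_blobs
        .get(hash)
        .or_else(|| tx_blobs.get(tx_ids.get(hash)?))
}

fn main() {
    let hash = [1; 32];
    let alt_blobs = HashMap::new();
    let tx_ids = HashMap::from([(hash, 7u64)]);
    let tx_blobs = HashMap::from([(7u64, vec![0xAB])]);
    // The blob is found via the main-chain tables.
    assert_eq!(lookup_tx_blob(&hash, &alt_blobs, &tx_ids, &tx_blobs), Some(&vec![0xAB]));
}
```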


@ -2,16 +2,26 @@
//---------------------------------------------------------------------------------------------------- Import
use bytemuck::TransparentWrapper;
use monero_serai::block::Block;
use monero_serai::{
block::{Block, BlockHeader},
transaction::Transaction,
};
use cuprate_database::{
RuntimeError, StorableVec, {DatabaseRo, DatabaseRw},
};
use cuprate_helper::map::{combine_low_high_bits_to_u128, split_u128_into_low_high_bits};
use cuprate_types::{ExtendedBlockHeader, HardFork, VerifiedBlockInformation};
use cuprate_helper::{
map::{combine_low_high_bits_to_u128, split_u128_into_low_high_bits},
tx::tx_fee,
};
use cuprate_types::{
AltBlockInformation, ChainId, ExtendedBlockHeader, HardFork, VerifiedBlockInformation,
VerifiedTransactionInformation,
};
use crate::{
ops::{
alt_block,
blockchain::{chain_height, cumulative_generated_coins},
macros::doc_error,
output::get_rct_num_outputs,
@ -35,11 +45,6 @@ use super::blockchain::top_block_height;
/// This function will panic if:
/// - `block.height > u32::MAX` (not normally possible)
/// - `block.height` != [`chain_height`]
///
/// # Already exists
/// This function will operate normally even if `block` already
/// exists, i.e., this function will not return `Err` even if you
/// call this function infinitely with the same block.
// no inline, too big.
pub fn add_block(
block: &VerifiedBlockInformation,
@ -76,10 +81,10 @@ pub fn add_block(
//------------------------------------------------------ Transaction / Outputs / Key Images
// Add the miner transaction first.
{
let mining_tx_index = {
let tx = &block.block.miner_transaction;
add_tx(tx, &tx.serialize(), &tx.hash(), &chain_height, tables)?;
}
add_tx(tx, &tx.serialize(), &tx.hash(), &chain_height, tables)?
};
for tx in &block.txs {
add_tx(&tx.tx, &tx.tx_blob, &tx.tx_hash, &chain_height, tables)?;
@ -91,9 +96,10 @@ pub fn add_block(
// RCT output count needs account for _this_ block's outputs.
let cumulative_rct_outs = get_rct_num_outputs(tables.rct_outputs())?;
// `saturating_add` is used here as cumulative generated coins overflows due to tail emission.
let cumulative_generated_coins =
cumulative_generated_coins(&block.height.saturating_sub(1), tables.block_infos())?
+ block.generated_coins;
.saturating_add(block.generated_coins);
let (cumulative_difficulty_low, cumulative_difficulty_high) =
split_u128_into_low_high_bits(block.cumulative_difficulty);
@ -108,16 +114,23 @@ pub fn add_block(
cumulative_rct_outs,
timestamp: block.block.header.timestamp,
block_hash: block.block_hash,
// INVARIANT: #[cfg] @ lib.rs asserts `usize == u64`
weight: block.weight as u64,
long_term_weight: block.long_term_weight as u64,
weight: block.weight,
long_term_weight: block.long_term_weight,
mining_tx_index,
},
)?;
// Block blobs.
tables
.block_blobs_mut()
.put(&block.height, StorableVec::wrap_ref(&block.block_blob))?;
// Block header blob.
tables.block_header_blobs_mut().put(
&block.height,
StorableVec::wrap_ref(&block.block.header.serialize()),
)?;
// Block transaction hashes
tables.block_txs_hashes_mut().put(
&block.height,
StorableVec::wrap_ref(&block.block.transactions),
)?;
// Block heights.
tables
@ -131,37 +144,87 @@ pub fn add_block(
/// Remove the top/latest block from the database.
///
/// The removed block's data is returned.
///
/// If a [`ChainId`] is specified the popped block will be added to the alt block tables under
/// that [`ChainId`]. Otherwise, the block will be completely removed from the DB.
#[doc = doc_error!()]
///
/// In `pop_block()`'s case, [`RuntimeError::KeyNotFound`]
/// will be returned if there are no blocks left.
// no inline, too big
pub fn pop_block(
move_to_alt_chain: Option<ChainId>,
tables: &mut impl TablesMut,
) -> Result<(BlockHeight, BlockHash, Block), RuntimeError> {
//------------------------------------------------------ Block Info
// Remove block data from tables.
let (block_height, block_hash) = {
let (block_height, block_info) = tables.block_infos_mut().pop_last()?;
(block_height, block_info.block_hash)
};
let (block_height, block_info) = tables.block_infos_mut().pop_last()?;
// Block heights.
tables.block_heights_mut().delete(&block_hash)?;
tables.block_heights_mut().delete(&block_info.block_hash)?;
// Block blobs.
// We deserialize the block blob into a `Block`, such
// that we can remove the associated transactions later.
let block_blob = tables.block_blobs_mut().take(&block_height)?.0;
let block = Block::read(&mut block_blob.as_slice())?;
//
// We deserialize the block header blob and mining transaction blob
// to form a `Block`, such that we can remove the associated transactions
// later.
let block_header = tables.block_header_blobs_mut().take(&block_height)?.0;
let block_txs_hashes = tables.block_txs_hashes_mut().take(&block_height)?.0;
let miner_transaction = tables.tx_blobs().get(&block_info.mining_tx_index)?.0;
let block = Block {
header: BlockHeader::read(&mut block_header.as_slice())?,
miner_transaction: Transaction::read(&mut miner_transaction.as_slice())?,
transactions: block_txs_hashes,
};
//------------------------------------------------------ Transaction / Outputs / Key Images
remove_tx(&block.miner_transaction.hash(), tables)?;
for tx_hash in &block.transactions {
remove_tx(tx_hash, tables)?;
let remove_tx_iter = block.transactions.iter().map(|tx_hash| {
let (_, tx) = remove_tx(tx_hash, tables)?;
Ok::<_, RuntimeError>(tx)
});
if let Some(chain_id) = move_to_alt_chain {
let txs = remove_tx_iter
.map(|result| {
let tx = result?;
Ok(VerifiedTransactionInformation {
tx_weight: tx.weight(),
tx_blob: tx.serialize(),
tx_hash: tx.hash(),
fee: tx_fee(&tx),
tx,
})
})
.collect::<Result<Vec<VerifiedTransactionInformation>, RuntimeError>>()?;
alt_block::add_alt_block(
&AltBlockInformation {
block: block.clone(),
block_blob: block.serialize(),
txs,
block_hash: block_info.block_hash,
// We know the PoW is valid for this block, so just set a hash that will always verify as valid.
pow_hash: [0; 32],
height: block_height,
weight: block_info.weight,
long_term_weight: block_info.long_term_weight,
cumulative_difficulty: combine_low_high_bits_to_u128(
block_info.cumulative_difficulty_low,
block_info.cumulative_difficulty_high,
),
chain_id,
},
tables,
)?;
} else {
for result in remove_tx_iter {
drop(result?);
}
}
Ok((block_height, block_hash, block))
Ok((block_height, block_info.block_hash, block))
}
//---------------------------------------------------------------------------------------------------- `get_block_*`
@ -231,31 +294,32 @@ pub fn get_block_extended_header(
/// Same as [`get_block_extended_header`] but with a [`BlockHeight`].
#[doc = doc_error!()]
#[allow(clippy::missing_panics_doc)] // The panic is only possible with a corrupt DB
#[expect(
clippy::missing_panics_doc,
reason = "The panic is only possible with a corrupt DB"
)]
#[inline]
pub fn get_block_extended_header_from_height(
block_height: &BlockHeight,
tables: &impl Tables,
) -> Result<ExtendedBlockHeader, RuntimeError> {
let block_info = tables.block_infos().get(block_height)?;
let block_blob = tables.block_blobs().get(block_height)?.0;
let block = Block::read(&mut block_blob.as_slice())?;
let block_header_blob = tables.block_header_blobs().get(block_height)?.0;
let block_header = BlockHeader::read(&mut block_header_blob.as_slice())?;
let cumulative_difficulty = combine_low_high_bits_to_u128(
block_info.cumulative_difficulty_low,
block_info.cumulative_difficulty_high,
);
// INVARIANT: #[cfg] @ lib.rs asserts `usize == u64`
#[allow(clippy::cast_possible_truncation)]
Ok(ExtendedBlockHeader {
cumulative_difficulty,
version: HardFork::from_version(block.header.hardfork_version)
version: HardFork::from_version(block_header.hardfork_version)
.expect("Stored block must have a valid hard-fork"),
vote: block.header.hardfork_signal,
timestamp: block.header.timestamp,
block_weight: block_info.weight as usize,
long_term_weight: block_info.long_term_weight as usize,
block_weight: block_info.weight,
long_term_weight: block_info.long_term_weight,
height: *block_height as u64,
})
}
@ -309,25 +373,21 @@ pub fn block_exists(
//---------------------------------------------------------------------------------------------------- Tests
#[cfg(test)]
#[allow(
clippy::significant_drop_tightening,
clippy::cognitive_complexity,
clippy::too_many_lines
)]
#[expect(clippy::too_many_lines)]
mod test {
use pretty_assertions::assert_eq;
use cuprate_database::{Env, EnvInner, TxRw};
use cuprate_test_utils::data::{BLOCK_V16_TX0, BLOCK_V1_TX2, BLOCK_V9_TX3};
use super::*;
use crate::{
ops::tx::{get_tx, tx_exists},
tables::OpenTables,
tests::{assert_all_tables_are_empty, tmp_concrete_env, AssertTableLen},
};
use super::*;
/// Tests all above block functions.
///
/// Note that this doesn't test the correctness of values added, as the
@ -379,7 +439,8 @@ mod test {
// Assert only the proper tables were added to.
AssertTableLen {
block_infos: 3,
block_blobs: 3,
block_header_blobs: 3,
block_txs_hashes: 3,
block_heights: 3,
key_images: 69,
num_outputs: 41,
@ -462,7 +523,8 @@ mod test {
for block_hash in block_hashes.into_iter().rev() {
println!("pop_block(): block_hash: {}", hex::encode(block_hash));
let (_popped_height, popped_hash, _popped_block) = pop_block(&mut tables).unwrap();
let (_popped_height, popped_hash, _popped_block) =
pop_block(None, &mut tables).unwrap();
assert_eq!(block_hash, popped_hash);


@ -26,7 +26,7 @@ use crate::{
pub fn chain_height(
table_block_heights: &impl DatabaseRo<BlockHeights>,
) -> Result<BlockHeight, RuntimeError> {
#[allow(clippy::cast_possible_truncation)] // we enforce 64-bit
#[expect(clippy::cast_possible_truncation, reason = "we enforce 64-bit")]
table_block_heights.len().map(|height| height as usize)
}
@ -49,7 +49,7 @@ pub fn top_block_height(
) -> Result<BlockHeight, RuntimeError> {
match table_block_heights.len()? {
0 => Err(RuntimeError::KeyNotFound),
#[allow(clippy::cast_possible_truncation)] // we enforce 64-bit
#[expect(clippy::cast_possible_truncation, reason = "we enforce 64-bit")]
height => Ok(height as usize - 1),
}
}
@ -148,7 +148,8 @@ mod test {
// Assert reads are correct.
AssertTableLen {
block_infos: 3,
block_blobs: 3,
block_header_blobs: 3,
block_txs_hashes: 3,
block_heights: 3,
key_images: 69,
num_outputs: 41,


@ -31,3 +31,25 @@ When calling this function, ensure that either:
};
}
pub(super) use doc_add_block_inner_invariant;
/// Generate `# Invariant` documentation for internal alt block `fn`'s
/// that should only be called directly with caution.
///
/// This is pretty much the same as [`doc_add_block_inner_invariant`],
/// it's not worth the effort to reduce the duplication.
macro_rules! doc_add_alt_block_inner_invariant {
() => {
r#"# ⚠️ Invariant ⚠️
This function mainly exists to be used internally by the parent function [`crate::ops::alt_block::add_alt_block`].
`add_alt_block()` makes sure all data related to the input is mutated, while
this function _does not_; it specifically mutates _particular_ tables.
This is usually undesired - although this function is still available to call directly.
When calling this function, ensure that either:
1. This effect (incomplete database mutation) is what is desired, or that...
2. ...the other tables will also be mutated to a correct state"#
};
}
pub(super) use doc_add_alt_block_inner_invariant;


@ -94,7 +94,7 @@
//! // Read the data, assert it is correct.
//! let tx_rw = env_inner.tx_rw()?;
//! let mut tables = env_inner.open_tables_mut(&tx_rw)?;
//! let (height, hash, serai_block) = pop_block(&mut tables)?;
//! let (height, hash, serai_block) = pop_block(None, &mut tables)?;
//!
//! assert_eq!(height, 0);
//! assert_eq!(serai_block, block.block);
@ -102,6 +102,7 @@
//! # Ok(()) }
//! ```
pub mod alt_block;
pub mod block;
pub mod blockchain;
pub mod key_image;


@ -316,7 +316,8 @@ mod test {
// Assert proper tables were added to.
AssertTableLen {
block_infos: 0,
block_blobs: 0,
block_header_blobs: 0,
block_txs_hashes: 0,
block_heights: 0,
key_images: 0,
num_outputs: 1,


@ -366,7 +366,8 @@ mod test {
// Assert only the proper tables were added to.
AssertTableLen {
block_infos: 0,
block_blobs: 0,
block_header_blobs: 0,
block_txs_hashes: 0,
block_heights: 0,
key_images: 4, // added to key images
pruned_tx_blobs: 0,


@ -4,11 +4,14 @@
use std::sync::Arc;
use cuprate_database::{ConcreteEnv, InitError};
use cuprate_types::{AltBlockInformation, VerifiedBlockInformation};
use crate::service::{init_read_service, init_write_service};
use crate::{
config::Config,
service::types::{BlockchainReadHandle, BlockchainWriteHandle},
service::{
init_read_service, init_write_service,
types::{BlockchainReadHandle, BlockchainWriteHandle},
},
};
//---------------------------------------------------------------------------------------------------- Init
@ -81,6 +84,44 @@ pub(super) const fn compact_history_genesis_not_included<const INITIAL_BLOCKS: u
top_block_height > INITIAL_BLOCKS && !(top_block_height - INITIAL_BLOCKS + 2).is_power_of_two()
}
//---------------------------------------------------------------------------------------------------- Map Block
/// Maps [`AltBlockInformation`] to [`VerifiedBlockInformation`]
///
/// # Panics
/// This will panic if the block is invalid, so it should only be used on blocks that have been popped from
/// the main-chain.
pub(super) fn map_valid_alt_block_to_verified_block(
alt_block: AltBlockInformation,
) -> VerifiedBlockInformation {
let total_fees = alt_block.txs.iter().map(|tx| tx.fee).sum::<u64>();
let total_miner_output = alt_block
.block
.miner_transaction
.prefix()
.outputs
.iter()
.map(|out| out.amount.unwrap_or(0))
.sum::<u64>();
VerifiedBlockInformation {
block: alt_block.block,
block_blob: alt_block.block_blob,
txs: alt_block
.txs
.into_iter()
.map(TryInto::try_into)
.collect::<Result<_, _>>()
.unwrap(),
block_hash: alt_block.block_hash,
pow_hash: alt_block.pow_hash,
height: alt_block.height,
generated_coins: total_miner_output - total_fees,
weight: alt_block.weight,
long_term_weight: alt_block.long_term_weight,
cumulative_difficulty: alt_block.cumulative_difficulty,
}
}
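
The `generated_coins` line above is the only arithmetic here: the miner transaction pays out both the newly emitted coins and the fees of the block's transactions, so subtracting `total_fees` from `total_miner_output` recovers the emission. A toy check with hypothetical piconero amounts:

```rust
fn main() {
    // Hypothetical amounts, in piconero.
    let block_reward: u64 = 500_000_000_000; // newly emitted coins
    let total_fees: u64 = 100_000_000_000; // fees paid by the block's txs

    // The miner transaction pays out reward + fees...
    let total_miner_output = block_reward + total_fees;

    // ...so subtracting the fees recovers the coins actually generated.
    assert_eq!(total_miner_output - total_fees, block_reward);
}
```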
//---------------------------------------------------------------------------------------------------- Tests
#[cfg(test)]


@ -94,7 +94,7 @@
//!
//! // Block write was OK.
//! let response = response_channel.await?;
//! assert_eq!(response, BlockchainResponse::WriteBlockOk);
//! assert_eq!(response, BlockchainResponse::Ok);
//!
//! // Now, let's try getting the block hash
//! // of the block we just wrote.


@ -9,6 +9,7 @@ use std::{
use monero_serai::block::Block;
use rayon::{
iter::{IntoParallelIterator, ParallelIterator},
prelude::*,
ThreadPool,
};
use thread_local::ThreadLocal;
@ -18,11 +19,15 @@ use cuprate_database_service::{init_thread_pool, DatabaseReadService, ReaderThre
use cuprate_helper::map::combine_low_high_bits_to_u128;
use cuprate_types::{
blockchain::{BlockchainReadRequest, BlockchainResponse},
Chain, ExtendedBlockHeader, OutputOnChain,
Chain, ChainId, ExtendedBlockHeader, OutputOnChain,
};
use crate::{
ops::{
alt_block::{
get_alt_block, get_alt_block_extended_header_from_height, get_alt_block_hash,
get_alt_chain_history_ranges,
},
block::{
block_exists, get_block, get_block_by_hash, get_block_extended_header,
get_block_extended_header_from_height, get_block_extended_header_top, get_block_height,
@ -36,8 +41,10 @@ use crate::{
free::{compact_history_genesis_not_included, compact_history_index_to_height_offset},
types::{BlockchainReadHandle, ResponseResult},
},
tables::{BlockBlobs, BlockHeights, BlockInfos, KeyImages, OpenTables, Tables},
types::{Amount, AmountIndex, BlockHash, BlockHeight, KeyImage, PreRctOutputId},
tables::{AltBlockHeights, BlockHeights, BlockInfos, OpenTables, Tables},
types::{
AltBlockHeight, Amount, AmountIndex, BlockHash, BlockHeight, KeyImage, PreRctOutputId,
},
};
//---------------------------------------------------------------------------------------------------- init_read_service
@ -97,7 +104,7 @@ fn map_request(
R::TopBlockFull => top_block_full(env),
R::CurrentHardFork => current_hard_fork(env),
R::BlockHash(height, chain) => block_hash(env, height, chain),
R::FindBlock(_) => todo!("Add alt blocks to DB"),
R::FindBlock(block_hash) => find_block(env, block_hash),
R::FilterUnknownHashes(hashes) => filter_unknown_hashes(env, hashes),
R::BlockExtendedHeaderInRange(range, chain) => {
block_extended_header_in_range(env, range, chain)
@ -111,6 +118,7 @@ fn map_request(
R::CompactChainHistory => compact_chain_history(env),
R::FindFirstUnknown(block_ids) => find_first_unknown(env, &block_ids),
R::CumulativeBlockWeightLimit => cumulative_block_weight_limit(env),
R::AltBlocksInChain(chain_id) => alt_blocks_in_chain(env, chain_id),
}
/* SOMEDAY: post-request handling, run some code for each request? */
@ -154,7 +162,6 @@ fn thread_local<T: Send>(env: &impl Env) -> ThreadLocal<T> {
macro_rules! get_tables {
($env_inner:ident, $tx_ro:ident, $tables:ident) => {{
$tables.get_or_try(|| {
#[allow(clippy::significant_drop_in_scrutinee)]
match $env_inner.open_tables($tx_ro) {
// SAFETY: see above macro doc comment.
Ok(tables) => Ok(unsafe { crate::unsafe_sendable::UnsafeSendable::new(tables) }),
@ -190,46 +197,49 @@ macro_rules! get_tables {
/// [`BlockchainReadRequest::Block`].
#[inline]
fn block(env: &ConcreteEnv, block_height: BlockHeight) -> ResponseResult {
// Single-threaded, no `ThreadLocal` required.
let env_inner = env.env_inner();
let tx_ro = env_inner.tx_ro()?;
let table_block_blobs = env_inner.open_db_ro::<BlockBlobs>(&tx_ro)?;
Ok(todo!())
// // Single-threaded, no `ThreadLocal` required.
// let env_inner = env.env_inner();
// let tx_ro = env_inner.tx_ro()?;
// let table_block_blobs = env_inner.open_db_ro::<BlockBlobs>(&tx_ro)?;
Ok(BlockchainResponse::Block(get_block(
&block_height,
&table_block_blobs,
)?))
// Ok(BlockchainResponse::Block(get_block(
// &block_height,
// &table_block_blobs,
// )?))
}
/// [`BlockchainReadRequest::BlockByHash`].
#[inline]
fn block_by_hash(env: &ConcreteEnv, block_hash: BlockHash) -> ResponseResult {
// Single-threaded, no `ThreadLocal` required.
let env_inner = env.env_inner();
let tx_ro = env_inner.tx_ro()?;
let table_block_heights = env_inner.open_db_ro::<BlockHeights>(&tx_ro)?;
let table_block_blobs = env_inner.open_db_ro::<BlockBlobs>(&tx_ro)?;
Ok(todo!())
// // Single-threaded, no `ThreadLocal` required.
// let env_inner = env.env_inner();
// let tx_ro = env_inner.tx_ro()?;
// let table_block_heights = env_inner.open_db_ro::<BlockHeights>(&tx_ro)?;
// let table_block_blobs = env_inner.open_db_ro::<BlockBlobs>(&tx_ro)?;
Ok(BlockchainResponse::BlockByHash(get_block_by_hash(
&block_hash,
&table_block_heights,
&table_block_blobs,
)?))
// Ok(BlockchainResponse::BlockByHash(get_block_by_hash(
// &block_hash,
// &table_block_heights,
// &table_block_blobs,
// )?))
}
/// [`BlockchainReadRequest::TopBlock`].
#[inline]
fn top_block(env: &ConcreteEnv) -> ResponseResult {
// Single-threaded, no `ThreadLocal` required.
let env_inner = env.env_inner();
let tx_ro = env_inner.tx_ro()?;
let table_block_heights = env_inner.open_db_ro::<BlockHeights>(&tx_ro)?;
let table_block_blobs = env_inner.open_db_ro::<BlockBlobs>(&tx_ro)?;
Ok(todo!())
// // Single-threaded, no `ThreadLocal` required.
// let env_inner = env.env_inner();
// let tx_ro = env_inner.tx_ro()?;
// let table_block_heights = env_inner.open_db_ro::<BlockHeights>(&tx_ro)?;
// let table_block_blobs = env_inner.open_db_ro::<BlockBlobs>(&tx_ro)?;
Ok(BlockchainResponse::TopBlock(get_top_block(
&table_block_heights,
&table_block_blobs,
)?))
// Ok(BlockchainResponse::TopBlock(get_top_block(
// &table_block_heights,
// &table_block_blobs,
// )?))
}
/// [`BlockchainReadRequest::BlockExtendedHeader`].
@ -307,12 +317,41 @@ fn block_hash(env: &ConcreteEnv, block_height: BlockHeight, chain: Chain) -> Res
let block_hash = match chain {
Chain::Main => get_block_info(&block_height, &table_block_infos)?.block_hash,
Chain::Alt(_) => todo!("Add alt blocks to DB"),
Chain::Alt(chain) => {
get_alt_block_hash(&block_height, chain, &env_inner.open_tables(&tx_ro)?)?
}
};
Ok(BlockchainResponse::BlockHash(block_hash))
}
/// [`BlockchainReadRequest::FindBlock`]
fn find_block(env: &ConcreteEnv, block_hash: BlockHash) -> ResponseResult {
// Single-threaded, no `ThreadLocal` required.
let env_inner = env.env_inner();
let tx_ro = env_inner.tx_ro()?;
let table_block_heights = env_inner.open_db_ro::<BlockHeights>(&tx_ro)?;
// Check the main chain first.
match table_block_heights.get(&block_hash) {
Ok(height) => return Ok(BlockchainResponse::FindBlock(Some((Chain::Main, height)))),
Err(RuntimeError::KeyNotFound) => (),
Err(e) => return Err(e),
}
let table_alt_block_heights = env_inner.open_db_ro::<AltBlockHeights>(&tx_ro)?;
match table_alt_block_heights.get(&block_hash) {
Ok(height) => Ok(BlockchainResponse::FindBlock(Some((
Chain::Alt(height.chain_id.into()),
height.height,
)))),
Err(RuntimeError::KeyNotFound) => Ok(BlockchainResponse::FindBlock(None)),
Err(e) => Err(e),
}
}
/// [`BlockchainReadRequest::FilterUnknownHashes`].
#[inline]
fn filter_unknown_hashes(env: &ConcreteEnv, mut hashes: HashSet<BlockHash>) -> ResponseResult {
@ -363,7 +402,37 @@ fn block_extended_header_in_range(
get_block_extended_header_from_height(&block_height, tables)
})
.collect::<Result<Vec<ExtendedBlockHeader>, RuntimeError>>()?,
Chain::Alt(_) => todo!("Add alt blocks to DB"),
Chain::Alt(chain_id) => {
let ranges = {
let tx_ro = tx_ro.get_or_try(|| env_inner.tx_ro())?;
let tables = get_tables!(env_inner, tx_ro, tables)?.as_ref();
let alt_chains = tables.alt_chain_infos();
get_alt_chain_history_ranges(range, chain_id, alt_chains)?
};
ranges
.par_iter()
.rev()
.flat_map(|(chain, range)| {
range.clone().into_par_iter().map(|height| {
let tx_ro = tx_ro.get_or_try(|| env_inner.tx_ro())?;
let tables = get_tables!(env_inner, tx_ro, tables)?.as_ref();
match *chain {
Chain::Main => get_block_extended_header_from_height(&height, tables),
Chain::Alt(chain_id) => get_alt_block_extended_header_from_height(
&AltBlockHeight {
chain_id: chain_id.into(),
height,
},
tables,
),
}
})
})
.collect::<Result<Vec<_>, _>>()?
}
};
Ok(BlockchainResponse::BlockExtendedHeaderInRange(vec))
@ -448,8 +517,10 @@ fn number_outputs_with_amount(env: &ConcreteEnv, amounts: Vec<Amount>) -> Respon
let tables = thread_local(env);
// Cache the amount of RCT outputs once.
// INVARIANT: #[cfg] @ lib.rs asserts `usize == u64`
#[allow(clippy::cast_possible_truncation)]
#[expect(
clippy::cast_possible_truncation,
reason = "INVARIANT: #[cfg] @ lib.rs asserts `usize == u64`"
)]
let num_rct_outputs = {
let tx_ro = env_inner.tx_ro()?;
let tables = env_inner.open_tables(&tx_ro)?;
@ -469,8 +540,10 @@ fn number_outputs_with_amount(env: &ConcreteEnv, amounts: Vec<Amount>) -> Respon
} else {
// v1 transactions.
match tables.num_outputs().get(&amount) {
// INVARIANT: #[cfg] @ lib.rs asserts `usize == u64`
#[allow(clippy::cast_possible_truncation)]
#[expect(
clippy::cast_possible_truncation,
reason = "INVARIANT: #[cfg] @ lib.rs asserts `usize == u64`"
)]
Ok(count) => Ok((amount, count as usize)),
// If we get a request for an `amount` that doesn't exist,
// we return `0` instead of an error.
@ -487,16 +560,18 @@ fn number_outputs_with_amount(env: &ConcreteEnv, amounts: Vec<Amount>) -> Respon
/// [`BlockchainReadRequest::KeyImageSpent`].
#[inline]
fn key_image_spent(env: &ConcreteEnv, key_image: KeyImage) -> ResponseResult {
// Single-threaded, no `ThreadLocal` required.
let env_inner = env.env_inner();
let tx_ro = env_inner.tx_ro()?;
let table_key_images = env_inner.open_db_ro::<KeyImages>(&tx_ro)?;
Ok(todo!())
match key_image_exists(&key_image, &table_key_images) {
Ok(false) => Ok(BlockchainResponse::KeyImagesSpent(false)), // Key image was NOT found.
Ok(true) => Ok(BlockchainResponse::KeyImagesSpent(true)), // Key image was found.
Err(e) => Err(e), // A database error occurred.
}
// // Single-threaded, no `ThreadLocal` required.
// let env_inner = env.env_inner();
// let tx_ro = env_inner.tx_ro()?;
// let table_key_images = env_inner.open_db_ro::<KeyImages>(&tx_ro)?;
// match key_image_exists(&key_image, &table_key_images) {
// Ok(false) => Ok(BlockchainResponse::KeyImagesSpent(false)), // Key image was NOT found.
// Ok(true) => Ok(BlockchainResponse::KeyImagesSpent(true)), // Key image was found.
// Err(e) => Err(e), // A database error occurred.
// }
}
/// [`BlockchainReadRequest::KeyImagesSpent`].
@ -625,3 +700,45 @@ fn cumulative_block_weight_limit(env: &ConcreteEnv) -> ResponseResult {
Ok(BlockchainResponse::CumulativeBlockWeightLimit(limit))
}
/// [`BlockchainReadRequest::AltBlocksInChain`]
fn alt_blocks_in_chain(env: &ConcreteEnv, chain_id: ChainId) -> ResponseResult {
// Prepare tx/tables in `ThreadLocal`.
let env_inner = env.env_inner();
let tx_ro = thread_local(env);
let tables = thread_local(env);
// Get the history of this alt-chain.
let history = {
let tx_ro = tx_ro.get_or_try(|| env_inner.tx_ro())?;
let tables = get_tables!(env_inner, tx_ro, tables)?.as_ref();
get_alt_chain_history_ranges(0..usize::MAX, chain_id, tables.alt_chain_infos())?
};
// Get all the blocks until we join the main-chain.
let blocks = history
.par_iter()
.rev()
.skip(1)
.flat_map(|(chain_id, range)| {
let Chain::Alt(chain_id) = chain_id else {
panic!("Should not have main chain blocks here we skipped last range");
};
range.clone().into_par_iter().map(|height| {
let tx_ro = tx_ro.get_or_try(|| env_inner.tx_ro())?;
let tables = get_tables!(env_inner, tx_ro, tables)?.as_ref();
get_alt_block(
&AltBlockHeight {
chain_id: (*chain_id).into(),
height,
},
tables,
)
})
})
.collect::<Result<_, _>>()?;
Ok(BlockchainResponse::AltBlocksInChain(blocks))
}


@ -13,13 +13,14 @@ use std::{
};
use pretty_assertions::assert_eq;
use rand::Rng;
use tower::{Service, ServiceExt};
use cuprate_database::{ConcreteEnv, DatabaseIter, DatabaseRo, Env, EnvInner, RuntimeError};
use cuprate_test_utils::data::{BLOCK_V16_TX0, BLOCK_V1_TX2, BLOCK_V9_TX3};
use cuprate_types::{
blockchain::{BlockchainReadRequest, BlockchainResponse, BlockchainWriteRequest},
Chain, OutputOnChain, VerifiedBlockInformation,
Chain, ChainId, OutputOnChain, VerifiedBlockInformation,
};
use crate::{
@ -31,7 +32,7 @@ use crate::{
},
service::{init, BlockchainReadHandle, BlockchainWriteHandle},
tables::{OpenTables, Tables, TablesIter},
tests::AssertTableLen,
tests::{map_verified_block_to_alt, AssertTableLen},
types::{Amount, AmountIndex, PreRctOutputId},
};
@ -58,7 +59,10 @@ fn init_service() -> (
/// - Receive response(s)
/// - Assert proper tables were mutated
/// - Assert read requests lead to expected responses
#[allow(clippy::future_not_send)] // INVARIANT: tests are using a single threaded runtime
#[expect(
clippy::future_not_send,
reason = "INVARIANT: tests are using a single threaded runtime"
)]
async fn test_template(
// Which block(s) to add?
blocks: &[&VerifiedBlockInformation],
@ -84,7 +88,7 @@ async fn test_template(
let request = BlockchainWriteRequest::WriteBlock(block);
let response_channel = writer.call(request);
let response = response_channel.await.unwrap();
assert_eq!(response, BlockchainResponse::WriteBlock);
assert_eq!(response, BlockchainResponse::Ok);
}
//----------------------------------------------------------------------- Reset the transaction
@ -164,8 +168,10 @@ async fn test_template(
num_req
.iter()
.map(|amount| match tables.num_outputs().get(amount) {
// INVARIANT: #[cfg] @ lib.rs asserts `usize == u64`
#[allow(clippy::cast_possible_truncation)]
#[expect(
clippy::cast_possible_truncation,
reason = "INVARIANT: #[cfg] @ lib.rs asserts `usize == u64`"
)]
Ok(count) => (*amount, count as usize),
Err(RuntimeError::KeyNotFound) => (*amount, 0),
Err(e) => panic!("{e:?}"),
@ -235,42 +241,38 @@ async fn test_template(
//----------------------------------------------------------------------- Output checks
// Create the map of amounts and amount indices.
//
// FIXME: There's definitely a better way to map
// `Vec<PreRctOutputId>` -> `HashMap<u64, HashSet<u64>>`
let (map, output_count) = {
let mut ids = tables
.outputs_iter()
.keys()
.unwrap()
.map(Result::unwrap)
.collect::<Vec<PreRctOutputId>>();
ids.extend(
tables
.rct_outputs_iter()
.keys()
.unwrap()
.map(Result::unwrap)
.map(|amount_index| PreRctOutputId {
amount: 0,
amount_index,
}),
);
let mut map = HashMap::<Amount, HashSet<AmountIndex>>::new();
// Used later to check that the amount of Outputs
// returned in the Response is equal to the amount
// we asked for.
let output_count = ids.len();
let mut output_count: usize = 0;
let mut map = HashMap::<Amount, HashSet<AmountIndex>>::new();
for id in ids {
map.entry(id.amount)
.and_modify(|set| {
set.insert(id.amount_index);
})
.or_insert_with(|| HashSet::from([id.amount_index]));
}
tables
.outputs_iter()
.keys()
.unwrap()
.map(Result::unwrap)
.chain(
tables
.rct_outputs_iter()
.keys()
.unwrap()
.map(Result::unwrap)
.map(|amount_index| PreRctOutputId {
amount: 0,
amount_index,
}),
)
.for_each(|id| {
output_count += 1;
map.entry(id.amount)
.and_modify(|set| {
set.insert(id.amount_index);
})
.or_insert_with(|| HashSet::from([id.amount_index]));
});
(map, output_count)
};
@ -304,7 +306,10 @@ async fn test_template(
// Assert we get back the same map of
// `Amount`'s and `AmountIndex`'s.
let mut response_output_count = 0;
#[allow(clippy::iter_over_hash_type)] // order doesn't matter in this test
#[expect(
clippy::iter_over_hash_type,
reason = "order doesn't matter in this test"
)]
for (amount, output_map) in response {
let amount_index_set = &map[&amount];
@ -338,7 +343,8 @@ async fn v1_tx2() {
14_535_350_982_449,
AssertTableLen {
block_infos: 1,
block_blobs: 1,
block_header_blobs: 1,
block_txs_hashes: 1,
block_heights: 1,
key_images: 65,
num_outputs: 41,
@ -364,7 +370,8 @@ async fn v9_tx3() {
3_403_774_022_163,
AssertTableLen {
block_infos: 1,
block_blobs: 1,
block_header_blobs: 1,
block_txs_hashes: 1,
block_heights: 1,
key_images: 4,
num_outputs: 0,
@ -390,7 +397,8 @@ async fn v16_tx0() {
600_000_000_000,
AssertTableLen {
block_infos: 1,
block_blobs: 1,
block_header_blobs: 1,
block_txs_hashes: 1,
block_heights: 1,
key_images: 0,
num_outputs: 0,
@ -407,3 +415,92 @@ async fn v16_tx0() {
)
.await;
}
/// Tests the alt-chain requests and responses.
#[tokio::test]
async fn alt_chain_requests() {
let (reader, mut writer, _, _tempdir) = init_service();
// Set up the test by adding blocks to the main-chain.
for (i, mut block) in [BLOCK_V9_TX3.clone(), BLOCK_V16_TX0.clone()]
.into_iter()
.enumerate()
{
block.height = i;
let request = BlockchainWriteRequest::WriteBlock(block);
writer.call(request).await.unwrap();
}
// Generate the alt-blocks.
let mut prev_hash = BLOCK_V9_TX3.block_hash;
let mut chain_id = 1;
let alt_blocks = [&BLOCK_V16_TX0, &BLOCK_V9_TX3, &BLOCK_V1_TX2]
.into_iter()
.enumerate()
.map(|(i, block)| {
let mut block = (**block).clone();
block.height = i + 1;
block.block.header.previous = prev_hash;
block.block_blob = block.block.serialize();
prev_hash = block.block_hash;
// Randomly either keep the [`ChainId`] the same or change it to a new value.
chain_id += rand::thread_rng().gen_range(0..=1);
map_verified_block_to_alt(block, ChainId(chain_id.try_into().unwrap()))
})
.collect::<Vec<_>>();
for block in &alt_blocks {
// Request a block to be written, assert it was written.
let request = BlockchainWriteRequest::WriteAltBlock(block.clone());
let response_channel = writer.call(request);
let response = response_channel.await.unwrap();
assert_eq!(response, BlockchainResponse::Ok);
}
// Get the full alt-chain
let request = BlockchainReadRequest::AltBlocksInChain(ChainId(chain_id.try_into().unwrap()));
let response = reader.clone().oneshot(request).await.unwrap();
let BlockchainResponse::AltBlocksInChain(blocks) = response else {
panic!("Wrong response type was returned");
};
assert_eq!(blocks.len(), alt_blocks.len());
for (got_block, alt_block) in blocks.into_iter().zip(alt_blocks) {
assert_eq!(got_block.block_blob, alt_block.block_blob);
assert_eq!(got_block.block_hash, alt_block.block_hash);
assert_eq!(got_block.chain_id, alt_block.chain_id);
assert_eq!(got_block.txs, alt_block.txs);
}
// Flush all alt blocks.
let request = BlockchainWriteRequest::FlushAltBlocks;
let response = writer.ready().await.unwrap().call(request).await.unwrap();
assert_eq!(response, BlockchainResponse::Ok);
// Pop blocks from the main chain
let request = BlockchainWriteRequest::PopBlocks(1);
let response = writer.ready().await.unwrap().call(request).await.unwrap();
let BlockchainResponse::PopBlocks(_, old_main_chain_id) = response else {
panic!("Wrong response type was returned");
};
// Check we have popped the top block.
let request = BlockchainReadRequest::ChainHeight;
let response = reader.clone().oneshot(request).await.unwrap();
assert!(matches!(response, BlockchainResponse::ChainHeight(1, _)));
// Attempt to add the popped block back.
let request = BlockchainWriteRequest::ReverseReorg(old_main_chain_id);
let response = writer.ready().await.unwrap().call(request).await.unwrap();
assert_eq!(response, BlockchainResponse::Ok);
// Check we have the popped block back.
let request = BlockchainReadRequest::ChainHeight;
let response = reader.clone().oneshot(request).await.unwrap();
assert!(matches!(response, BlockchainResponse::ChainHeight(2, _)));
}


@ -1,21 +1,31 @@
//! Database writer thread definitions and logic.
//---------------------------------------------------------------------------------------------------- Import
use std::sync::Arc;
use cuprate_database::{ConcreteEnv, Env, EnvInner, RuntimeError, TxRw};
use cuprate_database::{ConcreteEnv, DatabaseRo, Env, EnvInner, RuntimeError, TxRw};
use cuprate_database_service::DatabaseWriteHandle;
use cuprate_types::{
blockchain::{BlockchainResponse, BlockchainWriteRequest},
VerifiedBlockInformation,
AltBlockInformation, Chain, ChainId, VerifiedBlockInformation,
};
use crate::{
ops,
service::types::{BlockchainWriteHandle, ResponseResult},
tables::OpenTables,
ops::{alt_block, block, blockchain},
service::{
free::map_valid_alt_block_to_verified_block,
types::{BlockchainWriteHandle, ResponseResult},
},
tables::{OpenTables, Tables},
types::AltBlockHeight,
};
/// Write functions within this module panic if the write transaction
/// could not be aborted successfully, since aborting is required to maintain atomicity.
///
/// This is the panic message if the `abort()` fails.
const TX_RW_ABORT_FAIL: &str =
"Could not maintain blockchain database atomicity by aborting write transaction";
//---------------------------------------------------------------------------------------------------- init_write_service
/// Initialize the blockchain write service from a [`ConcreteEnv`].
pub fn init_write_service(env: Arc<ConcreteEnv>) -> BlockchainWriteHandle {
@ -30,7 +40,12 @@ fn handle_blockchain_request(
) -> Result<BlockchainResponse, RuntimeError> {
match req {
BlockchainWriteRequest::WriteBlock(block) => write_block(env, block),
BlockchainWriteRequest::PopBlocks(nblocks) => pop_blocks(env, *nblocks),
BlockchainWriteRequest::WriteAltBlock(alt_block) => write_alt_block(env, alt_block),
BlockchainWriteRequest::PopBlocks(numb_blocks) => pop_blocks(env, *numb_blocks),
BlockchainWriteRequest::ReverseReorg(old_main_chain_id) => {
reverse_reorg(env, *old_main_chain_id)
}
BlockchainWriteRequest::FlushAltBlocks => flush_alt_blocks(env),
}
}
@ -51,51 +66,145 @@ fn write_block(env: &ConcreteEnv, block: &VerifiedBlockInformation) -> ResponseR
let result = {
let mut tables_mut = env_inner.open_tables_mut(&tx_rw)?;
ops::block::add_block(block, &mut tables_mut)
block::add_block(block, &mut tables_mut)
};
match result {
Ok(()) => {
TxRw::commit(tx_rw)?;
Ok(BlockchainResponse::WriteBlock)
Ok(BlockchainResponse::Ok)
}
Err(e) => {
// INVARIANT: ensure database atomicity by aborting
// the transaction on `add_block()` failures.
TxRw::abort(tx_rw)
.expect("could not maintain database atomicity by aborting write transaction");
TxRw::abort(tx_rw).expect(TX_RW_ABORT_FAIL);
Err(e)
}
}
}
/// [`BlockchainWriteRequest::WriteAltBlock`].
#[inline]
fn write_alt_block(env: &ConcreteEnv, block: &AltBlockInformation) -> ResponseResult {
let env_inner = env.env_inner();
let tx_rw = env_inner.tx_rw()?;
let result = {
let mut tables_mut = env_inner.open_tables_mut(&tx_rw)?;
alt_block::add_alt_block(block, &mut tables_mut)
};
match result {
Ok(()) => {
TxRw::commit(tx_rw)?;
Ok(BlockchainResponse::Ok)
}
Err(e) => {
TxRw::abort(tx_rw).expect(TX_RW_ABORT_FAIL);
Err(e)
}
}
}
/// [`BlockchainWriteRequest::PopBlocks`].
#[inline]
fn pop_blocks(env: &ConcreteEnv, nblocks: u64) -> ResponseResult {
fn pop_blocks(env: &ConcreteEnv, numb_blocks: usize) -> ResponseResult {
let env_inner = env.env_inner();
let tx_rw = env_inner.tx_rw()?;
let mut tx_rw = env_inner.tx_rw()?;
// FIXME: turn this function into a try block once stable.
let mut result = || {
// Flush all the current alt-blocks, as they may reference blocks that are about to be popped.
alt_block::flush_alt_blocks(&env_inner, &mut tx_rw)?;
let result = || {
let mut tables_mut = env_inner.open_tables_mut(&tx_rw)?;
let mut height = 0;
// generate a `ChainId` for the popped blocks.
let old_main_chain_id = ChainId(rand::random());
for _ in 0..nblocks {
(height, _, _) = ops::block::pop_block(&mut tables_mut)?;
// pop the blocks
for _ in 0..numb_blocks {
block::pop_block(Some(old_main_chain_id), &mut tables_mut)?;
}
Ok(height)
Ok(old_main_chain_id)
};
match result() {
Ok(height) => {
Ok(old_main_chain_id) => {
TxRw::commit(tx_rw)?;
Ok(BlockchainResponse::PopBlocks(height))
Ok(BlockchainResponse::PopBlocks(todo!(), old_main_chain_id))
}
Err(e) => {
// INVARIANT: ensure database atomicity by aborting
// the transaction on `add_block()` failures.
TxRw::abort(tx_rw)
.expect("could not maintain database atomicity by aborting write transaction");
TxRw::abort(tx_rw).expect(TX_RW_ABORT_FAIL);
Err(e)
}
}
}
/// [`BlockchainWriteRequest::ReverseReorg`].
fn reverse_reorg(env: &ConcreteEnv, chain_id: ChainId) -> ResponseResult {
let env_inner = env.env_inner();
let mut tx_rw = env_inner.tx_rw()?;
// FIXME: turn this function into a try block once stable.
let mut result = || {
let mut tables_mut = env_inner.open_tables_mut(&tx_rw)?;
let chain_info = tables_mut.alt_chain_infos().get(&chain_id.into())?;
// Although this doesn't guarantee the chain was popped from the main-chain, it's an easy
// thing for us to check.
assert_eq!(Chain::from(chain_info.parent_chain), Chain::Main);
let top_block_height = blockchain::top_block_height(tables_mut.block_heights())?;
// pop any blocks that were added as part of a re-org.
for _ in chain_info.common_ancestor_height..top_block_height {
block::pop_block(None, &mut tables_mut)?;
}
// Add the old main chain blocks back to the main chain.
for height in (chain_info.common_ancestor_height + 1)..chain_info.chain_height {
let alt_block = alt_block::get_alt_block(
&AltBlockHeight {
chain_id: chain_id.into(),
height,
},
&tables_mut,
)?;
let verified_block = map_valid_alt_block_to_verified_block(alt_block);
block::add_block(&verified_block, &mut tables_mut)?;
}
drop(tables_mut);
alt_block::flush_alt_blocks(&env_inner, &mut tx_rw)?;
Ok(())
};
match result() {
Ok(()) => {
TxRw::commit(tx_rw)?;
Ok(BlockchainResponse::Ok)
}
Err(e) => {
TxRw::abort(tx_rw).expect(TX_RW_ABORT_FAIL);
Err(e)
}
}
}
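
Conceptually, `reverse_reorg` rewinds the main-chain to the recorded common ancestor and then replays the stashed old main-chain blocks on top. A toy model of that shape over plain vectors (hypothetical; it ignores transactions, tables, and atomicity entirely):

```rust
fn reverse_reorg(main_chain: &mut Vec<u64>, common_ancestor_height: usize, stashed: &[u64]) {
    // Pop any blocks that were added on top of the common ancestor during the re-org...
    main_chain.truncate(common_ancestor_height + 1);
    // ...then add the old main-chain blocks back.
    main_chain.extend_from_slice(stashed);
}

fn main() {
    // Heights 0..=2 are shared; block `99` came from the (now reverted) re-org.
    let mut main_chain = vec![10, 11, 12, 99];
    reverse_reorg(&mut main_chain, 2, &[13, 14]);
    assert_eq!(main_chain, vec![10, 11, 12, 13, 14]);
}
```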
/// [`BlockchainWriteRequest::FlushAltBlocks`].
#[inline]
fn flush_alt_blocks(env: &ConcreteEnv) -> ResponseResult {
let env_inner = env.env_inner();
let mut tx_rw = env_inner.tx_rw()?;
let result = alt_block::flush_alt_blocks(&env_inner, &mut tx_rw);
match result {
Ok(()) => {
TxRw::commit(tx_rw)?;
Ok(BlockchainResponse::Ok)
}
Err(e) => {
TxRw::abort(tx_rw).expect(TX_RW_ABORT_FAIL);
Err(e)
}
}


@ -9,7 +9,7 @@
//! Table structs are `CamelCase`, and their static string
//! names used by the actual database backend are `snake_case`.
//!
//! For example: [`BlockBlobs`] -> `block_blobs`.
//! For example: [`BlockHeaderBlobs`] -> `block_header_blobs`.
//!
//! # Traits
//! This module also contains a set of traits for
@ -17,9 +17,10 @@
//---------------------------------------------------------------------------------------------------- Import
use crate::types::{
Amount, AmountIndex, AmountIndices, BlockBlob, BlockHash, BlockHeight, BlockInfo, KeyImage,
Output, PreRctOutputId, PrunableBlob, PrunableHash, PrunedBlob, RctOutput, TxBlob, TxHash,
TxId, UnlockTime,
AltBlockHeight, AltChainInfo, AltTransactionInfo, Amount, AmountIndex, AmountIndices,
BlockBlob, BlockHash, BlockHeaderBlob, BlockHeight, BlockInfo, BlockTxHashes,
CompactAltBlockInfo, KeyImage, Output, PreRctOutputId, PrunableBlob, PrunableHash, PrunedBlob,
RawChainId, RctOutput, TxBlob, TxHash, TxId, UnlockTime,
};
//---------------------------------------------------------------------------------------------------- Tables
@ -29,22 +30,28 @@ use crate::types::{
// - If adding/changing a table also edit:
// - the tests in `src/backend/tests.rs`
cuprate_database::define_tables! {
/// Serialized block blobs (bytes).
/// Serialized block header blobs (bytes).
///
/// Contains the serialized version of all blocks.
0 => BlockBlobs,
BlockHeight => BlockBlob,
/// Contains the serialized version of all block headers.
0 => BlockHeaderBlobs,
BlockHeight => BlockHeaderBlob,
/// Block transaction hashes.
///
/// Contains all the transaction hashes of all blocks.
1 => BlockTxsHashes,
BlockHeight => BlockTxHashes,
/// Block heights.
///
/// Contains the height of all blocks.
1 => BlockHeights,
2 => BlockHeights,
BlockHash => BlockHeight,
/// Block information.
///
/// Contains metadata of all blocks.
2 => BlockInfos,
3 => BlockInfos,
BlockHeight => BlockInfo,
/// Set of key images.
@ -53,38 +60,38 @@ cuprate_database::define_tables! {
///
/// This table has `()` as the value type, as in,
/// it is a set of key images.
3 => KeyImages,
4 => KeyImages,
KeyImage => (),
/// Maps an output's amount to the number of outputs with that amount.
///
/// For example, if there are 5 outputs with `amount = 123`
/// then calling `get(123)` on this table will return 5.
4 => NumOutputs,
5 => NumOutputs,
Amount => u64,
/// Pre-RCT output data.
5 => Outputs,
6 => Outputs,
PreRctOutputId => Output,
/// Pruned transaction blobs (bytes).
///
/// Contains the pruned portion of serialized transaction data.
6 => PrunedTxBlobs,
7 => PrunedTxBlobs,
TxId => PrunedBlob,
/// Prunable transaction blobs (bytes).
///
/// Contains the prunable portion of serialized transaction data.
// SOMEDAY: impl when `monero-serai` supports pruning
7 => PrunableTxBlobs,
8 => PrunableTxBlobs,
TxId => PrunableBlob,
/// Prunable transaction hashes.
///
/// Contains the prunable portion of transaction hashes.
// SOMEDAY: impl when `monero-serai` supports pruning
8 => PrunableHashes,
9 => PrunableHashes,
TxId => PrunableHash,
// SOMEDAY: impl a properties table:
@ -94,41 +101,75 @@ cuprate_database::define_tables! {
// StorableString => StorableVec,
/// RCT output data.
9 => RctOutputs,
10 => RctOutputs,
AmountIndex => RctOutput,
/// Transaction blobs (bytes).
///
/// Contains the serialized version of all transactions.
// SOMEDAY: remove when `monero-serai` supports pruning
10 => TxBlobs,
11 => TxBlobs,
TxId => TxBlob,
/// Transaction indices.
///
/// Contains the indices of all transactions.
11 => TxIds,
12 => TxIds,
TxHash => TxId,
/// Transaction heights.
///
/// Contains the block height associated with all transactions.
12 => TxHeights,
13 => TxHeights,
TxId => BlockHeight,
/// Transaction outputs.
///
/// Contains the list of `AmountIndex`'s of the
/// outputs associated with all transactions.
13 => TxOutputs,
14 => TxOutputs,
TxId => AmountIndices,
/// Transaction unlock time.
///
/// Contains the unlock time of transactions IF they have one.
/// Transactions without unlock times will not exist in this table.
14 => TxUnlockTime,
15 => TxUnlockTime,
TxId => UnlockTime,
/// Information on alt-chains.
16 => AltChainInfos,
RawChainId => AltChainInfo,
/// Alt-block heights.
///
/// Contains the height of all alt-blocks.
17 => AltBlockHeights,
BlockHash => AltBlockHeight,
/// Alt-block information.
///
/// Contains information on all alt-blocks.
18 => AltBlocksInfo,
AltBlockHeight => CompactAltBlockInfo,
/// Alt-block blobs.
///
/// Contains the raw bytes of all alt-blocks.
19 => AltBlockBlobs,
AltBlockHeight => BlockBlob,
/// Alt-block transaction blobs.
///
/// Contains the raw bytes of alt transactions, if those transactions are not in the main-chain.
20 => AltTransactionBlobs,
TxHash => TxBlob,
/// Alt-block transaction information.
///
/// Contains information on all alt transactions, even if they are in the main-chain.
21 => AltTransactionInfos,
TxHash => AltTransactionInfo,
}
//---------------------------------------------------------------------------------------------------- Tests


@ -9,7 +9,8 @@ use std::{borrow::Cow, fmt::Debug};
use pretty_assertions::assert_eq;
use cuprate_database::{ConcreteEnv, DatabaseRo, Env, EnvInner};
use cuprate_database::{DatabaseRo, Env, EnvInner};
use cuprate_types::{AltBlockInformation, ChainId, VerifiedBlockInformation};
use crate::{
config::ConfigBuilder,
@ -25,7 +26,8 @@ use crate::{
#[derive(Copy, Clone, Debug, Default, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub(crate) struct AssertTableLen {
pub(crate) block_infos: u64,
pub(crate) block_blobs: u64,
pub(crate) block_header_blobs: u64,
pub(crate) block_txs_hashes: u64,
pub(crate) block_heights: u64,
pub(crate) key_images: u64,
pub(crate) num_outputs: u64,
@ -45,7 +47,8 @@ impl AssertTableLen {
pub(crate) fn assert(self, tables: &impl Tables) {
let other = Self {
block_infos: tables.block_infos().len().unwrap(),
block_blobs: tables.block_blobs().len().unwrap(),
block_header_blobs: tables.block_header_blobs().len().unwrap(),
block_txs_hashes: tables.block_txs_hashes().len().unwrap(),
block_heights: tables.block_heights().len().unwrap(),
key_images: tables.key_images().len().unwrap(),
num_outputs: tables.num_outputs().len().unwrap(),
@ -68,8 +71,7 @@ impl AssertTableLen {
/// Create an `Env` in a temporarily directory.
/// The directory is automatically removed after the `TempDir` is dropped.
///
/// FIXME: changing this to `-> impl Env` causes lifetime errors...
pub(crate) fn tmp_concrete_env() -> (ConcreteEnv, tempfile::TempDir) {
pub(crate) fn tmp_concrete_env() -> (impl Env, tempfile::TempDir) {
let tempdir = tempfile::tempdir().unwrap();
let config = ConfigBuilder::new()
.db_directory(Cow::Owned(tempdir.path().into()))
@ -81,10 +83,28 @@ pub(crate) fn tmp_concrete_env() -> (ConcreteEnv, tempfile::TempDir) {
}
/// Assert all the tables in the environment are empty.
pub(crate) fn assert_all_tables_are_empty(env: &ConcreteEnv) {
pub(crate) fn assert_all_tables_are_empty(env: &impl Env) {
let env_inner = env.env_inner();
let tx_ro = env_inner.tx_ro().unwrap();
let tables = env_inner.open_tables(&tx_ro).unwrap();
assert!(tables.all_tables_empty().unwrap());
assert_eq!(crate::ops::tx::get_num_tx(tables.tx_ids()).unwrap(), 0);
}
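/// Map a [`VerifiedBlockInformation`] to an [`AltBlockInformation`] under the given [`ChainId`].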
pub(crate) fn map_verified_block_to_alt(
verified_block: VerifiedBlockInformation,
chain_id: ChainId,
) -> AltBlockInformation {
AltBlockInformation {
block: verified_block.block,
block_blob: verified_block.block_blob,
txs: verified_block.txs,
block_hash: verified_block.block_hash,
pow_hash: verified_block.pow_hash,
height: verified_block.height,
weight: verified_block.weight,
long_term_weight: verified_block.long_term_weight,
cumulative_difficulty: verified_block.cumulative_difficulty,
chain_id,
}
}


@ -41,12 +41,14 @@
#![forbid(unsafe_code)] // if you remove this line i will steal your monero
//---------------------------------------------------------------------------------------------------- Import
use bytemuck::{Pod, Zeroable};
use std::num::NonZero;
use bytemuck::{Pod, Zeroable};
#[cfg(feature = "serde")]
use serde::{Deserialize, Serialize};
use cuprate_database::{Key, StorableVec};
use cuprate_types::{Chain, ChainId};
//---------------------------------------------------------------------------------------------------- Aliases
// These type aliases exist as many Monero-related types are the exact same.
@ -64,6 +66,12 @@ pub type AmountIndices = StorableVec<AmountIndex>;
/// A serialized block.
pub type BlockBlob = StorableVec<u8>;
/// A serialized block header.
pub type BlockHeaderBlob = StorableVec<u8>;
/// A block's transaction hashes.
pub type BlockTxHashes = StorableVec<[u8; 32]>;
/// A block's hash.
pub type BlockHash = [u8; 32];
@ -164,6 +172,7 @@ impl Key for PreRctOutputId {}
/// block_hash: [54; 32],
/// cumulative_rct_outs: 2389,
/// long_term_weight: 2389,
/// mining_tx_index: 23
/// };
/// let b = Storable::as_bytes(&a);
/// let c: BlockInfo = Storable::from_bytes(b);
@ -173,7 +182,7 @@ impl Key for PreRctOutputId {}
/// # Size & Alignment
/// ```rust
/// # use cuprate_blockchain::types::*;
/// assert_eq!(size_of::<BlockInfo>(), 88);
/// assert_eq!(size_of::<BlockInfo>(), 96);
/// assert_eq!(align_of::<BlockInfo>(), 8);
/// ```
#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
@ -187,7 +196,7 @@ pub struct BlockInfo {
/// The adjusted block size, in bytes.
///
/// See [`block_weight`](https://monero-book.cuprate.org/consensus_rules/blocks/weights.html#blocks-weight).
pub weight: u64,
pub weight: usize,
/// Least-significant 64 bits of the 128-bit cumulative difficulty.
pub cumulative_difficulty_low: u64,
/// Most-significant 64 bits of the 128-bit cumulative difficulty.
@ -199,7 +208,9 @@ pub struct BlockInfo {
/// The long term block weight, based on the median weight of the preceding `100_000` blocks.
///
/// See [`long_term_weight`](https://monero-book.cuprate.org/consensus_rules/blocks/weights.html#long-term-block-weight).
pub long_term_weight: u64,
pub long_term_weight: usize,
/// [`TxId`] (u64) of the block coinbase transaction.
pub mining_tx_index: TxId,
}
//---------------------------------------------------------------------------------------------------- OutputFlags
@ -324,6 +335,259 @@ pub struct RctOutput {
}
// TODO: local_index?
//---------------------------------------------------------------------------------------------------- RawChain
/// [`Chain`] in a format which can be stored in the DB.
///
/// Implements [`Into`] and [`From`] for [`Chain`].
///
/// ```rust
/// # use std::borrow::*;
/// # use cuprate_blockchain::{*, types::*};
/// use cuprate_database::Storable;
/// use cuprate_types::Chain;
///
/// // Assert Storable is correct.
/// let a: RawChain = Chain::Main.into();
/// let b = Storable::as_bytes(&a);
/// let c: RawChain = Storable::from_bytes(b);
/// assert_eq!(a, c);
/// ```
///
/// # Size & Alignment
/// ```rust
/// # use cuprate_blockchain::types::*;
/// assert_eq!(size_of::<RawChain>(), 8);
/// assert_eq!(align_of::<RawChain>(), 8);
/// ```
#[derive(Copy, Clone, Debug, PartialEq, PartialOrd, Eq, Ord, Hash, Pod, Zeroable)]
#[repr(transparent)]
pub struct RawChain(u64);
impl From<Chain> for RawChain {
fn from(value: Chain) -> Self {
match value {
Chain::Main => Self(0),
Chain::Alt(chain_id) => Self(chain_id.0.get()),
}
}
}
impl From<RawChain> for Chain {
fn from(value: RawChain) -> Self {
NonZero::new(value.0).map_or(Self::Main, |id| Self::Alt(ChainId(id)))
}
}
impl From<RawChainId> for RawChain {
fn from(value: RawChainId) -> Self {
// A [`ChainId`] with an inner value of `0` is invalid.
assert_ne!(value.0, 0);
Self(value.0)
}
}
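
The `0 = Main` encoding above works because a valid `ChainId` wraps a `NonZero` and therefore can never be zero, leaving the zero value free to represent the main-chain. A tiny standalone probe of the same trick with hypothetical types:

```rust
use std::num::NonZero;

// Hypothetical mirror of the `RawChain` decoding; illustration only.
#[derive(Debug, PartialEq)]
enum Chain {
    Main,
    Alt(NonZero<u64>),
}

fn decode(raw: u64) -> Chain {
    // `0` is reserved for the main-chain; any other value is an alt-chain ID.
    NonZero::new(raw).map_or(Chain::Main, Chain::Alt)
}

fn main() {
    assert_eq!(decode(0), Chain::Main);
    assert_eq!(decode(7), Chain::Alt(NonZero::new(7).unwrap()));
}
```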
//---------------------------------------------------------------------------------------------------- RawChainId
/// [`ChainId`] in a format which can be stored in the DB.
///
/// Implements [`Into`] and [`From`] for [`ChainId`].
///
/// ```rust
/// # use std::borrow::*;
/// # use cuprate_blockchain::{*, types::*};
/// use cuprate_database::Storable;
/// use cuprate_types::ChainId;
///
/// // Assert Storable is correct.
/// let a: RawChainId = ChainId(10.try_into().unwrap()).into();
/// let b = Storable::as_bytes(&a);
/// let c: RawChainId = Storable::from_bytes(b);
/// assert_eq!(a, c);
/// ```
///
/// # Size & Alignment
/// ```rust
/// # use cuprate_blockchain::types::*;
/// assert_eq!(size_of::<RawChainId>(), 8);
/// assert_eq!(align_of::<RawChainId>(), 8);
/// ```
#[derive(Copy, Clone, Debug, PartialEq, PartialOrd, Eq, Ord, Hash, Pod, Zeroable)]
#[repr(transparent)]
pub struct RawChainId(u64);
impl From<ChainId> for RawChainId {
fn from(value: ChainId) -> Self {
Self(value.0.get())
}
}
impl From<RawChainId> for ChainId {
fn from(value: RawChainId) -> Self {
Self(NonZero::new(value.0).expect("RawChainId cannot have a value of `0`"))
}
}
impl Key for RawChainId {}
//---------------------------------------------------------------------------------------------------- AltChainInfo
/// Information on an alternative chain.
///
/// ```rust
/// # use std::borrow::*;
/// # use cuprate_blockchain::{*, types::*};
/// use cuprate_database::Storable;
/// use cuprate_types::Chain;
///
/// // Assert Storable is correct.
/// let a: AltChainInfo = AltChainInfo {
/// parent_chain: Chain::Main.into(),
/// common_ancestor_height: 0,
/// chain_height: 1,
/// };
/// let b = Storable::as_bytes(&a);
/// let c: AltChainInfo = Storable::from_bytes(b);
/// assert_eq!(a, c);
/// ```
///
/// # Size & Alignment
/// ```rust
/// # use cuprate_blockchain::types::*;
/// assert_eq!(size_of::<AltChainInfo>(), 24);
/// assert_eq!(align_of::<AltChainInfo>(), 8);
/// ```
#[derive(Copy, Clone, Debug, PartialEq, PartialOrd, Eq, Ord, Hash, Pod, Zeroable)]
#[repr(C)]
pub struct AltChainInfo {
/// The chain this alt chain forks from.
pub parent_chain: RawChain,
/// The height of the highest block shared with the parent chain (the common ancestor).
pub common_ancestor_height: usize,
/// The chain height of the blocks in this alt chain.
pub chain_height: usize,
}
//---------------------------------------------------------------------------------------------------- AltBlockHeight
/// Represents the height of a block on an alt-chain.
///
/// ```rust
/// # use std::borrow::*;
/// # use cuprate_blockchain::{*, types::*};
/// use cuprate_database::Storable;
/// use cuprate_types::ChainId;
///
/// // Assert Storable is correct.
/// let a: AltBlockHeight = AltBlockHeight {
/// chain_id: ChainId(1.try_into().unwrap()).into(),
/// height: 1,
/// };
/// let b = Storable::as_bytes(&a);
/// let c: AltBlockHeight = Storable::from_bytes(b);
/// assert_eq!(a, c);
/// ```
///
/// # Size & Alignment
/// ```rust
/// # use cuprate_blockchain::types::*;
/// assert_eq!(size_of::<AltBlockHeight>(), 16);
/// assert_eq!(align_of::<AltBlockHeight>(), 8);
/// ```
#[derive(Copy, Clone, Debug, PartialEq, PartialOrd, Eq, Ord, Hash, Pod, Zeroable)]
#[repr(C)]
pub struct AltBlockHeight {
/// The [`ChainId`] of the chain this alt block is on, in raw form.
pub chain_id: RawChainId,
/// The height of this alt-block.
pub height: usize,
}
impl Key for AltBlockHeight {}
//---------------------------------------------------------------------------------------------------- CompactAltBlockInfo
/// Represents information on an alt-chain.
///
/// ```rust
/// # use std::borrow::*;
/// # use cuprate_blockchain::{*, types::*};
/// use cuprate_database::Storable;
///
/// // Assert Storable is correct.
/// let a: CompactAltBlockInfo = CompactAltBlockInfo {
/// block_hash: [1; 32],
/// pow_hash: [2; 32],
/// height: 10,
/// weight: 20,
/// long_term_weight: 30,
/// cumulative_difficulty_low: 40,
/// cumulative_difficulty_high: 50,
/// };
///
/// let b = Storable::as_bytes(&a);
/// let c: CompactAltBlockInfo = Storable::from_bytes(b);
/// assert_eq!(a, c);
/// ```
///
/// # Size & Alignment
/// ```rust
/// # use cuprate_blockchain::types::*;
/// assert_eq!(size_of::<CompactAltBlockInfo>(), 104);
/// assert_eq!(align_of::<CompactAltBlockInfo>(), 8);
/// ```
#[derive(Copy, Clone, Debug, PartialEq, PartialOrd, Eq, Ord, Hash, Pod, Zeroable)]
#[repr(C)]
pub struct CompactAltBlockInfo {
/// The block's hash.
pub block_hash: [u8; 32],
/// The block's proof-of-work hash.
pub pow_hash: [u8; 32],
/// The block's height.
pub height: usize,
/// The adjusted block size, in bytes.
pub weight: usize,
/// The long-term block weight, i.e. this block's weight factored in with previous blocks' weights.
pub long_term_weight: usize,
/// The low 64 bits of the cumulative difficulty.
pub cumulative_difficulty_low: u64,
/// The high 64 bits of the cumulative difficulty.
pub cumulative_difficulty_high: u64,
}
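// An illustrative sketch (not part of the real file): the cumulative
// difficulty is a 128-bit value stored as two `u64` halves, presumably to
// keep the 8-byte alignment asserted above; recombining is `(high << 64) | low`.
#[cfg(test)]
mod difficulty_split_example {
    use super::CompactAltBlockInfo;

    /// Recombine the split halves into the full 128-bit cumulative difficulty.
    fn cumulative_difficulty(info: &CompactAltBlockInfo) -> u128 {
        (u128::from(info.cumulative_difficulty_high) << 64)
            | u128::from(info.cumulative_difficulty_low)
    }

    #[test]
    fn recombine() {
        let info = CompactAltBlockInfo {
            block_hash: [1; 32],
            pow_hash: [2; 32],
            height: 10,
            weight: 20,
            long_term_weight: 30,
            cumulative_difficulty_low: 40,
            cumulative_difficulty_high: 50,
        };
        assert_eq!(cumulative_difficulty(&info), (50_u128 << 64) | 40);
    }
}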
//---------------------------------------------------------------------------------------------------- AltTransactionInfo
/// Represents information on an alt transaction.
///
/// ```rust
/// # use std::borrow::*;
/// # use cuprate_blockchain::{*, types::*};
/// use cuprate_database::Storable;
///
/// // Assert Storable is correct.
/// let a: AltTransactionInfo = AltTransactionInfo {
/// tx_weight: 1,
/// fee: 6,
/// tx_hash: [6; 32],
/// };
///
/// let b = Storable::as_bytes(&a);
/// let c: AltTransactionInfo = Storable::from_bytes(b);
/// assert_eq!(a, c);
/// ```
///
/// # Size & Alignment
/// ```rust
/// # use cuprate_blockchain::types::*;
/// assert_eq!(size_of::<AltTransactionInfo>(), 48);
/// assert_eq!(align_of::<AltTransactionInfo>(), 8);
/// ```
#[derive(Copy, Clone, Debug, PartialEq, PartialOrd, Eq, Ord, Hash, Pod, Zeroable)]
#[repr(C)]
pub struct AltTransactionInfo {
/// The transaction's weight.
pub tx_weight: usize,
/// The transaction's total fees.
pub fee: u64,
/// The transaction's hash.
pub tx_hash: [u8; 32],
}
//---------------------------------------------------------------------------------------------------- Tests
#[cfg(test)]
mod test {


@@ -26,7 +26,7 @@ use bytemuck::TransparentWrapper;
 /// Notably, `heed`'s table type uses this inside `service`.
 pub(crate) struct UnsafeSendable<T>(T);
-#[allow(clippy::non_send_fields_in_send_ty)]
+#[expect(clippy::non_send_fields_in_send_ty)]
 // SAFETY: Users ensure that their usage of this type is safe.
 unsafe impl<T> Send for UnsafeSendable<T> {}
@@ -41,7 +41,7 @@ impl<T> UnsafeSendable<T> {
     }
     /// Extract the inner `T`.
-    #[allow(dead_code)]
+    #[expect(dead_code)]
     pub(crate) fn into_inner(self) -> T {
         self.0
     }
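The `#[allow]` → `#[expect]` swaps in this diff use the `expect` attribute stabilized in Rust 1.81: unlike `allow`, `expect` emits an `unfulfilled_lint_expectations` warning if the suppressed lint stops firing, so stale suppressions surface automatically. A standalone sketch of the difference (hypothetical function, not from this codebase):

#[expect(clippy::needless_return, reason = "kept for symmetry")]
fn answer() -> i32 {
    // `clippy::needless_return` fires here and is silenced; deleting the
    // `return` would instead trigger `unfulfilled_lint_expectations` above.
    return 42;
}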


@@ -144,7 +144,7 @@ impl Env for ConcreteEnv {
         // (current disk size) + (a bit of leeway)
         // to account for empty databases where we
         // need to write same tables.
-        #[allow(clippy::cast_possible_truncation)] // only 64-bit targets
+        #[expect(clippy::cast_possible_truncation, reason = "only 64-bit targets")]
         let disk_size_bytes = match std::fs::File::open(&config.db_file) {
             Ok(file) => file.metadata()?.len() as usize,
             // The database file doesn't exist, 0 bytes.
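`clippy::cast_possible_truncation` flags `u64 as usize` because `usize` is only 32 bits wide on some targets; the new `reason` field records that only 64-bit targets are supported. A standalone illustration (hypothetical function, not from this codebase):

#[expect(clippy::cast_possible_truncation, reason = "only 64-bit targets")]
fn file_len_to_usize(len: u64) -> usize {
    // Lossless where `usize` is 64 bits; would truncate on 32-bit targets.
    len as usize
}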


@@ -57,7 +57,10 @@ impl From<heed::Error> for crate::InitError {
 }
 //---------------------------------------------------------------------------------------------------- RuntimeError
-#[allow(clippy::fallible_impl_from)] // We need to panic sometimes.
+#[expect(
+    clippy::fallible_impl_from,
+    reason = "We need to panic sometimes for safety"
+)]
 impl From<heed::Error> for crate::RuntimeError {
     /// # Panics
     /// This will panic on unrecoverable errors for safety.
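`clippy::fallible_impl_from` fires because `From` impls are conventionally infallible, while this one deliberately panics on unrecoverable backend errors. A minimal sketch of the flagged pattern (toy types, not the real error mapping):

struct BackendError(bool /* recoverable */);
struct RuntimeError;

#[expect(clippy::fallible_impl_from, reason = "panic on unrecoverable errors")]
impl From<BackendError> for RuntimeError {
    fn from(e: BackendError) -> Self {
        // Unrecoverable backend errors abort instead of propagating.
        assert!(e.0, "unrecoverable database error");
        Self
    }
}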


@@ -194,7 +194,7 @@ fn db_read_write() {
     // Insert keys.
     let mut key = KEY;
-    #[allow(clippy::explicit_counter_loop)] // we need the +1 side effect
+    #[expect(clippy::explicit_counter_loop, reason = "we need the +1 side effect")]
     for _ in 0..N {
         table.put(&key, &VALUE).unwrap();
         key += 1;
@@ -269,7 +269,7 @@
     assert_ne!(table.get(&KEY).unwrap(), NEW_VALUE);
-    #[allow(unused_assignments)]
+    #[expect(unused_assignments)]
     table
         .update(&KEY, |mut value| {
             value = NEW_VALUE;
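Both hunks above silence lints triggered by intentional side effects in the test. A compact sketch of the counter case (standalone, not the real test):

#[expect(clippy::explicit_counter_loop, reason = "the counter must outlive the loop")]
fn insert_n(store: &mut Vec<u64>, n: u64) -> u64 {
    let mut key = 0;
    for _ in 0..n {
        store.push(key); // the counter is used inside the loop...
        key += 1;
    }
    key // ...and must still be visible afterwards, unlike `enumerate`'s index
}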


@@ -33,7 +33,7 @@
 //! # Ok(()) }
 //! ```
-#[allow(clippy::module_inception)]
+#[expect(clippy::module_inception)]
 mod config;
 pub use config::{Config, ConfigBuilder, READER_THREADS_DEFAULT};
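`clippy::module_inception` fires when a module contains a submodule of the same name (`config::config` here); the inner items are re-exported, so the nesting stays invisible to callers. A standalone sketch of the pattern (toy names, not the real module tree):

mod config {
    #[expect(clippy::module_inception)]
    mod config {
        pub struct Config;
    }
    pub use config::Config;
}
pub use config::Config;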


@@ -54,7 +54,7 @@ pub trait DatabaseIter<T: Table> {
     /// Get an [`Iterator`] that returns the `(key, value)` types for this database.
     #[doc = doc_iter!()]
-    #[allow(clippy::iter_not_returning_iterator)]
+    #[expect(clippy::iter_not_returning_iterator)]
     fn iter(
         &self,
     ) -> Result<impl Iterator<Item = Result<(T::Key, T::Value), RuntimeError>> + '_, RuntimeError>;
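`clippy::iter_not_returning_iterator` expects a method named `iter` to return an `Iterator` directly, but opening a database iterator can fail, so this one wraps it in a `Result`. A toy sketch of the same shape (invented types, not the real trait):

struct Table {
    rows: Vec<u64>,
}

impl Table {
    #[expect(clippy::iter_not_returning_iterator, reason = "iteration can fail to start")]
    fn iter(&self) -> Result<impl Iterator<Item = u64> + '_, String> {
        // The fallible wrapper is what the lint objects to.
        Ok(self.rows.iter().copied())
    }
}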

Some files were not shown because too many files have changed in this diff.