initial database code (#6)

* commit to start the draft pull request.

added a space

* Please don't look too close. It might hurt your eyes

* impl associated types

* errors, docs & divided ro/rw tx

Added some more errors to DB_FAILURES, rewrote the crate docs, and specified a
WriteTransaction subtype which implements the write-mode methods.

* more changes see description

Renamed the blockchain_db folder to database. Implemented (just for testing) get_block_hash, open and from in the Interface.
Also rewrote the declarative macro for tables. Will have to add dummy tables later.
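The declarative table macro mentioned above can be sketched roughly like this: each table becomes a zero-sized marker type implementing a `Table` trait that carries the table name and key/value types. This is an illustrative sketch, not the actual macro from the database crate; the trait and table names here are assumptions.

```rust
// Hypothetical sketch of a `tables!` declarative macro. The `Table` trait and
// the example tables are illustrative stand-ins for the crate's real API.
pub trait Table {
    const TABLE_NAME: &'static str;
    type Key;
    type Value;
}

macro_rules! tables {
    ( $( $name:ident : $key:ty => $value:ty ),+ $(,)? ) => {
        $(
            // Zero-sized marker type identifying the table at the type level.
            #[derive(Clone, Copy, Debug)]
            pub struct $name;
            impl Table for $name {
                const TABLE_NAME: &'static str = stringify!($name);
                type Key = $key;
                type Value = $value;
            }
        )+
    };
}

// Example table definitions (names are assumptions for illustration).
tables! {
    BlockHeaders: [u8; 32] => Vec<u8>,
    BlockHeights: [u8; 32] => u64,
}

fn main() {
    assert_eq!(BlockHeaders::TABLE_NAME, "BlockHeaders");
    println!("{}", BlockHeights::TABLE_NAME);
}
```

Generic code can then be written against `T: Table`, so each backend only needs one implementation per operation rather than one per table.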

* small changes

* Organized modules & implemented get_block_hash

* write prototype & error

Added prototype functions for clear(), put() & delete() in the mdbx implementation. They still don't
consider table flags. Also added a temporary DB_FAILURES::EncodingError for monero-rs consensus_encode
errors. Still have to rethink it, so it can return a reference to the data that can't be encoded.

* Multiple changes

- hse.rs
Added hse.rs, which will contain the db implementations for HSE. Since the codebase can't welcome unsafe
code, the wrapper will be written outside of the project.
- lib.rs
Added a specific FailedToCommit error (will investigate whether it is really necessary).
Added the DupTable trait, which is a Table with DUPSORT/DUPFIXED support, and its declarative macro.
Added two other tables: blockheaders, which gives a block's header for a specified hash, and blockbody, which gives a block's body for a specified hash.
Added Cursor methods, which are likely to be deprecated if I find a way to implement Iterator on top of them.
Added the WriteCursor trait & its methods, which are basically put & del.
Added mandatory Cursor types to Transaction & WriteTransaction.
Refactored the get_block_hash interface method.
- mdbx.rs
Added a partial implementation of the Cursor & WriteCursor traits for libmdbx::Cursor. Only the first() & get() methods are implemented.
Added implementations of get & commit for Transaction.
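The Cursor/WriteCursor split described above can be sketched as a read-only trait with a writable extension, shown here with a toy in-memory cursor. Method names follow the commit's description (first, get, put, del); everything else is an assumption for illustration, not the crate's actual trait definitions.

```rust
// Hypothetical shape of the Cursor / WriteCursor split: read methods on the
// base trait, write methods (put & del) on an extension trait.
pub trait Cursor {
    type Item;
    /// Move to the first entry and return it, if any.
    fn first(&mut self) -> Option<Self::Item>;
    /// Return the entry at the current position, if any.
    fn get(&mut self) -> Option<Self::Item>;
}

pub trait WriteCursor: Cursor {
    fn put(&mut self, item: Self::Item);
    fn del(&mut self);
}

// Minimal in-memory stand-in to show the traits in use.
struct VecCursor {
    data: Vec<u64>,
    pos: usize,
}

impl Cursor for VecCursor {
    type Item = u64;
    fn first(&mut self) -> Option<u64> {
        self.pos = 0;
        self.data.first().copied()
    }
    fn get(&mut self) -> Option<u64> {
        self.data.get(self.pos).copied()
    }
}

impl WriteCursor for VecCursor {
    fn put(&mut self, item: u64) {
        self.data.push(item);
    }
    fn del(&mut self) {
        if self.pos < self.data.len() {
            self.data.remove(self.pos);
        }
    }
}

fn main() {
    let mut c = VecCursor { data: vec![3, 5], pos: 0 };
    c.put(8);
    println!("{:?}", c.first());
}
```

Keeping write operations on a separate trait means a read-only transaction can hand out plain `Cursor`s while a write transaction hands out `WriteCursor`s, mirroring the ro/rw transaction split.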

* put mdbx behind a feature flag with its dependency

* save

* refactored some methods with macros

* more mdbx errors, docs correction, moved to error.rs

* finish nodup mdbx impl, errors.rs, macros, tables

Finished the initial implementation of Cursor, WriteCursor, Transaction and WriteTransaction in mdbx.rs. Corrected some macros in mdbx.rs to simplify the implementations. There is certainly room for more flexible macros. Also added 3 other tables. I started to divide the errors into categories to handle them more easily at a higher level. Due to the large number of errors, I just moved them into another file. There is now a DB_SERIAL enum for errors related to decoding/encoding and a DB_FULL enum for every error where a component exceeds its capacity.
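The error categorization described above can be sketched as nested enums: a top-level failure type whose variants wrap the category enums. Variant names below are guesses following the DB_SERIAL/DB_FULL description; the real variants in errors.rs may differ, and the real crate uses thiserror rather than a hand-written Display impl.

```rust
use std::fmt;

// Illustrative sketch of splitting database failures into categories
// (DB_SERIAL for encoding/decoding, DB_FULL for exhausted capacity).
// Variant names are assumptions, not the actual errors.rs contents.
#[derive(Debug)]
pub enum DbSerial {
    DecodingError,
    EncodingError,
}

#[derive(Debug)]
pub enum DbFull {
    MapFull,
    CursorFull,
}

#[derive(Debug)]
pub enum DbFailure {
    Serial(DbSerial),
    Full(DbFull),
}

impl fmt::Display for DbFailure {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            DbFailure::Serial(e) => write!(f, "serialization failure: {e:?}"),
            DbFailure::Full(e) => write!(f, "capacity exceeded: {e:?}"),
        }
    }
}

fn main() {
    let err = DbFailure::Full(DbFull::MapFull);
    println!("{err}");
}
```

Higher-level code can then match on the category (`Serial` vs `Full`) without enumerating every concrete backend error.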

* bye bye match statement in mdbx.rs

* defined all blockchain tables (not txpool)

* dupsort/fixed support, dupcursor, basic block interface

* tables, types, encoding and documentations

Redefined all the database types from @Boog900's monero-rs db branch and added the needed
implementations. The database now uses bincode2 for encoding and decoding. We observed that bincode was
5 times faster at serializing than monero::consensus_encode. Since we still use monero-rs types but can't implement
foreign traits for them, the encoding module contains a compatibility layer until we switch away from monero-rs and
implement them properly. All the tables are now defined (subject to change if there is a good reason). Added documentation
for modules and types.
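The compatibility layer is needed because of Rust's orphan rule: you cannot implement a foreign trait (e.g. bincode's encode trait) for a foreign type (e.g. a monero-rs type), so the encoding module wraps foreign types in local newtypes. The sketch below shows the pattern with a local stand-in trait instead of the real bincode API; `ForeignHash` and `Compat` are illustrative names, not the crate's actual types.

```rust
// Orphan-rule workaround sketch: `Encodable` stands in for a foreign
// serialization trait, `ForeignHash` for a foreign type we don't own.
pub trait Encodable {
    fn encode(&self) -> Vec<u8>;
}

// Imagine this comes from an external crate (stand-in for a monero-rs type).
pub struct ForeignHash(pub [u8; 32]);

// Local newtype: we own it, so we may implement any trait for it.
pub struct Compat<T>(pub T);

impl Encodable for Compat<ForeignHash> {
    fn encode(&self) -> Vec<u8> {
        // Delegate to the wrapped foreign type's raw bytes.
        self.0 .0.to_vec()
    }
}

fn main() {
    let h = Compat(ForeignHash([7u8; 32]));
    println!("{}", h.encode().len());
}
```

Once the types are defined in-crate instead of borrowed from monero-rs, the trait can be implemented directly and the wrapper layer disappears.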

* replaced macros and added hfversion table

* save

* multiple changes

* modified database schema. deprecated the output global index and split pre-RingCT outputs from RingCT outputs.

* Fixed DupCursor function to return the subkey (thanks to Rust turbofish inference).

* Added some output functions

* Added two new DB_FAILURES, one to handle a prohibited None case and one for undefined cases where a dev message is needed.

* fixed TxOutputIdx: it previously used the global index, now it is a tuple of amount/amount_index.

* i hate lifetimes

* read-only methods now use a read-only tx

* initial output fn

* some tx functions. Yes I'll refactor them

* moved interface into a module

* redefined errors, more tx fn, None->error

* corrected a table + started blk fns

* save

* fixed TxOutputIdx + pop_block

* IIRC I finished initial interface fns

* fixed table name const + db build/check/open fn

* switched important tables to dummy keys + rm blockhfversion

* minor docs correction

* fixed mentioned issues

* make a test bin, just for fun

* fixed issues + cargo fmt

* removed monerod part

* fixed a comment
Someone Else 2023-04-20 17:20:32 +00:00 committed by GitHub
parent fc9b077d94
commit be43216b3f
35 changed files with 4431 additions and 2389 deletions

.gitignore

@@ -1,3 +1,4 @@
target/
Cargo.lock
.vscode
monerod


@@ -18,14 +18,18 @@ authors=[
[workspace]
members = [
"blockchain_db",
"cuprate",
"database",
"net/levin",
"net/monero-wire"
]
[workspace.dependencies]
monero = {version = "*", features = ['serde']}
monero = { version = "*" }
bincode = { version = "2.0.0-rc.3" }
serde = { version = "*", features =["derive"]}
tracing = "*"
tracing-subscriber = "*"
# As suggested by /u/danda :
thiserror = "*"
@@ -38,4 +42,11 @@ lto = "thin"
panic = "abort"
[build]
rustflags=["-Zcf-protection=full", "-Zsanitizer=cfi", "-Crelocation-model=pie", "-Cstack-protector=all"]
linker="clang"
rustflags=[
"-Clink-arg=-fuse-ld=mold",
"-Zcf-protection=full",
"-Zsanitizer=cfi",
"-Crelocation-model=pie",
"-Cstack-protector=all",
]


@@ -1,18 +0,0 @@
[package]
name = "blockchain_db"
version = "0.0.1"
edition = "2021"
rust-version = "1.67.0"
license = "AGPL-3.0-only"
# All Contributors on github
authors=[
"SyntheticBird45 <@someoneelse495495:matrix.org>"
]
[dependencies]
monero = {workspace = true, features = ['serde']}
serde = { workspace = true}
thiserror = {workspace = true }
rocksdb = { version = "*", features = ["multi-threaded-cf"]}


@@ -1,883 +0,0 @@
// Copyright (C) 2023 Cuprate Contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//!
//! blockchain_db crate:
//! Contains the implementation of the interaction between the blockchain and the database backend.
//! There is actually only one storage engine available:
//! - RocksDB
//! There are two other storage engines planned:
//! - HSE (Heterogeneous Storage Engine)
//! - LMDB (like monerod)
#![deny(unused_attributes)]
#![forbid(unsafe_code)]
#![allow(non_camel_case_types)]
#![deny(clippy::expect_used, clippy::panic)]
use thiserror::Error;
use monero::{Hash, Transaction, Block, BlockHeader, consensus::Encodable, util::ringct::RctSig};
use std::{error::Error, ops::Range};
const MONERO_DEFAULT_LOG_CATEGORY: &str = "blockchain.db";
pub type difficulty_type = u128;
type Blobdata = Vec<u8>;
type BlobdataRef = [u8];
type TxOutIndex = (Hash, u64);
/// Methods tracking how a tx was received and relayed
pub enum RelayMethod {
none, //< Received via RPC with `do_not_relay` set
local, //< Received via RPC; trying to send over i2p/tor, etc.
forward, //< Received over i2p/tor; timer delayed before ipv4/6 public broadcast
stem, //< Received/send over network using Dandelion++ stem
fluff, //< Received/sent over network using Dandelion++ fluff
block, //< Received in block, takes precedence over others
}
// the database types are going to be defined in the monero rust library.
pub enum RelayCategory {
broadcasted, //< Public txes received via block/fluff
relayable, //< Every tx not marked `relay_method::none`
legacy, //< `relay_category::broadcasted` + `relay_method::none` for rpc relay requests or historical reasons
all, //< Everything in the db
}
fn matches_category(relay_method: RelayMethod, relay_category: RelayCategory) -> bool {
todo!()
}
/**
* @brief a struct containing output metadata
*/
pub struct output_data_t {
pubkey: monero::util::key::PublicKey, //< the output's public key (for spend verification)
unlock_time: u64, //< the output's unlock time (or height)
height: u64, //< the height of the block which created the output
commitment: monero::util::ringct::Key, //< the output's amount commitment (for spend verification)
}
pub struct tx_data_t {
tx_id: u64,
unlock_time: u64,
block_id: u64,
}
pub struct alt_block_data_t {
height: u64,
cumulative_weight: u64,
cumulative_difficulty_low: u64,
cumulative_difficulty_high: u64,
already_generated_coins: u64,
}
pub struct txpool_tx_meta_t {
max_used_block_id: monero::cryptonote::hash::Hash,
last_failed_id: monero::cryptonote::hash::Hash,
weight: u64,
fee: u64,
max_used_block_height: u64,
last_failed_height: u64,
receive_time: u64,
last_relayed_time: u64, //< If received over i2p/tor, randomized forward time. If Dandelion++stem, randomized embargo time. Otherwise, last relayed timestamp
// 112 bytes
kept_by_block: u8,
relayed: u8,
do_not_relay: u8,
double_spend_seen: u8,
pruned: u8,
is_local: u8,
dandelionpp_stem: u8,
is_forwarding: u8,
bf_padding: u8,
padding: [u8; 76],
}
impl txpool_tx_meta_t {
fn set_relay_method(relay_method: RelayMethod) {}
fn upgrade_relay_method(relay_method: RelayMethod) -> bool {
todo!()
}
/// See `relay_category` description
fn matches(category: RelayCategory) -> bool {
return matches_category(todo!(), category);
}
}
pub enum OBJECT_TYPE {
BLOCK,
BLOCK_BLOB,
TRANSACTION,
TRANSACTION_BLOB,
OUTPUTS,
TXPOOL,
}
#[non_exhaustive] // < to remove
#[allow(dead_code)] // < to remove
#[derive(Error, Debug)]
pub enum DB_FAILURES {
#[error("DB_ERROR: `{0}`. The database is likely corrupted.")]
DB_ERROR(String),
#[error("DB_ERROR_TXN_START: `{0}`. The database failed starting a txn.")]
DB_ERROR_TXN_START(String),
#[error("DB_OPEN_FAILURE: Failed to open the database.")]
DB_OPEN_FAILURE,
#[error("DB_CREATE_FAILURE: Failed to create the database.")]
DB_CREATE_FAILURE,
#[error("DB_SYNC_FAILURE: Failed to sync the database.")]
DB_SYNC_FAILURE,
#[error("BLOCK_DNE: `{0}`. The block requested does not exist")]
BLOCK_DNE(String),
#[error("BLOCK_PARENT_DNE: `{0}` The parent of the block does not exist")]
BLOCK_PARENT_DNE(String),
#[error("BLOCK_EXISTS. The block to be added already exists!")]
BLOCK_EXISTS,
#[error("BLOCK_INVALID: `{0}`. The block to be added did not pass validation!")]
BLOCK_INVALID(String),
#[error("TX_EXISTS. The transaction to be added already exists!")]
TX_EXISTS,
#[error("TX_DNE: `{0}`. The transaction requested does not exist!")]
TX_DNE(String),
#[error("OUTPUTS_EXISTS. The output to be added already exists!")]
OUTPUT_EXISTS,
#[error("OUTPUT_DNE: `{0}`. The output requested does not exist")]
OUTPUT_DNE(String),
#[error("KEY_IMAGE_EXISTS. The spent key image to be added already exists!")]
KEY_IMAGE_EXISTS,
#[error("ARITHMETIC_COUNT: `{0}`. An error occurred due to bad arithmetic/count logic")]
ARITHMETIC_COUNT(String),
#[error("HASH_DNE. ")]
HASH_DNE(Option<Hash>),
}
pub trait KeyValueDatabase {
fn add_data_to_cf<D: ?Sized + Encodable>(cf: &str, data: &D) -> Result<Hash, DB_FAILURES>;
}
pub trait BlockchainDB: KeyValueDatabase {
// supposed to be private
// TODO: understand
//fn remove_block() -> Result<(), DB_FAILURES>; useless as it just delete data from the table. pop_block that use it internally also use remove_transaction to literally delete the block. Can be implemented without
fn add_spent_key() -> Result<(), DB_FAILURES>;
fn remove_spent_key() -> Result<(), DB_FAILURES>;
fn get_tx_amount_output_indices(tx_id: u64, n_txes: usize) -> Vec<Vec<u64>>;
fn has_key_image(img: &monero::blockdata::transaction::KeyImage) -> bool;
fn prune_outputs(amount: u64);
fn get_blockchain_pruning_seed() -> u32;
// variables part.
// uint64_t num_calls = 0; //!< a performance metric
// uint64_t time_blk_hash = 0; //!< a performance metric
// uint64_t time_add_block1 = 0; //!< a performance metric
// uint64_t time_add_transaction = 0; //!< a performance metric
// supposed to be protected
// mutable uint64_t time_tx_exists = 0; //!< a performance metric
// uint64_t time_commit1 = 0; //!< a performance metric
// bool m_auto_remove_logs = true; //!< whether or not to automatically remove old logs
// HardFork* m_hardfork; | protected: int *m_hardfork
// bool m_open;
// mutable epee::critical_section m_synchronization_lock; //!< A lock, currently for when BlockchainLMDB needs to resize the backing db file
// supposed to be public
/* handled by the DB.
fn batch_start(batch_num_blocks: u64, batch_bytes: u64) -> Result<bool,DB_FAILURES>;
fn batch_abort() -> Result<(),DB_FAILURES>;
fn set_batch_transactions() -> Result<(),DB_FAILURES>;
fn block_wtxn_start();
fn block_wtxn_stop();
fn block_wtxn_abort();
fn block_rtxn_start();
fn block_rtxn_stop();
fn block_rtxn_abort();
*/
//fn set_hard_fork(); // (HardFork* hf)
//fn add_block_public() -> Result<u64, DB_FAILURES>;
//fn block_exists(h: monero::cryptonote::hash::Hash, height: u64) -> bool;
// fn tx_exists(h: monero::cryptonote::hash::Hash, tx_id: Option<u64>) -> Result<(),()>; // Maybe error should be DB_FAILURES, not specified in docs
//fn tx_exist(h: monero::cryptonote::hash::Hash, tx: monero::Transaction) -> bool;
//fn get_tx_blob(h: monero::cryptonote::hash::Hash, tx: String) -> bool;
//fn get_pruned_tx_blob(h: monero::cryptonote::hash::Hash, tx: &mut String) -> bool;
//fn update_pruning() -> bool;
//fn check_pruning() -> bool;
//fn get_max_block_size() -> u64; It is never used
//fn add_max_block_size() -> u64; For reason above
//fn for_all_txpool_txes(wat: fn(wat1: &monero::Hash, wat2: &txpool_tx_meta_t, wat3: &String) -> bool, include_blob: bool, category: RelayCategory) -> Result<bool,DB_FAILURES>;
//fn for_all_keys_images(wat: fn(ki: &monero::blockdata::transaction::KeyImage) -> bool) -> Result<bool,DB_FAILURES>;
//fn for_blocks_range(h1: &u64, h2: &u64, wat: fn(u: u64, h: &monero::Hash, blk: &Block) -> bool) -> Result<bool,DB_FAILURES>; // u: u64 should be mut u: u64
//fn for_all_transactions(wat: fn(h: &monero::Hash, tx: &monero::Transaction) -> bool, pruned: bool) -> Result<bool,DB_FAILURES>;
//fn for_all_outputs();
//fn for_all_alt_blocks();
// ------------------------------------------| Blockchain |------------------------------------------------------------
/// `height` fetches the current blockchain height.
///
/// Returns the current blockchain height. In case of failure, a DB_FAILURES will be returned.
///
/// No parameters are required.
fn height(&mut self) -> Result<u64, DB_FAILURES>;
/// `set_hard_fork_version` sets which hardfork version a height is on.
///
/// In case of failure, a `DB_FAILURES` will be returned.
///
/// Parameters:
/// `height`: is the height where the hard fork happens.
/// `version`: is the version of the hard fork.
fn set_hard_fork_version(&mut self);
/// `get_hard_fork_version` checks which hardfork version a height is on.
///
/// In case of failure, a `DB_FAILURES` will be returned.
///
/// Parameters:
/// `height`: is the height to check.
fn get_hard_fork_version(&mut self);
/// May not need to be used
fn fixup(&mut self);
// -------------------------------------------| Outputs |------------------------------------------------------------
/// `add_output` adds an output's data to its storage.
///
/// It internally keeps track of the global output count. The global output count is also used to index outputs based on
/// their order of creation.
///
/// Should return the amount output index. In case of failure, a DB_FAILURES will be returned.
///
/// Parameters:
/// `tx_hash`: is the hash of the transaction where the output comes from.
/// `output`: is the output to store.
/// `index`: is the local output's index (from transaction).
/// `unlock_time`: is the unlock time (height) of the output.
/// `commitment`: is the RingCT commitment of this output.
fn add_output(
&mut self,
tx_hash: &Hash,
output: &Hash,
index: TxOutIndex,
unlock_time: u64,
commitment: RctSig,
) -> Result<u64, DB_FAILURES>;
/// `add_tx_amount_output_indices` store amount output indices for a tx's outputs
///
/// TODO
fn add_tx_amount_output_indices() -> Result<(), DB_FAILURES>;
/// `get_output_key` gets some of an output's data.
///
/// Returns the public key, unlock time, and block height for the output with the given amount and index, collected in a struct.
/// In case of failure, a `DB_FAILURES` will be returned. Precisely, if the output cannot be found, an `OUTPUT_DNE` error will be returned.
/// If any of the required parts for the final struct isn't found, a `DB_ERROR` will be returned.
///
/// Parameters:
/// `amount`: is the corresponding amount of the output
/// `index`: is the output's index (indexed by amount)
/// `include_commitment` : `true` by default.
fn get_output_key(
&mut self,
amount: u64,
index: u64,
include_commitment: bool,
) -> Result<output_data_t, DB_FAILURES>;
/// `get_output_tx_and_index_from_global` gets an output's transaction hash and index from the output's global index.
///
/// Returns a tuple containing the transaction hash and the output index. In case of failure, a `DB_FAILURES` will be returned.
///
/// Parameters:
/// `index`: is the output's global index.
fn get_output_tx_and_index_from_global(&mut self, index: u64) -> Result<TxOutIndex, DB_FAILURES>;
/// `get_output_key_list` gets outputs' metadata from a corresponding collection.
///
/// Returns a collection of outputs' metadata. In case of failure, a `DB_FAILURES` will be returned.
///
/// Parameters:
/// `amounts`: is the collection of amounts corresponding to the requested outputs.
/// `offsets`: is a collection of outputs' indexes (indexed by amount).
/// `allow_partial`: `false` by default.
fn get_output_key_list(
&mut self,
amounts: &Vec<u64>,
offsets: &Vec<u64>,
allow_partial: bool,
) -> Result<Vec<output_data_t>, DB_FAILURES>;
/// `get_output_tx_and_index` gets an output's transaction hash and index.
///
/// Returns a tuple containing the transaction hash and the output index. In case of failure, a `DB_FAILURES` will be returned.
///
/// Parameters:
/// `amount`: is the corresponding amount of the output
/// `index`: is the output's index (indexed by amount)
fn get_output_tx_and_index(&mut self, amount: u64, index: u64) -> Result<TxOutIndex, DB_FAILURES>;
/// `get_num_outputs` fetches the number of outputs of a given amount.
///
/// Returns a count of outputs of the given amount. In case of failure, a `DB_FAILURES` will be returned.
///
/// Parameters:
/// `amount`: is the output amount being looked up.
fn get_num_outputs(amount: &u64) -> Result<u64, DB_FAILURES>;
// -----------------------------------------| Transactions |----------------------------------------------------------
/// `add_transaction` adds the corresponding transaction and its hash to the specified block.
///
/// In case of failure, a DB_FAILURES will be returned. Precisely, a TX_EXISTS will be returned if the
/// transaction to be added already exists in the database.
///
/// Parameters:
/// `blk_hash`: is the hash of the block that contains the transaction
/// `tx`: is obviously the transaction to add
/// `tx_hash`: is the hash of the transaction.
/// `tx_prunable_hash_ptr`: is the hash of the prunable part of the transaction.
fn add_transaction(
&mut self,
blk_hash: &Hash,
tx: Transaction,
tx_hash: &Hash,
tx_prunable_hash_ptr: &Hash,
) -> Result<(), DB_FAILURES>;
/// `add_transaction_data` adds the specified transaction data to its storage.
///
/// It only adds the transaction blob and the tx's metadata, not the collection of outputs.
///
/// Returns the hash of the transaction added. In case of failure, a DB_FAILURES will be returned.
///
/// Parameters:
/// `blk_hash`: is the hash of the block containing the transaction
/// `tx_and_hash`: is a tuple containing the transaction and its hash
/// `tx_prunable_hash`: is the hash of the prunable part of the transaction
fn add_transaction_data(
&mut self,
blk_hash: &Hash,
tx_and_hash: (Transaction, &Hash),
tx_prunable_hash: &Hash,
) -> Result<Hash, DB_FAILURES>;
/// `remove_transaction_data` removes data about a transaction specified by its hash.
///
/// In case of failure, a `DB_FAILURES` will be returned. Precisely, a `TX_DNE` will be returned if the specified transaction can't be found.
///
/// Parameters:
/// `tx_hash`: is the transaction's hash to remove data from.
fn remove_transaction_data(&mut self, tx_hash: &Hash) -> Result<(), DB_FAILURES>;
/// `get_tx_count` fetches the total number of transactions stored in the database.
///
/// Should return the count. In case of failure, a DB_FAILURES will be returned.
///
/// No parameters are required.
fn get_tx_count(&mut self) -> Result<u64, DB_FAILURES>;
/// `tx_exists` checks if a transaction exists with the given hash.
///
/// Returns `true` if the transaction exists, `false` otherwise. In case of failure, a DB_FAILURES will be returned.
///
/// Parameters :
/// `h` is the given hash of transaction to check.
/// `tx_id` is an optional mutable reference to get the transaction id out of the found transaction.
fn tx_exists(&mut self, h: &Hash, tx_id: &mut Option<u64>) -> Result<bool, DB_FAILURES>;
/// `get_tx_unlock_time` fetches a transaction's unlock time/height.
///
/// Should return the unlock time/height as u64. In case of failure, a DB_FAILURES will be returned.
///
/// Parameters:
/// `h`: is the given hash of the transaction to check.
fn get_tx_unlock_time(&mut self, h: &Hash) -> Result<u64, DB_FAILURES>;
/// `get_tx` fetches the transaction with the given hash.
///
/// Should return the transaction. In case of failure, a DB_FAILURES will be returned.
///
/// Parameters:
/// `h`: is the given hash of transaction to fetch.
fn get_tx(&mut self, h: &Hash) -> Result<Transaction, DB_FAILURES>;
/// `get_pruned_tx` fetches the transaction base with the given hash.
///
/// Should return the transaction. In case of failure, a DB_FAILURES will be returned.
///
/// Parameters:
/// `h`: is the given hash of transaction to fetch.
fn get_pruned_tx(&mut self, h: &Hash) -> Result<Transaction, DB_FAILURES>;
/// `get_tx_list` fetches the transactions with given hashes.
///
/// Should return a vector with the requested transactions. In case of failure, a DB_FAILURES will be returned.
/// Precisely, a HASH_DNE error will be returned with the corresponding hash of the transaction that is not found in the DB.
///
/// `hlist`: is the given collection of hashes corresponding to the transactions to fetch.
fn get_tx_list(&mut self, hlist: &Vec<Hash>) -> Result<Vec<monero::Transaction>, DB_FAILURES>;
/// `get_tx_blob` fetches the transaction blob with the given hash.
///
/// Should return the transaction blob. In case of failure, a DB_FAILURES will be returned.
///
/// Parameters:
/// `h`: is the given hash of the transaction to fetch.
fn get_tx_blob(&mut self, h: &Hash) -> Result<Blobdata, DB_FAILURES>;
/// `get_pruned_tx_blob` fetches the pruned transaction blob with the given hash.
///
/// Should return the transaction blob. In case of failure, a DB_FAILURES will be returned.
///
/// Parameters:
/// `h`: is the given hash of the transaction to fetch.
fn get_pruned_tx_blob(&mut self, h: &Hash) -> Result<Blobdata, DB_FAILURES>;
/// `get_prunable_tx_blob` fetches the prunable transaction blob with the given hash.
///
/// Should return the transaction blob. In case of failure, a DB_FAILURES will be returned.
///
/// Parameters:
/// `h`: is the given hash of the transaction to fetch.
fn get_prunable_tx_blob(&mut self, h: &Hash) -> Result<Blobdata, DB_FAILURES>;
/// `get_prunable_tx_hash` fetches the prunable transaction hash
///
/// Should return the hash of the prunable transaction data. In case of failure, a DB_FAILURES will be returned.
///
/// Parameters:
/// `tx_hash`: is the given hash of the transaction to fetch.
fn get_prunable_tx_hash(&mut self, tx_hash: &Hash) -> Result<Hash, DB_FAILURES>;
/// `get_pruned_tx_blobs_from` fetches a number of pruned transaction blobs from the given hash, in canonical blockchain order.
///
/// Should return the pruned transactions stored starting from the one with the given hash. In case of failure, a DB_FAILURES will be returned.
/// Precisely, an ARITHMETIC_COUNT error will be returned if the first transaction does not exist or there are fewer transactions than the count.
///
/// Parameters:
/// `h`: is the given hash of the first transaction.
/// `count`: is the number of transactions to fetch in canonical blockchain order.
fn get_pruned_tx_blobs_from(&mut self, h: &Hash, count: usize) -> Result<Vec<Blobdata>, DB_FAILURES>;
/// `get_tx_block_height` fetches the height of a transaction's block.
///
/// Should return the height of the block containing the transaction with the given hash. In case
/// of failure, a DB_FAILURES will be returned. Precisely, a TX_DNE error will be returned if the transaction cannot be found.
///
/// Parameters:
/// `h`: is the given hash of the transaction
fn get_tx_block_height(&mut self, h: &Hash) -> Result<u64, DB_FAILURES>;
// -----------------------------------------| Blocks |----------------------------------------------------------
/// `add_block` adds the block and its metadata to the db.
///
/// In case of failure, a `DB_FAILURES` will be returned. Precisely, a BLOCK_EXISTS error will be returned if
/// the block to be added already exists. A BLOCK_INVALID will be returned if the block to be added did not pass validation.
///
/// Parameters:
/// `blk`: is the block to be added
/// `block_weight`: is the weight of the block (data's total)
/// `long_term_block_weight`: is the long term weight of the block (data's total)
/// `cumulative_difficulty`: is the accumulated difficulty at this block.
/// `coins_generated` is the number of coins generated after this block.
/// `blk_hash`: is the hash of the block.
fn add_block(
blk: Block,
blk_hash: Hash,
block_weight: u64,
long_term_block_weight: u64,
cumulative_difficulty: u128,
coins_generated: u64,
) -> Result<(), DB_FAILURES>;
/// `pop_block` pops the top block off the blockchain.
///
/// Returns the block that was popped. In case of failure, a `DB_FAILURES` will be returned.
///
/// No parameters are required.
fn pop_block(&mut self) -> Result<Block, DB_FAILURES>;
/// `block_exists` checks if the given block exists.
///
/// Returns `true` if the block exists, `false` otherwise. In case of failure, a `DB_FAILURES` will be returned.
///
/// Parameters:
/// `h`: is the given hash of the requested block.
fn block_exists(&mut self, h: &Hash) -> Result<bool, DB_FAILURES>;
/// `get_block` fetches the block with the given hash.
///
/// Returns the requested block. In case of failure, a `DB_FAILURES` will be returned. Precisely, a `BLOCK_DNE`
/// error will be returned if the requested block can't be found.
///
/// Parameters:
/// `h`: is the given hash of the requested block.
fn get_block(&mut self, h: &Hash) -> Result<Block, DB_FAILURES>;
/// `get_block_from_height` fetches the block located at the given height.
///
/// Returns the requested block. In case of failure, a `DB_FAILURES` will be returned. Precisely, a `BLOCK_DNE`
/// error will be returned if the requested block can't be found.
///
/// Parameters:
/// `height`: is the height where the requested block is located.
fn get_block_from_height(&mut self, height: u64) -> Result<Block, DB_FAILURES>;
/// `get_blocks_from_range` fetches the blocks located in the specified height range.
///
/// Returns the requested blocks. In case of failure, a `DB_FAILURES` will be returned. Precisely, a `BLOCK_DNE`
/// error will be returned if at least one requested block can't be found. If the requested range extends past the end of the blockchain,
/// an `ARITHMETIC_COUNT` error will be returned.
///
/// Parameters:
/// `height_range`: is the range of height where the requested blocks are located.
fn get_blocks_from_range(&mut self, height_range: Range<u64>) -> Result<Vec<Block>, DB_FAILURES>;
/// `get_block_blob` fetches the block blob with the given hash.
///
/// Returns the requested block blob. In case of failure, a `DB_FAILURES` will be returned. Precisely, a `BLOCK_DNE`
/// error will be returned if the requested block can't be found.
///
/// Parameters:
/// `h`: is the given hash of the requested block.
fn get_block_blob(&mut self, h: &Hash) -> Result<Blobdata, DB_FAILURES>;
/// `get_block_blob_from_height` fetches the block blob located at the given height in the blockchain.
///
/// Returns the requested block blob. In case of failure, a `DB_FAILURES` will be returned. Precisely, a `BLOCK_DNE`
/// error will be returned if the requested block can't be found.
///
/// Parameters:
/// `height`: is the given height of the corresponding block blob to fetch.
fn get_block_blob_from_height(&mut self, height: u64) -> Result<Blobdata, DB_FAILURES>;
/// `get_block_header` fetches the block's header with the given hash.
///
/// Returns the requested block header. In case of failure, a `DB_FAILURES` will be returned. Precisely, a `BLOCK_DNE`
/// error will be returned if the requested block can't be found.
///
/// Parameters:
/// `h`: is the given hash of the requested block.
fn get_block_header(&mut self, h: &Hash) -> Result<BlockHeader, DB_FAILURES>;
/// `get_block_hash_from_height` fetches the block's hash located at the given height.
///
/// Returns the hash of the block at the given height. In case of failure, a DB_FAILURES will be returned. Precisely, a `BLOCK_DNE`
/// error will be returned if the requested block can't be found.
///
/// Parameters:
/// `height`: is the given height where the requested block is located.
fn get_block_hash_from_height(&mut self, height: u64) -> Result<Hash, DB_FAILURES>;
/// `get_blocks_hashes_from_range` fetches the hashes of the blocks in the given height range.
///
/// Returns a collection of hashes corresponding to the scoped blocks. In case of failure, a DB_FAILURES will be returned. Precisely, a `BLOCK_DNE`
/// error will be returned if at least one of the requested blocks can't be found.
///
/// Parameters:
/// `range`: is the range of heights where the requested blocks are located.
fn get_blocks_hashes_from_range(&mut self, range: Range<u64>) -> Result<Vec<Hash>, DB_FAILURES>;
/// `get_top_block` fetches the last/top block of the blockchain.
///
/// Returns the last/top block of the blockchain. In case of failure, a DB_FAILURES will be returned.
///
/// No parameters are required.
fn get_top_block(&mut self) -> Block;
/// `get_top_block_hash` fetches the hash of the block located at the top of the blockchain (the last one).
///
/// Returns the hash of the last block. In case of failure, a DB_FAILURES will be returned.
///
/// No parameters are required.
fn get_top_block_hash(&mut self) -> Result<Hash, DB_FAILURES>;
// ! TODO: redefine the result & docs. see what could be improved. Do we really need this function?
/// `get_blocks_from` fetches a variable number of blocks and transactions from the given height, in canonical blockchain order as long as it meets the parameters.
///
/// Should return the blocks stored starting from the given height. The number of blocks returned is variable, based on the max_size defined. There will be at least `min_block_count`
/// blocks if possible, even if this contravenes max_tx_count. In case of failure, a `DB_FAILURES` error will be returned.
///
/// Parameters:
/// `start_height`: is the given height to start from.
/// `min_block_count`: is the minimum number of blocks to return. If there are fewer blocks, it'll return fewer blocks than the minimum.
/// `max_block_count`: is the maximum number of blocks to return.
/// `max_size`: is the maximum size of block/transaction data to return (can be exceeded one time if min_count is met).
/// `max_tx_count`: is the maximum number of txes to return.
/// `pruned`: is whether to return full or pruned tx data.
/// `skip_coinbase`: is whether to return or skip coinbase transactions (they're in blocks regardless).
/// `get_miner_tx_hash`: is whether to calculate and return the miner (coinbase) tx hash.
fn get_blocks_from(
&mut self,
start_height: u64,
min_block_count: u64,
max_block_count: u64,
max_size: usize,
max_tx_count: u64,
pruned: bool,
skip_coinbase: bool,
get_miner_tx_hash: bool,
) -> Result<Vec<((String, Hash), Vec<(Hash, String)>)>, DB_FAILURES>;
/// `get_block_height` gets the height of the block with a given hash
///
/// Return the requested height.
fn get_block_height(&mut self, h: &Hash) -> Result<u64, DB_FAILURES>;
/// `get_block_weight` fetches the weight of the block located at the given height.
///
/// Return the requested block weight. In case of failure, a `DB_FAILURES` will be returned. Precisely, a `BLOCK_DNE`
/// error will be returned if the requested block can't be found.
///
/// Parameters:
/// `height`: is the given height where the requested block is located.
fn get_block_weight(&mut self, height: u64) -> Result<u64, DB_FAILURES>;
/// `get_block_weights` fetches the last `count` blocks' weights.
///
/// Return a collection of weights. In case of failure, a `DB_FAILURES` will be returned. Precisely, an `ARITHMETIC_COUNT`
/// error will be returned if there are fewer than `count` blocks.
///
/// Parameters:
/// `start_height`: is the height to seek before collecting block weights.
/// `count`: is the number of last blocks' weights to fetch.
fn get_block_weights(&mut self, start_height: u64, count: usize) -> Result<Vec<u64>, DB_FAILURES>;
/// `get_block_already_generated_coins` fetches a block's already generated coins.
///
/// Return the total coins generated as of the block with the given height. In case of failure, a `DB_FAILURES` will be returned. Precisely, a `BLOCK_DNE`
/// error will be returned if the requested block can't be found.
///
/// Parameters:
/// `height`: is the given height of the block to seek.
fn get_block_already_generated_coins(&mut self, height: u64) -> Result<u64, DB_FAILURES>;
/// `get_block_long_term_weight` fetches a block's long term weight.
///
/// Should return the block's long term weight. In case of failure, a `DB_FAILURES` will be returned. Precisely, a `BLOCK_DNE`
/// error will be returned if the requested block can't be found.
///
/// Parameters:
/// `height`: is the given height where the requested block is located.
fn get_block_long_term_weight(&mut self, height: u64) -> Result<u64, DB_FAILURES>;
/// `get_long_term_block_weights` fetches the last `count` blocks' long term weights.
///
/// Should return a collection of blocks' long term weights. In case of failure, a `DB_FAILURES` will be returned. Precisely, a `BLOCK_DNE`
/// error will be returned if a requested block can't be found. If there are fewer than `count` blocks, the returned collection will be
/// smaller than `count`.
///
/// Parameters:
/// `start_height`: is the height to seek before collecting block weights.
/// `count`: is the number of last blocks' long term weights to fetch.
fn get_long_term_block_weights(&mut self, start_height: u64, count: usize) -> Result<Vec<u64>, DB_FAILURES>;
/// `get_block_timestamp` fetches a block's timestamp.
///
/// Should return the timestamp of the block with the given height. In case of failure, a `DB_FAILURES` will be returned. Precisely, a `BLOCK_DNE`
/// error will be returned if the requested block can't be found.
///
/// Parameters:
/// `height`: is the given height of the block whose timestamp is requested.
fn get_block_timestamp(&mut self, height: u64) -> Result<u64, DB_FAILURES>;
/// `get_block_cumulative_rct_outputs` fetches blocks' cumulative number of RingCT outputs.
///
/// Should return the number of RingCT outputs in the blockchain up to the blocks located at the given heights. In case of failure, a `DB_FAILURES` will be returned. Precisely, a `BLOCK_DNE`
/// error will be returned if a requested block can't be found.
///
/// Parameters:
/// `heights`: is the collection of heights to check the RingCT distribution at.
fn get_block_cumulative_rct_outputs(&mut self, heights: Vec<u64>) -> Result<Vec<u64>, DB_FAILURES>;
/// `get_top_block_timestamp` fetches the top block's timestamp.
///
/// Should return the timestamp of the most recent block. In case of failure, a `DB_FAILURES` will be returned.
///
/// No parameters are required.
fn get_top_block_timestamp(&mut self) -> Result<u64, DB_FAILURES>;
/// `correct_block_cumulative_difficulties` corrects blocks' cumulative difficulties that were incorrectly calculated due to the 'difficulty drift' bug.
///
/// Should return nothing. In case of failure, a `DB_FAILURES` will be returned. Precisely, a `BLOCK_DNE`
/// error will be returned if a requested block can't be found.
///
/// Parameters:
/// `start_height`: is the height of the block where the drift starts.
/// `new_cumulative_difficulties`: is the collection of new cumulative difficulties to be stored.
fn correct_block_cumulative_difficulties(
&mut self,
start_height: u64,
new_cumulative_difficulties: Vec<difficulty_type>,
) -> Result<(), DB_FAILURES>;
// --------------------------------------------| Alt-Block |------------------------------------------------------------
/// `add_alt_block` adds a new alternative block.
///
/// In case of failure, a `DB_FAILURES` will be returned.
///
/// Parameters:
/// `blkid`: is the hash of the original block.
/// `data`: is the metadata for the block.
/// `blob`: is the blobdata of this alternative block.
fn add_alt_block(&mut self, blkid: &Hash, data: &alt_block_data_t, blob: &Blobdata) -> Result<(), DB_FAILURES>;
/// `get_alt_block` gets the specified alternative block.
///
/// Return a tuple containing the blobdata of the alternative block and its metadata. In case of failure, a `DB_FAILURES` will be returned.
///
/// Parameters:
/// `blkid`: is the hash of the requested alternative block.
fn get_alt_block(&mut self, blkid: &Hash) -> Result<(alt_block_data_t, Blobdata), DB_FAILURES>;
/// `remove_alt_block` removes the specified alternative block.
///
/// In case of failure, a `DB_FAILURES` will be returned.
///
/// Parameters:
/// `blkid`: is the hash of the alternative block to remove.
fn remove_alt_block(&mut self, blkid: &Hash) -> Result<(), DB_FAILURES>;
/// `get_alt_block_count` gets the total number of alternative blocks stored.
///
/// In case of failure, a `DB_FAILURES` will be returned.
///
/// No parameters are required.
fn get_alt_block_count(&mut self) -> Result<u64, DB_FAILURES>;
/// `drop_alt_blocks` drops all alternative blocks.
///
/// In case of failure, a `DB_FAILURES` will be returned.
///
/// No parameters are required.
fn drop_alt_blocks(&mut self) -> Result<(), DB_FAILURES>;
// --------------------------------------------| TxPool |------------------------------------------------------------
/// `add_txpool_tx` adds a pool transaction to the database.
///
/// In case of failure, a `DB_FAILURES` will be returned.
///
/// Parameters:
/// `txid`: is the hash of the transaction to add.
/// `blob`: is the blobdata of the transaction to add.
/// `details`: is the metadata of the transaction pool at this specific transaction.
fn add_txpool_tx(&mut self, txid: &Hash, blob: &BlobdataRef, details: &txpool_tx_meta_t)
-> Result<(), DB_FAILURES>;
/// `update_txpool_tx` replaces a pool transaction's metadata.
///
/// In case of failure, a `DB_FAILURES` will be returned.
///
/// Parameters:
/// `txid`: is the hash of the transaction to edit.
/// `details`: is the new metadata to insert.
fn update_txpool_tx(&mut self, txid: &monero::Hash, details: &txpool_tx_meta_t) -> Result<(), DB_FAILURES>;
/// `get_txpool_tx_count` gets the number of transactions in the txpool.
///
/// Return the number of transactions in the txpool. In case of failure, a `DB_FAILURES` will be returned.
///
/// Parameters:
/// `tx_category`: is the relay category the txes come from (`RelayCategory::broadcasted` by default).
fn get_txpool_tx_count(&mut self, tx_category: RelayCategory) -> Result<u64, DB_FAILURES>;
/// `txpool_has_tx` checks if the specified transaction exists in the transaction pool and if it belongs
/// to the specified category.
///
/// Return `true` if the conditions above are met, `false` otherwise. In case of failure, a `DB_FAILURES` will be returned.
///
/// Parameters:
/// `txid`: is the hash of the transaction to check for.
/// `tx_category`: is the relay category the tx is supposed to come from.
fn txpool_has_tx(&mut self, txid: &Hash, tx_category: &RelayCategory) -> Result<bool, DB_FAILURES>;
/// `remove_txpool_tx` removes the specified transaction from the transaction pool.
///
/// In case of failure, a `DB_FAILURES` will be returned.
///
/// Parameters:
/// `txid`: is the hash of the transaction to remove.
fn remove_txpool_tx(&mut self, txid: &Hash) -> Result<(), DB_FAILURES>;
/// `get_txpool_tx_meta` gets the transaction pool metadata recorded for the specified transaction.
///
/// In case of failure, a `DB_FAILURES` will be returned.
///
/// Parameters:
/// `txid`: is the hash of the transaction whose metadata is requested.
fn get_txpool_tx_meta(&mut self, txid: &Hash) -> Result<txpool_tx_meta_t, DB_FAILURES>;
/// `get_txpool_tx_blob` gets a txpool transaction's blob.
///
/// In case of failure, a `DB_FAILURES` will be returned.
///
/// Parameters:
/// `txid`: is the hash of the transaction to fetch blobdata from.
/// `tx_category`: is the relay category the tx comes from (monerod note: for filtering out hidden/private txes).
fn get_txpool_tx_blob(&mut self, txid: &Hash, tx_category: RelayCategory) -> Result<Blobdata, DB_FAILURES>;
/// `txpool_tx_matches_category` checks if the corresponding transaction belongs to the specified category.
///
/// Return `true` if the transaction belongs to the category, `false` otherwise. In case of failure, a `DB_FAILURES` will be returned.
///
/// Parameters:
/// `tx_hash`: is the hash of the transaction to look up.
/// `category`: is the relay category to check.
fn txpool_tx_matches_category(&mut self, tx_hash: &Hash, category: RelayCategory) -> Result<bool, DB_FAILURES>;
}
// functions defined as useless : init_options(), is_open(), reset_stats(), show_stats(), open(), close(), get_output_histogram(), safesyncmode, get_filenames(), get_db_name(), remove_data_file(), lock(), unlock(), is_read_only(), get_database_size(), get_output_distribution(), set_auto_remove_logs(), check_hard_fork_info(), drop_hard_fork_info(), get_indexing_base();

// Copyright (C) 2023 Cuprate Contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//!
//! RocksDB implementation.
//!
//! Database Schema:
//! ---------------------------------------
//! Column | Key | Data
//! ---------------------------------------
//! *block*------------------------------------------------------
//!
//! blocks height {blob}
//! heights hash height
//! b_metadata height {b_metdata}
//!
//! *transactions*-----------------------------------------------
//!
//! tx_prefix tx ID {blob}
//! tx_prunable tx ID {blob}
//! tx_hash tx ID hash
//! tx_opti_h hash height
//! tx_outputs tx ID {amount,output,indices}
//!
//! *outputs*----------------------------------------------------
//!
//! ouputs_txs op ID {tx hash, l_index}
//! outputs_am amount {amount output index, metdata}
//!
//! *spent keys*--------------------------------------------------
//!
//! spent_keys hash well... obvious?
//!
//! *tx pool*------------------------------------------------------
//!
//! txp_meta hash {txp_metadata}
//! txp_blob hash {blob}
//!
//! *alt blocks*----------------------------------------------------
//!
//! alt_blocks hash {bock data, block blob}
// Defining tables
const CF_BLOCKS: &str = "blocks";
const CF_HEIGHTS: &str = "heights";
const CF_BLOCK_METADATA: &str = "b_metadata";
const CF_TX_PREFIX: &str = "tx_prefix";
const CF_TX_PRUNABLE: &str = "tx_prunable";
const CF_TX_HASH: &str = "tx_hash";
const CF_TX_OPTI_H: &str = "tx_opti_h";
const CF_TX_OUTPUTS: &str = "tx_outputs";
const CF_OUTPUTS_TXS: &str = "outputs_txs";

cuprate/Cargo.toml
[package]
name = "cuprate-bin"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
clap = { version = "*", features = [] }
clap_complete = "*"
tracing = { workspace = true }
tracing-subscriber = { workspace = true }

cuprate/src/cli.rs
use crate::CUPRATE_VERSION;
use clap::{value_parser, Arg, ArgAction, ArgGroup, ArgMatches, Command};
use tracing::{event, span, Level, Span};
/// This function simply contains clap arguments
pub fn args() -> ArgMatches {
Command::new("Cuprate")
.version(CUPRATE_VERSION)
.author("Cuprate's contributors")
.about("An upcoming experimental, modern, and secure monero node")
// Generic Arguments
.arg(
Arg::new("log")
.long("log-level")
.value_name("Level")
.help("Set the log level")
.value_parser(value_parser!(u8))
.default_value("1")
.long_help("Set the log level. There are 3 log levels: 1~INFO, 2~DEBUG, >2~TRACE.")
.required(false)
.action(ArgAction::Set),
)
.get_matches()
}
/// This function initializes the FmtSubscriber used by tracing to display events in the console. It sends back a span used during runtime.
pub fn init(matches: &ArgMatches) -> Span {
// Getting the log level from args
let log_level = matches.get_one::<u8>("log").unwrap();
let level_filter = match log_level {
2 => Level::DEBUG,
x if x > &2 => Level::TRACE,
_ => Level::INFO,
};
// Initializing tracing subscriber and runtime span
let subscriber = tracing_subscriber::FmtSubscriber::builder()
.with_max_level(level_filter)
.with_target(false)
.finish();
tracing::subscriber::set_global_default(subscriber).expect("Failed to set global subscriber for tracing. We prefer to abort the node since without it you have no output in the console");
let runtime_span = span!(Level::INFO, "Runtime");
let _guard = runtime_span.enter();
// Notifying log level
event!(Level::INFO, "Log level set to {}", level_filter);
drop(_guard);
runtime_span
}

cuprate/src/main.rs
use tracing::{event, info, span, Level};
pub mod cli;
const CUPRATE_VERSION: &str = "0.1.0";
fn main() {
// Collecting options
let matches = cli::args();
// Initializing tracing subscriber and runtime span
let _runtime_span = cli::init(&matches);
}

database/Cargo.toml
[package]
name = "cuprate-database"
version = "0.0.1"
edition = "2021"
license = "AGPL-3.0-only"
# All Contributors on github
authors=[
"SyntheticBird45 <@someoneelse495495:matrix.org>",
"Boog900"
]
[features]
mdbx = ["dep:libmdbx"]
hse = []
[dependencies]
monero = {workspace = true, features = ["serde"]}
tiny-keccak = { version = "2.0", features = ["sha3"] }
serde = { workspace = true}
thiserror = {workspace = true }
bincode = { workspace = true }
libmdbx = { version = "0.3.1", optional = true }
[build]
linker="clang"
rustflags=[
"-Clink-arg=-fuse-ld=mold",
"-Zcf-protection=full",
"-Zsanitizer=cfi",
"-Crelocation-model=pie",
"-Cstack-protector=all",
]

database/src/encoding.rs
//! ### Encoding module
//! The encoding module contains a trait that permits compatibility between `monero-rs` consensus encoding/decoding logic and `bincode` traits.
//! The database tables only accept types that implement [`bincode::Encode`] and [`bincode::Decode`], and since we can't implement these on `monero-rs` types directly,
//! we use a wrapper struct `Compat<T>` that permits us to use `monero-rs`'s `consensus_encode`/`consensus_decode` functions under the bincode traits.
//! The choice of `bincode` comes from performance measurements at encoding: the `bincode` implementations were sometimes 5 times faster than the `monero-rs` impl.
use bincode::{de::read::Reader, enc::write::Writer};
use monero::consensus::{Decodable, Encodable};
use std::{fmt::Debug, io::Read, ops::Deref};
#[derive(Debug, Clone)]
/// A single-tuple struct used to contain monero-rs types that implement [`monero::consensus::Encodable`] and [`monero::consensus::Decodable`]
pub struct Compat<T: Encodable + Decodable>(pub T);
/// A wrapper around a [`bincode::de::read::Reader`] type. Permits us to use [`std::io::Read`] and feed monero-rs functions with an actual `&[u8]`
pub struct ReaderCompat<'src, R: Reader>(pub &'src mut R);
// Actual implementation of `std::io::Read` for `bincode`'s `Reader` types
impl<'src, R: Reader> Read for ReaderCompat<'src, R> {
fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
self.0
.read(buf)
.map_err(|_| std::io::Error::new(std::io::ErrorKind::Other, "bincode reader Error"))?;
Ok(buf.len())
}
}
// Convenient implementation. `Deref` and `From`
impl<T: Encodable + Decodable> Deref for Compat<T> {
type Target = T;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl<T: Encodable + Decodable> From<T> for Compat<T> {
fn from(value: T) -> Self {
Compat(value)
}
}
// TODO: Investigate specialization optimization
// Implementation of `bincode::Decode` for monero-rs `Decodable` type
impl<T: Encodable + Decodable + Debug> bincode::Decode for Compat<T> {
fn decode<D: bincode::de::Decoder>(
decoder: &mut D,
) -> Result<Self, bincode::error::DecodeError> {
Ok(Compat(
Decodable::consensus_decode(&mut ReaderCompat(decoder.reader()))
.map_err(|_| bincode::error::DecodeError::Other("Monero-rs decoding failed"))?,
))
}
}
// Implementation of `bincode::BorrowDecode` for monero-rs `Decodable` type
impl<'de, T: Encodable + Decodable + Debug> bincode::BorrowDecode<'de> for Compat<T> {
fn borrow_decode<D: bincode::de::BorrowDecoder<'de>>(
decoder: &mut D,
) -> Result<Self, bincode::error::DecodeError> {
Ok(Compat(
Decodable::consensus_decode(&mut ReaderCompat(decoder.borrow_reader()))
.map_err(|_| bincode::error::DecodeError::Other("Monero-rs decoding failed"))?,
))
}
}
// Implementation of `bincode::Encode` for monero-rs `Encodable` type
impl<T: Encodable + Decodable + Debug> bincode::Encode for Compat<T> {
fn encode<E: bincode::enc::Encoder>(
&self,
encoder: &mut E,
) -> Result<(), bincode::error::EncodeError> {
let writer = encoder.writer();
let buf = monero::consensus::serialize(&self.0);
writer.write(&buf)
}
}

database/src/error.rs
//! ### Error module
//! This module contains all the error abstractions used by the database crate. By implementing [`From<E>`] for the specific errors of the storage engine crates, it lets us
//! handle any error that can happen more easily. This module does **NOT** contain the interpretation of these errors, as those are defined for the Blockchain abstraction. This is another difference
//! from monerod, which interprets these errors directly in its database functions:
//! ```cpp
//! /**
//! * @brief A base class for BlockchainDB exceptions
//! */
//! class DB_EXCEPTION : public std::exception
//! ```
//! see `blockchain_db/blockchain_db.h` in monerod `src/` folder for more details.
#[derive(thiserror::Error, Debug)]
/// `DB_FAILURES` is an enum for backend-agnostic, internal database errors. The `From` trait must be implemented for the specific backend errors to be mapped into DB_FAILURES.
pub enum DB_FAILURES {
#[error("MDBX returned an error {0}")]
MDBX_Error(#[from] libmdbx::Error),
#[error("\n<DB_FAILURES::EncodingError> Failed to encode some data : `{0}`")]
SerializeIssue(DB_SERIAL),
#[error("\nObject already exists in the database : {0}")]
AlreadyExist(&'static str),
#[error("NotFound? {0}")]
NotFound(&'static str),
#[error("\n<DB_FAILURES::Other> `{0}`")]
Other(&'static str),
#[error(
"\n<DB_FAILURES::FailedToCommit> A transaction tried to commit to the db, but failed."
)]
FailedToCommit,
}
#[derive(thiserror::Error, Debug)]
pub enum DB_SERIAL {
#[error("An object failed to be serialized into bytes. It is likely an issue from monero-rs library. Please report this error on cuprate's github : https://github.com/Cuprate/cuprate/issues")]
ConsensusEncode,
#[error("Bytes failed to be deserialized into the requested object. It is likely an issue from monero-rs library. Please report this error on cuprate's github : https://github.com/Cuprate/cuprate/issues")]
ConsensusDecode(Vec<u8>),
#[error("monero-rs encoding|decoding logic failed : {0}")]
MoneroEncode(#[from] monero::consensus::encode::Error),
#[error("Bincode failed to decode a type from the database : {0}")]
BincodeDecode(#[from] bincode::error::DecodeError),
#[error("Bincode failed to encode a type for the database : {0}")]
BincodeEncode(#[from] bincode::error::EncodeError),
}

database/src/hse.rs
/* There is nothing here as no wrapper exists for HSE yet */
/* KVS supported functions :
-------------------------------------
hse_kvs_delete
hse_kvs_get
hse_kvs_name_get
hse_kvs_param_get
hse_kvs_prefix_delete
hse_kvs_put
*/

database/src/interface.rs (diff too large to display)

database/src/lib.rs
// Copyright (C) 2023 Cuprate Contributors
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
//! The cuprate-db crate implements (as its name suggests) the relations between the blockchain/txpool objects and their databases.
//! `lib.rs` contains all the generics, traits and specifications for interfaces between the blockchain and a backend-agnostic database.
//! Every other file in this folder is an implementation of these traits/methods for a real storage engine.
//!
//! At the moment, the only storage engine available is MDBX.
//! The next storage engine planned is HSE (Heterogeneous-Memory Storage Engine) from Micron.
//!
//! For more information, please consult the docs:
#![deny(unused_attributes)]
#![forbid(unsafe_code)]
#![allow(non_camel_case_types)]
#![deny(clippy::expect_used, clippy::panic)]
#![allow(dead_code, unused_macros)] // temporary
use monero::{util::ringct::RctSig, Block, BlockHeader, Hash};
use std::ops::Range;
use thiserror::Error;
#[cfg(feature = "mdbx")]
pub mod mdbx;
//#[cfg(feature = "hse")]
//pub mod hse;
pub mod encoding;
pub mod error;
pub mod interface;
pub mod table;
pub mod types;
const DEFAULT_BLOCKCHAIN_DATABASE_DIRECTORY: &str = "blockchain";
const DEFAULT_TXPOOL_DATABASE_DIRECTORY: &str = "txpool_mem";
const BINCODE_CONFIG: bincode::config::Configuration<
bincode::config::LittleEndian,
bincode::config::Fixint,
> = bincode::config::standard().with_fixed_int_encoding();
// ------------------------------------------| Database |------------------------------------------
pub mod database {
//! This module contains the Database abstraction trait. Any key/value storage engine implemented needs
//! to fulfill these associated types and functions in order to be usable. This module also contains the
//! Interface struct which is used by the DB Reactor to interact with the database.
use crate::{
error::DB_FAILURES,
transaction::{Transaction, WriteTransaction},
};
use std::{ops::Deref, path::PathBuf, sync::Arc};
/// The `Database` trait implements all the methods necessary to generate transactions as well as execute specific functions. It also implements generic associated types to identify the
/// different transaction modes (read & write) and its native errors.
pub trait Database<'a> {
type TX: Transaction<'a>;
type TXMut: WriteTransaction<'a>;
type Error: Into<DB_FAILURES>;
// Create a transaction from the database
fn tx(&'a self) -> Result<Self::TX, Self::Error>;
// Create a mutable transaction from the database
fn tx_mut(&'a self) -> Result<Self::TXMut, Self::Error>;
// Open a database from the specified path
fn open(path: PathBuf) -> Result<Self, Self::Error>
where
Self: std::marker::Sized;
// Check if the database is built.
fn check_all_tables_exist(&'a self) -> Result<(), Self::Error>;
// Build the database
fn build(&'a self) -> Result<(), Self::Error>;
}
/// `Interface` is a struct containing a shared pointer to the database and the transaction to be used by the methods implemented on Interface.
pub struct Interface<'a, D: Database<'a>> {
pub db: Arc<D>,
pub tx: Option<<D as Database<'a>>::TXMut>,
}
// Convenient implementations for database
impl<'service, D: Database<'service>> Interface<'service, D> {
fn from(db: Arc<D>) -> Result<Self, DB_FAILURES> {
Ok(Self { db, tx: None })
}
fn open(&'service mut self) -> Result<(), DB_FAILURES> {
let tx = self.db.tx_mut().map_err(Into::into)?;
self.tx = Some(tx);
Ok(())
}
}
impl<'service, D: Database<'service>> Deref for Interface<'service, D> {
type Target = <D as Database<'service>>::TXMut;
fn deref(&self) -> &Self::Target {
return self.tx.as_ref().unwrap();
}
}
}
// ------------------------------------------| DatabaseTx |------------------------------------------
pub mod transaction {
//! This module contains the abstractions of Transactional Key/Value database functions.
//! Any key/value database/storage engine can be implemented easily for Cuprate as long as
//! these functions or equivalent logic exist for it.
use crate::{
error::DB_FAILURES,
table::{DupTable, Table},
};
// Abstraction of a read-only cursor, for simple tables
#[allow(clippy::type_complexity)]
pub trait Cursor<'t, T: Table> {
fn first(&mut self) -> Result<Option<(T::Key, T::Value)>, DB_FAILURES>;
fn get_cursor(&mut self) -> Result<Option<(T::Key, T::Value)>, DB_FAILURES>;
fn last(&mut self) -> Result<Option<(T::Key, T::Value)>, DB_FAILURES>;
fn next(&mut self) -> Result<Option<(T::Key, T::Value)>, DB_FAILURES>;
fn prev(&mut self) -> Result<Option<(T::Key, T::Value)>, DB_FAILURES>;
fn set(&mut self, key: &T::Key) -> Result<Option<T::Value>, DB_FAILURES>;
}
// Abstraction of a read-only cursor with support for duplicated tables. DupCursor inherits Cursor methods, as
// a duplicated table can be treated as a simple table.
#[allow(clippy::type_complexity)]
pub trait DupCursor<'t, T: DupTable>: Cursor<'t, T> {
fn first_dup(&mut self) -> Result<Option<(T::SubKey, T::Value)>, DB_FAILURES>;
fn get_dup(
&mut self,
key: &T::Key,
subkey: &T::SubKey,
) -> Result<Option<T::Value>, DB_FAILURES>;
fn last_dup(&mut self) -> Result<Option<(T::SubKey, T::Value)>, DB_FAILURES>;
fn next_dup(&mut self) -> Result<Option<(T::Key, (T::SubKey, T::Value))>, DB_FAILURES>;
fn prev_dup(&mut self) -> Result<Option<(T::Key, (T::SubKey, T::Value))>, DB_FAILURES>;
}
// Abstraction of a read-write cursor, for simple tables. WriteCursor inherits Cursor methods.
pub trait WriteCursor<'t, T: Table>: Cursor<'t, T> {
fn put_cursor(&mut self, key: &T::Key, value: &T::Value) -> Result<(), DB_FAILURES>;
fn del(&mut self) -> Result<(), DB_FAILURES>;
}
// Abstraction of a read-write cursor with support for duplicated tables. DupWriteCursor inherits DupCursor and WriteCursor methods.
pub trait DupWriteCursor<'t, T: DupTable>: WriteCursor<'t, T> {
fn put_cursor_dup(
&mut self,
key: &T::Key,
subkey: &T::SubKey,
value: &T::Value,
) -> Result<(), DB_FAILURES>;
/// Delete all values associated with the current key.
fn del_nodup(&mut self) -> Result<(), DB_FAILURES>;
}
// Abstraction of a read-only transaction.
pub trait Transaction<'a>: Send + Sync {
type Cursor<T: Table>: Cursor<'a, T>;
type DupCursor<T: DupTable>: DupCursor<'a, T> + Cursor<'a, T>;
fn get<T: Table>(&self, key: &T::Key) -> Result<Option<T::Value>, DB_FAILURES>;
fn commit(self) -> Result<(), DB_FAILURES>;
fn cursor<T: Table>(&self) -> Result<Self::Cursor<T>, DB_FAILURES>;
fn cursor_dup<T: DupTable>(&self) -> Result<Self::DupCursor<T>, DB_FAILURES>;
fn num_entries<T: Table>(&self) -> Result<usize, DB_FAILURES>;
}
// Abstraction of a read-write transaction. WriteTransaction inherits Transaction methods.
pub trait WriteTransaction<'a>: Transaction<'a> {
type WriteCursor<T: Table>: WriteCursor<'a, T>;
type DupWriteCursor<T: DupTable>: DupWriteCursor<'a, T> + DupCursor<'a, T>;
fn put<T: Table>(&self, key: &T::Key, value: &T::Value) -> Result<(), DB_FAILURES>;
fn delete<T: Table>(
&self,
key: &T::Key,
value: &Option<T::Value>,
) -> Result<(), DB_FAILURES>;
fn clear<T: Table>(&self) -> Result<(), DB_FAILURES>;
fn write_cursor<T: Table>(&self) -> Result<Self::WriteCursor<T>, DB_FAILURES>;
fn write_cursor_dup<T: DupTable>(&self) -> Result<Self::DupWriteCursor<T>, DB_FAILURES>;
}
}

database/src/mdbx.rs
//! ### MDBX implementation
//! This module contains the implementation of all the database traits for the MDBX storage engine.
//! This include basic transactions methods, cursors and errors conversion.
use crate::{
database::Database,
error::{DB_FAILURES, DB_SERIAL},
table::{self, DupTable, Table},
transaction::{Transaction, WriteTransaction},
BINCODE_CONFIG,
};
use libmdbx::{
Cursor, DatabaseFlags, DatabaseKind, Geometry, Mode, PageSize, SyncMode, TableFlags,
TransactionKind, WriteFlags, RO, RW,
};
use std::ops::Range;
// Constants used in the mdbx implementation
const MDBX_DEFAULT_SYNC_MODE: SyncMode = SyncMode::Durable;
const MDBX_MAX_MAP_SIZE: usize = 4 * 1024usize.pow(3); // 4GiB
const MDBX_GROWTH_STEP: isize = 100 * 1024isize.pow(2); // 100MB
const MDBX_PAGE_SIZE: Option<PageSize> = None;
const MDBX_GEOMETRY: Geometry<Range<usize>> = Geometry {
size: Some(0..MDBX_MAX_MAP_SIZE),
growth_step: Some(MDBX_GROWTH_STEP),
shrink_threshold: None,
page_size: MDBX_PAGE_SIZE,
};
/// [`mdbx_decode`] is a function that deserializes the supplied bytes using the `bincode::decode_from_slice(src, BINCODE_CONFIG)`
/// function. Returns `Err(DB_FAILURES::SerializeIssue(DB_SERIAL::BincodeDecode(err)))` if it fails to decode the value. It is used for clarity purposes.
fn mdbx_decode<T: bincode::Decode>(src: &[u8]) -> Result<(T, usize), DB_FAILURES> {
bincode::decode_from_slice(src, BINCODE_CONFIG)
.map_err(|e| DB_FAILURES::SerializeIssue(DB_SERIAL::BincodeDecode(e)))
}
/// [`mdbx_encode`] is a function that serializes a given value into a vector using the `bincode::encode_to_vec(src, BINCODE_CONFIG)`
/// function. Returns `Err(DB_FAILURES::SerializeIssue(DB_SERIAL::BincodeEncode(err)))` if it fails to encode the value. It is used for clarity purposes.
fn mdbx_encode<T: bincode::Encode>(src: &T) -> Result<Vec<u8>, DB_FAILURES> {
bincode::encode_to_vec(src, BINCODE_CONFIG)
.map_err(|e| DB_FAILURES::SerializeIssue(DB_SERIAL::BincodeEncode(e)))
}
/// [`mdbx_open_table`] is a simple function used for syntax clarity. It tries to open the table, and returns a `DB_FAILURES` if it fails.
fn mdbx_open_table<'db, K: TransactionKind, E: DatabaseKind, T: Table>(
tx: &'db libmdbx::Transaction<'db, K, E>,
) -> Result<libmdbx::Table, DB_FAILURES> {
tx.open_table(Some(T::TABLE_NAME))
.map_err(std::convert::Into::<DB_FAILURES>::into)
}
/// [`cursor_pair_decode`] is a function defining a conditional return used in (almost) every cursor function. If a key/value pair effectively exists at the cursor,
/// the two values are decoded using the `mdbx_decode` function. Returns `Err(DB_FAILURES::SerializeIssue(DB_SERIAL::BincodeDecode(err)))` if it fails to decode a value.
/// It is used for clarity purposes.
fn cursor_pair_decode<L: bincode::Decode, R: bincode::Decode>(
pair: Option<(Vec<u8>, Vec<u8>)>,
) -> Result<Option<(L, R)>, DB_FAILURES> {
if let Some(pair) = pair {
let decoded_key = mdbx_decode(pair.0.as_slice())?;
let decoded_value = mdbx_decode(pair.1.as_slice())?;
Ok(Some((decoded_key.0, decoded_value.0)))
} else {
Ok(None)
}
}
// Implementation of the database trait with mdbx types
impl<'a, E> Database<'a> for libmdbx::Database<E>
where
E: DatabaseKind,
{
type TX = libmdbx::Transaction<'a, RO, E>;
type TXMut = libmdbx::Transaction<'a, RW, E>;
type Error = libmdbx::Error;
// Open a Read-Only transaction
fn tx(&'a self) -> Result<Self::TX, Self::Error> {
self.begin_ro_txn()
}
// Open a Read-Write transaction
fn tx_mut(&'a self) -> Result<Self::TXMut, Self::Error> {
self.begin_rw_txn()
}
// Open the database with the given path
fn open(path: std::path::PathBuf) -> Result<Self, Self::Error> {
let db: libmdbx::Database<E> = libmdbx::Database::new()
.set_flags(DatabaseFlags::from(Mode::ReadWrite {
sync_mode: MDBX_DEFAULT_SYNC_MODE,
}))
.set_geometry(MDBX_GEOMETRY)
.set_max_readers(32)
.set_max_tables(15)
.open(path.as_path())?;
Ok(db)
}
// Open each table to verify that the database is complete.
fn check_all_tables_exist(&'a self) -> Result<(), Self::Error> {
let ro_tx = self.begin_ro_txn()?;
// ----- BLOCKS -----
ro_tx.open_table(Some(table::blockhash::TABLE_NAME))?;
ro_tx.open_table(Some(table::blockmetadata::TABLE_NAME))?;
ro_tx.open_table(Some(table::blocks::TABLE_NAME))?;
ro_tx.open_table(Some(table::altblock::TABLE_NAME))?;
// ------ TXNs ------
ro_tx.open_table(Some(table::txspruned::TABLE_NAME))?;
ro_tx.open_table(Some(table::txsprunablehash::TABLE_NAME))?;
ro_tx.open_table(Some(table::txsprunabletip::TABLE_NAME))?;
ro_tx.open_table(Some(table::txsprunable::TABLE_NAME))?;
ro_tx.open_table(Some(table::txsoutputs::TABLE_NAME))?;
ro_tx.open_table(Some(table::txsidentifier::TABLE_NAME))?;
// ---- OUTPUTS -----
ro_tx.open_table(Some(table::prerctoutputmetadata::TABLE_NAME))?;
ro_tx.open_table(Some(table::outputmetadata::TABLE_NAME))?;
// ---- SPT KEYS ----
ro_tx.open_table(Some(table::spentkeys::TABLE_NAME))?;
// --- PROPERTIES ---
ro_tx.open_table(Some(table::properties::TABLE_NAME))?;
Ok(())
}
// Construct the tables of the database
fn build(&'a self) -> Result<(), Self::Error> {
let rw_tx = self.begin_rw_txn()?;
// Constructing the tables
// ----- BLOCKS -----
rw_tx.create_table(
Some(table::blockhash::TABLE_NAME),
TableFlags::INTEGER_KEY | TableFlags::DUP_FIXED | TableFlags::DUP_SORT,
)?;
rw_tx.create_table(
Some(table::blockmetadata::TABLE_NAME),
TableFlags::INTEGER_KEY | TableFlags::DUP_FIXED | TableFlags::DUP_SORT,
)?;
rw_tx.create_table(Some(table::blocks::TABLE_NAME), TableFlags::INTEGER_KEY)?;
rw_tx.create_table(Some(table::altblock::TABLE_NAME), TableFlags::INTEGER_KEY)?;
// ------ TXNs ------
rw_tx.create_table(Some(table::txspruned::TABLE_NAME), TableFlags::INTEGER_KEY)?;
rw_tx.create_table(
Some(table::txsprunable::TABLE_NAME),
TableFlags::INTEGER_KEY,
)?;
rw_tx.create_table(
Some(table::txsprunablehash::TABLE_NAME),
TableFlags::INTEGER_KEY | TableFlags::DUP_FIXED | TableFlags::DUP_SORT,
)?;
rw_tx.create_table(
Some(table::txsprunabletip::TABLE_NAME),
TableFlags::INTEGER_KEY,
)?;
rw_tx.create_table(
Some(table::txsoutputs::TABLE_NAME),
TableFlags::INTEGER_KEY | TableFlags::DUP_FIXED | TableFlags::DUP_SORT,
)?;
rw_tx.create_table(
Some(table::txsidentifier::TABLE_NAME),
TableFlags::INTEGER_KEY | TableFlags::DUP_FIXED | TableFlags::DUP_SORT,
)?;
// ---- OUTPUTS -----
rw_tx.create_table(
Some(table::prerctoutputmetadata::TABLE_NAME),
TableFlags::INTEGER_KEY | TableFlags::DUP_FIXED | TableFlags::DUP_SORT,
)?;
rw_tx.create_table(
Some(table::outputmetadata::TABLE_NAME),
TableFlags::INTEGER_KEY | TableFlags::DUP_FIXED | TableFlags::DUP_SORT,
)?;
// ---- SPT KEYS ----
rw_tx.create_table(
Some(table::spentkeys::TABLE_NAME),
TableFlags::INTEGER_KEY | TableFlags::DUP_FIXED | TableFlags::DUP_SORT,
)?;
// --- PROPERTIES ---
rw_tx.create_table(Some(table::properties::TABLE_NAME), TableFlags::INTEGER_KEY)?;
rw_tx.commit()?;
Ok(())
}
}
// Implementation of the Cursor trait for mdbx's Cursors
impl<'a, T, R> crate::transaction::Cursor<'a, T> for Cursor<'a, R>
where
T: Table,
R: TransactionKind,
{
fn first(&mut self) -> Result<Option<(T::Key, T::Value)>, DB_FAILURES> {
let pair = self
.first::<Vec<u8>, Vec<u8>>()
.map_err(std::convert::Into::<DB_FAILURES>::into)?;
cursor_pair_decode(pair)
}
fn get_cursor(
&mut self,
) -> Result<Option<(<T as Table>::Key, <T as Table>::Value)>, DB_FAILURES> {
let pair = self
.get_current::<Vec<u8>, Vec<u8>>()
.map_err(std::convert::Into::<DB_FAILURES>::into)?;
cursor_pair_decode(pair)
}
fn last(&mut self) -> Result<Option<(<T as Table>::Key, <T as Table>::Value)>, DB_FAILURES> {
let pair = self
.last::<Vec<u8>, Vec<u8>>()
.map_err(std::convert::Into::<DB_FAILURES>::into)?;
cursor_pair_decode(pair)
}
fn next(&mut self) -> Result<Option<(<T as Table>::Key, <T as Table>::Value)>, DB_FAILURES> {
let pair = self
.next::<Vec<u8>, Vec<u8>>()
.map_err(std::convert::Into::<DB_FAILURES>::into)?;
cursor_pair_decode(pair)
}
fn prev(&mut self) -> Result<Option<(<T as Table>::Key, <T as Table>::Value)>, DB_FAILURES> {
let pair = self
.prev::<Vec<u8>, Vec<u8>>()
.map_err(std::convert::Into::<DB_FAILURES>::into)?;
cursor_pair_decode(pair)
}
fn set(&mut self, key: &T::Key) -> Result<Option<<T as Table>::Value>, DB_FAILURES> {
let encoded_key = mdbx_encode(key)?;
let value = self
.set::<Vec<u8>>(&encoded_key)
.map_err(std::convert::Into::<DB_FAILURES>::into)?;
if let Some(value) = value {
return Ok(Some(mdbx_decode(value.as_slice())?.0));
}
Ok(None)
}
}
// Implementation of the DupCursor trait for mdbx's Cursors
impl<'t, T, R> crate::transaction::DupCursor<'t, T> for Cursor<'t, R>
where
R: TransactionKind,
T: DupTable,
{
fn first_dup(&mut self) -> Result<Option<(T::SubKey, T::Value)>, DB_FAILURES> {
let value = self
.first_dup::<Vec<u8>>()
.map_err(std::convert::Into::<DB_FAILURES>::into)?;
if let Some(value) = value {
return Ok(Some(mdbx_decode(value.as_slice())?.0));
}
Ok(None)
}
fn get_dup(
&mut self,
key: &T::Key,
subkey: &T::SubKey,
) -> Result<Option<<T>::Value>, DB_FAILURES> {
let (encoded_key, encoded_subkey) = (mdbx_encode(key)?, mdbx_encode(subkey)?);
let value = self
.get_both::<Vec<u8>>(&encoded_key, &encoded_subkey)
.map_err(std::convert::Into::<DB_FAILURES>::into)?;
if let Some(value) = value {
return Ok(Some(mdbx_decode(value.as_slice())?.0));
}
Ok(None)
}
fn last_dup(&mut self) -> Result<Option<(T::SubKey, T::Value)>, DB_FAILURES> {
let value = self
.last_dup::<Vec<u8>>()
.map_err(std::convert::Into::<DB_FAILURES>::into)?;
if let Some(value) = value {
return Ok(Some(mdbx_decode(value.as_slice())?.0));
}
Ok(None)
}
fn next_dup(&mut self) -> Result<Option<(T::Key, (T::SubKey, T::Value))>, DB_FAILURES> {
let pair = self
.next_dup::<Vec<u8>, Vec<u8>>()
.map_err(std::convert::Into::<DB_FAILURES>::into)?;
if let Some(pair) = pair {
let (decoded_key, decoded_value) = (
mdbx_decode(pair.0.as_slice())?,
mdbx_decode(pair.1.as_slice())?,
);
return Ok(Some((decoded_key.0, decoded_value.0)));
}
Ok(None)
}
fn prev_dup(&mut self) -> Result<Option<(T::Key, (T::SubKey, T::Value))>, DB_FAILURES> {
let pair = self
.prev_dup::<Vec<u8>, Vec<u8>>()
.map_err(std::convert::Into::<DB_FAILURES>::into)?;
if let Some(pair) = pair {
let (decoded_key, decoded_value) = (
mdbx_decode(pair.0.as_slice())?,
mdbx_decode(pair.1.as_slice())?,
);
return Ok(Some((decoded_key.0, decoded_value.0)));
}
Ok(None)
}
}
// Implementation of the WriteCursor trait for mdbx's Cursors in RW permission
impl<'a, T> crate::transaction::WriteCursor<'a, T> for Cursor<'a, RW>
where
T: Table,
{
fn put_cursor(&mut self, key: &T::Key, value: &T::Value) -> Result<(), DB_FAILURES> {
let (encoded_key, encoded_value) = (mdbx_encode(key)?, mdbx_encode(value)?);
self.put(&encoded_key, &encoded_value, WriteFlags::empty())
.map_err(Into::into)
}
fn del(&mut self) -> Result<(), DB_FAILURES> {
self.del(WriteFlags::empty()).map_err(Into::into)
}
}
// Implementation of the DupWriteCursor trait for mdbx's Cursors in RW permission
impl<'a, T> crate::transaction::DupWriteCursor<'a, T> for Cursor<'a, RW>
where
T: DupTable,
{
fn put_cursor_dup(
&mut self,
key: &<T>::Key,
subkey: &<T as DupTable>::SubKey,
value: &<T>::Value,
) -> Result<(), DB_FAILURES> {
let (encoded_key, mut encoded_subkey, mut encoded_value) =
(mdbx_encode(key)?, mdbx_encode(subkey)?, mdbx_encode(value)?);
encoded_subkey.append(&mut encoded_value);
self.put(
encoded_key.as_slice(),
encoded_subkey.as_slice(),
WriteFlags::empty(),
)
.map_err(Into::into)
}
fn del_nodup(&mut self) -> Result<(), DB_FAILURES> {
self.del(WriteFlags::NO_DUP_DATA).map_err(Into::into)
}
}
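`put_cursor_dup` above packs the encoded subkey and value into a single data blob: under DUPSORT, mdbx sorts duplicate entries by their data bytes, so the subkey acts as a sort prefix of the stored value. A standalone sketch of the packing (the big-endian subkey here is purely illustrative; the real code encodes subkeys with bincode):

```rust
// Append the encoded value to the encoded subkey, exactly as
// `put_cursor_dup` does with `encoded_subkey.append(&mut encoded_value)`.
fn pack_dup(mut subkey: Vec<u8>, mut value: Vec<u8>) -> Vec<u8> {
    subkey.append(&mut value);
    subkey
}

fn main() {
    // Hypothetical big-endian subkey so byte order matches numeric order.
    let subkey = 7u64.to_be_bytes().to_vec();
    let packed = pack_dup(subkey, b"payload".to_vec());
    // The subkey is the prefix, the value the suffix, of the stored blob.
    assert_eq!(&packed[..8], &7u64.to_be_bytes());
    assert_eq!(&packed[8..], b"payload");
}
```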
// Implementation of the Transaction trait for mdbx's Transactions
impl<'a, E, R: TransactionKind> Transaction<'a> for libmdbx::Transaction<'_, R, E>
where
E: DatabaseKind,
{
type Cursor<T: Table> = Cursor<'a, R>;
type DupCursor<T: DupTable> = Cursor<'a, R>;
fn get<T: Table>(&self, key: &T::Key) -> Result<Option<T::Value>, DB_FAILURES> {
let table = mdbx_open_table::<_, _, T>(self)?;
let encoded_key = mdbx_encode(key)?;
let value = self
.get::<Vec<u8>>(&table, &encoded_key)
.map_err(std::convert::Into::<DB_FAILURES>::into)?;
if let Some(value) = value {
return Ok(Some(mdbx_decode(value.as_slice())?.0));
}
Ok(None)
}
fn cursor<T: Table>(&self) -> Result<Self::Cursor<T>, DB_FAILURES> {
let table = mdbx_open_table::<_, _, T>(self)?;
self.cursor(&table).map_err(Into::into)
}
fn commit(self) -> Result<(), DB_FAILURES> {
let b = self
.commit()
.map_err(std::convert::Into::<DB_FAILURES>::into)?;
if b {
Ok(())
} else {
Err(DB_FAILURES::FailedToCommit)
}
}
fn cursor_dup<T: DupTable>(&self) -> Result<Self::DupCursor<T>, DB_FAILURES> {
let table = mdbx_open_table::<_, _, T>(self)?;
self.cursor(&table).map_err(Into::into)
}
fn num_entries<T: Table>(&self) -> Result<usize, DB_FAILURES> {
let table = mdbx_open_table::<_, _, T>(self)?;
let stat = self.table_stat(&table)?;
Ok(stat.entries())
}
}
// Implementation of the WriteTransaction trait for mdbx's Transactions with RW permissions
impl<'a, E> WriteTransaction<'a> for libmdbx::Transaction<'a, RW, E>
where
E: DatabaseKind,
{
type WriteCursor<T: Table> = Cursor<'a, RW>;
type DupWriteCursor<T: DupTable> = Cursor<'a, RW>;
fn put<T: Table>(&self, key: &T::Key, value: &T::Value) -> Result<(), DB_FAILURES> {
let table = mdbx_open_table::<_, _, T>(self)?;
let (encoded_key, encoded_value) = (mdbx_encode(key)?, mdbx_encode(value)?);
self.put(&table, encoded_key, encoded_value, WriteFlags::empty())
.map_err(Into::into)
}
fn delete<T: Table>(&self, key: &T::Key, value: &Option<T::Value>) -> Result<(), DB_FAILURES> {
let table = mdbx_open_table::<_, _, T>(self)?;
let encoded_key = mdbx_encode(key)?;
if let Some(value) = value {
let encoded_value = mdbx_encode(value)?;
return self
.del(&table, encoded_key, Some(encoded_value.as_slice()))
.map(|_| ())
.map_err(Into::into);
}
self.del(&table, encoded_key, None)
.map(|_| ())
.map_err(Into::into)
}
fn clear<T: Table>(&self) -> Result<(), DB_FAILURES> {
let table = mdbx_open_table::<_, _, T>(self)?;
self.clear_table(&table).map_err(Into::into)
}
fn write_cursor<T: Table>(&self) -> Result<Self::WriteCursor<T>, DB_FAILURES> {
let table = mdbx_open_table::<_, _, T>(self)?;
self.cursor(&table).map_err(Into::into)
}
fn write_cursor_dup<T: DupTable>(&self) -> Result<Self::DupWriteCursor<T>, DB_FAILURES> {
let table = mdbx_open_table::<_, _, T>(self)?;
self.cursor(&table).map_err(Into::into)
}
}

// ---- database/src/table.rs ----
//! ### Table module
//! This module contains the definitions of the [`Table`] and [`DupTable`] traits, and the actual tables used in the database.
//! [`DupTable`] is just a marker trait stating that a table supports DUPSORT|DUPFIXED operations (as of now we don't know the equivalent for HSE).
//! All tables are defined with docs explaining their purpose and the types of their key and data.
//! For more details please look at Cuprate's book : <link to cuprate book>
use monero::{Hash, Block, blockdata::transaction::KeyImage};
use bincode::{enc::Encode,de::Decode};
use crate::{types::{BlockMetadata, /*OutAmountIdx,*/ /*KeyImage,*/ TxOutputIdx, /*OutTx,*/ AltBlock, TxIndex, TransactionPruned, /*RctOutkey,*/ OutputMetadata}, encoding::Compat};
/// A trait implementing a table interaction for the database. It is implemented on an empty struct to specify the table's name and associated types.
/// These associated types are used to simplify the deserialization process.
pub trait Table: Send + Sync + 'static + Clone {
// name of the table
const TABLE_NAME: &'static str;
// Definition of the table's key & value types
type Key: Encode + Decode;
type Value: Encode + Decode;
}
/// A trait implementing a table with duplicated data support.
pub trait DupTable: Table {
// Subkey of the table (prefix of the data)
type SubKey: Encode + Decode;
}
/// This declarative macro declares a new empty struct and implements [`Table`] for it with the specified name and types.
macro_rules! impl_table {
( $(#[$docs:meta])* $table:ident , $key:ty , $value:ty ) => {
#[derive(Clone)]
$(#[$docs])*
pub(crate) struct $table;
impl Table for $table {
const TABLE_NAME: &'static str = stringify!($table);
type Key = $key;
type Value = $value;
}
};
}
/// This declarative macro extends the original impl_table! macro by also implementing the [`DupTable`] trait.
macro_rules! impl_duptable {
($(#[$docs:meta])* $table:ident, $key:ty, $subkey:ty, $value:ty) => {
impl_table!($(#[$docs])* $table, $key, $value);
impl DupTable for $table {
type SubKey = $subkey;
}
};
}
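As a standalone illustration, the two macros above expand roughly as follows. The traits here are simplified (the real ones also require bincode `Encode`/`Decode` bounds and `Send + Sync`) so the sketch compiles on its own:

```rust
// Simplified Table/DupTable traits for illustration only.
pub trait Table: 'static {
    const TABLE_NAME: &'static str;
    type Key;
    type Value;
}
pub trait DupTable: Table {
    type SubKey;
}

// Same shape as the `impl_table!` macro above.
macro_rules! impl_table {
    ( $(#[$docs:meta])* $table:ident , $key:ty , $value:ty ) => {
        $(#[$docs])*
        #[derive(Clone)]
        #[allow(non_camel_case_types)]
        pub struct $table;
        impl Table for $table {
            const TABLE_NAME: &'static str = stringify!($table);
            type Key = $key;
            type Value = $value;
        }
    };
}

// `impl_duptable!` delegates to `impl_table!` and adds the subkey.
macro_rules! impl_duptable {
    ($(#[$docs:meta])* $table:ident, $key:ty, $subkey:ty, $value:ty) => {
        impl_table!($(#[$docs])* $table, $key, $value);
        impl DupTable for $table {
            type SubKey = $subkey;
        }
    };
}

impl_duptable!(
    /// Example table: a block height indexed by a 32-byte hash.
    blockhash, (), [u8; 32], u64);

fn main() {
    // The table name is derived from the struct identifier.
    assert_eq!(blockhash::TABLE_NAME, "blockhash");
}
```

Deriving `TABLE_NAME` from the struct identifier via `stringify!` keeps the Rust-side type and the on-disk table name in sync by construction.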
// ------------------------------------------| Tables definition |------------------------------------------
// ----- BLOCKS -----
impl_duptable!(
/// `blockhash` is a table defining a relation between the hash of a block and its height. Its primary use is to quickly find a block's height by its hash.
blockhash, (), Compat<Hash>, u64);
impl_duptable!(
/// `blockmetadata` stores a block's metadata alongside its height. The metadata contains the total_coins_generated, weight, long_term_block_weight & cumulative RingCT count.
blockmetadata, (), u64, BlockMetadata);
impl_table!(
/// `blocks` stores blocks' bodies, keyed by height. A block's body contains the coinbase transaction and the hashes of its mined transactions.
blocks, u64, Compat<Block>);
/*
impl_table!(
/// `blockhfversion` keeps track of blocks' hard fork versions. If an outdated node continues to run after a hard fork, it needs to know, after updating, which blocks need to be updated.
blockhfversion, u64, u8);
*/
impl_table!(
/// `altblock` is a table that permits the storage of blocks from an alternative chain, which may cause a re-org. These blocks can be fetched by their corresponding hash.
altblock, Compat<Hash>, AltBlock);
// ------- TXNs -------
impl_table!(
/// `txspruned` is a table storing TransactionPruned (pruned transactions). These can be fetched by the corresponding transaction ID.
txspruned, u64, TransactionPruned);
impl_table!(
/// `txsprunable` is a table storing the prunable part of transactions (signatures and RctSig), stored as raw bytes. These can be fetched by the corresponding transaction ID.
txsprunable, u64, Vec<u8>);
impl_duptable!(
/// `txsprunablehash` is a table storing the hashes of transactions' prunable parts. These hashes can be fetched by the corresponding transaction ID.
txsprunablehash, u64, (), Compat<Hash>);
impl_table!(
/// `txsprunabletip` is a table used for optimization purposes. It records the height of the block a transaction belongs to, as long as that block is among the tip blocks. Entries can be fetched by the corresponding transaction ID.
txsprunabletip, u64, u64);
impl_duptable!(
/// `txsoutputs` is a table storing the output indices used in a transaction. These can be fetched by the corresponding transaction ID.
txsoutputs, u64, (), TxOutputIdx);
impl_duptable!(
/// `txsidentifier` is a table defining a relation between the hash of a transaction and its transaction indexes. It is primarily used to quickly find a transaction's ID by its hash.
txsidentifier, Compat<Hash>, (), TxIndex);
// ---- OUTPUTS ----
impl_duptable!(
/// `prerctoutputmetadata` is a duplicated table storing Pre-RingCT outputs' metadata. The key is the output's amount, and the subkey is its amount index.
prerctoutputmetadata, u64, u64, OutputMetadata);
impl_duptable!(
/// `outputmetadata` is a table storing RingCT outputs' metadata. The key is the output's amount index, since the amount is always 0 for RingCT outputs.
outputmetadata, (), u64, OutputMetadata);
// ---- SPT KEYS ----
impl_duptable!(
/// `spentkeys` is a table storing every KeyImage that has been used in a transaction input. Since these KeyImages can't be reused, they need to be marked as spent.
spentkeys, (), Compat<KeyImage>, ());
// ---- PROPERTIES ----
impl_table!(
/// `properties` is a table storing the database's properties, such as its version.
properties, u32, u32);

// ---- database/src/types.rs ----
//! ### Types module
//! This module contains definitions and implementations of some of the structures stored in the database.
//! Some of these types are just wrappers for convenience or re-definitions of `monero-rs` database types (see Boog900/monero-rs, "db" branch).
//! Since the database does not use dummy keys, these redefined structs are the same as monerod's, without the prefix data used as a key.
//! All these types implement [`bincode::Encode`] and [`bincode::Decode`]. They can store `monero-rs` types in their fields; in that case, these fields
//! use the [`Compat<T>`] wrapper.
use crate::encoding::{Compat, ReaderCompat};
use bincode::{enc::write::Writer, Decode, Encode};
use monero::{
consensus::{encode, Decodable},
util::ringct::{Key, RctSig, RctSigBase, RctSigPrunable, RctType, Signature},
Block, Hash, PublicKey, Transaction, TransactionPrefix, TxIn,
};
// ---- BLOCKS ----
#[derive(Clone, Debug, Encode, Decode)]
/// [`BlockMetadata`] is a struct containing a block's metadata: its `timestamp`, the `total_coins_generated` at this height, its `weight`, its `cumulative_difficulty`,
/// the `block_hash`, the cumulative number of RingCT outputs (`cum_rct`) and its `long_term_block_weight`. The monerod struct equivalent is `mdb_block_info_4`.
/// This struct is used in the [`crate::table::blockmetadata`] table.
pub struct BlockMetadata {
/// Block's timestamp (the time at which it started to be mined)
pub timestamp: u64,
/// Total monero supply, this block included
pub total_coins_generated: u64,
/// Block's weight (sum of all transactions weights)
pub weight: u64,
/// Block's cumulative_difficulty. In monerod this field would have been split into two `u64`, since cpp don't support *natively* `uint128_t`/`u128`
pub cumulative_difficulty: u128,
/// Block's hash
pub block_hash: Compat<Hash>,
/// Cumulative number of RingCT outputs up to this block
pub cum_rct: u64,
/// Block's long term weight
pub long_term_block_weight: u64,
}
#[derive(Clone, Debug, Encode, Decode)]
/// [`AltBlock`] is a struct containing an alternative `block` (defining an alternative chain) and its metadata (`height`, `cumulative_weight`,
/// `cumulative_difficulty`, `already_generated_coins`).
/// This struct is used in the [`crate::table::altblock`] table.
pub struct AltBlock {
/// Alternative block's height.
pub height: u64,
/// Cumulative weight median at this block
pub cumulative_weight: u64,
/// Cumulative difficulty
pub cumulative_difficulty: u128,
/// Total generated coins excluding this block's coinbase reward + fees
pub already_generated_coins: u64,
/// Actual block data, with Prefix and Transactions.
/// It is worth noting that monerod implementation do not contain the block in its struct, but still append it at the end of metadata.
pub block: Compat<Block>,
}
// ---- TRANSACTIONS ----
#[derive(Clone, Debug)]
/// [`TransactionPruned`] is, as its name suggests, the pruned part of a transaction: the transaction prefix and its RingCT signatures.
/// This struct is used in the [`crate::table::txspruned`] table.
pub struct TransactionPruned {
/// The transaction prefix.
pub prefix: TransactionPrefix,
/// The RingCT signatures, will only contain the 'sig' field.
pub rct_signatures: RctSig,
}
impl bincode::Decode for TransactionPruned {
fn decode<D: bincode::de::Decoder>(
decoder: &mut D,
) -> Result<Self, bincode::error::DecodeError> {
let mut r = ReaderCompat(decoder.reader());
// We first decode the TransactionPrefix and get the number of inputs/outputs
let prefix: TransactionPrefix = Decodable::consensus_decode(&mut r)
.map_err(|_| bincode::error::DecodeError::Other("Monero-rs decoding failed"))?;
let (inputs, outputs) = (prefix.inputs.len(), prefix.outputs.len());
// Handle the prefix according to its version
match *prefix.version {
// First transaction format, Pre-RingCT, so the signatures are None
1 => Ok(TransactionPruned {
prefix,
rct_signatures: RctSig { sig: None, p: None },
}),
_ => {
let mut rct_signatures = RctSig { sig: None, p: None };
// No inputs so no RingCT
if inputs == 0 {
return Ok(TransactionPruned {
prefix,
rct_signatures,
});
}
// Otherwise get the RingCT signatures for the tx inputs
if let Some(sig) = RctSigBase::consensus_decode(&mut r, inputs, outputs)
.map_err(|_| bincode::error::DecodeError::Other("Monero-rs decoding failed"))?
{
rct_signatures = RctSig {
sig: Some(sig),
p: None,
};
}
// And we return it
Ok(TransactionPruned {
prefix,
rct_signatures,
})
}
}
}
}
impl bincode::Encode for TransactionPruned {
fn encode<E: bincode::enc::Encoder>(
&self,
encoder: &mut E,
) -> Result<(), bincode::error::EncodeError> {
let writer = encoder.writer();
// Encoding the Transaction prefix first
let buf = monero::consensus::serialize(&self.prefix);
writer.write(&buf)?;
match *self.prefix.version {
1 => {} // First transaction format, Pre-RingCT, so there are no RingCT signatures to add
_ => {
if let Some(sig) = &self.rct_signatures.sig {
// If there are signatures, we append them at the end
let buf = monero::consensus::serialize(sig);
writer.write(&buf)?;
}
}
}
Ok(())
}
}
impl TransactionPruned {
/// Turns a pruned transaction into a normal transaction by decoding the missing prunable data
pub fn into_transaction(self, prunable: &[u8]) -> Result<Transaction, encode::Error> {
let mut r = std::io::Cursor::new(prunable);
match *self.prefix.version {
// Pre-RingCT transactions
1 => {
let signatures: Result<Vec<Vec<Signature>>, encode::Error> = self
.prefix
.inputs
.iter()
.filter_map(|input| match input {
TxIn::ToKey { key_offsets, .. } => {
let sigs: Result<Vec<Signature>, encode::Error> = key_offsets
.iter()
.map(|_| Decodable::consensus_decode(&mut r))
.collect();
Some(sigs)
}
_ => None,
})
.collect();
Ok(Transaction {
prefix: self.prefix,
signatures: signatures?,
rct_signatures: RctSig { sig: None, p: None },
})
}
// Post-RingCT Transactions
_ => {
let signatures = Vec::new();
let mut rct_signatures = RctSig { sig: None, p: None };
if self.prefix.inputs.is_empty() {
return Ok(Transaction {
prefix: self.prefix,
signatures,
rct_signatures: RctSig { sig: None, p: None },
});
}
if let Some(sig) = self.rct_signatures.sig {
let p = {
if sig.rct_type != RctType::Null {
let mixin_size = if !self.prefix.inputs.is_empty() {
match &self.prefix.inputs[0] {
TxIn::ToKey { key_offsets, .. } => key_offsets.len() - 1,
_ => 0,
}
} else {
0
};
RctSigPrunable::consensus_decode(
&mut r,
sig.rct_type,
self.prefix.inputs.len(),
self.prefix.outputs.len(),
mixin_size,
)?
} else {
None
}
};
rct_signatures = RctSig { sig: Some(sig), p };
}
Ok(Transaction {
prefix: self.prefix,
signatures,
rct_signatures,
})
}
}
}
}
pub fn get_transaction_prunable_blob<W: std::io::Write + ?Sized>(
tx: &monero::Transaction,
w: &mut W,
) -> Result<usize, std::io::Error> {
let mut len = 0;
match tx.prefix.version.0 {
1 => {
for sig in tx.signatures.iter() {
for c in sig {
len += monero::consensus::encode::Encodable::consensus_encode(c, w)?;
}
}
}
_ => {
if let Some(sig) = &tx.rct_signatures.sig {
if let Some(p) = &tx.rct_signatures.p {
len += p.consensus_encode(w, sig.rct_type)?;
}
}
}
}
Ok(len)
}
pub fn calculate_prunable_hash(tx: &monero::Transaction, tx_prunable_blob: &[u8]) -> Option<Hash> {
// V1 transactions don't have a prunable hash
if tx.prefix.version.0 == 1 {
return None;
}
// Checking if it's a miner tx
if let TxIn::Gen { height: _ } = &tx.prefix.inputs[0] {
if tx.prefix.inputs.len() == 1 {
// Returning miner tx's empty hash
return Some(Hash::from_slice(&[
0x70, 0xa4, 0x85, 0x5d, 0x04, 0xd8, 0xfa, 0x7b, 0x3b, 0x27, 0x82, 0xca, 0x53, 0xb6,
0x00, 0xe5, 0xc0, 0x03, 0xc7, 0xdc, 0xb2, 0x7d, 0x7e, 0x92, 0x3c, 0x23, 0xf7, 0x86,
0x01, 0x46, 0xd2, 0xc5,
]));
}
};
// Calculating the hash
Some(Hash::new(tx_prunable_blob))
}
#[derive(Clone, Debug, Encode, Decode)]
/// [`TxIndex`] is a struct used in the [`crate::table::txsidentifier`] table. It stores a transaction's ID (`tx_id`), its `unlock_time` and the `height` of the block
/// this transaction belongs to.
pub struct TxIndex {
/// Transaction ID
pub tx_id: u64,
/// The unlock time of this transaction (the height at which it is unlocked, it is not a timestamp)
pub unlock_time: u64,
/// The height of the block this transaction belongs to
pub height: u64, // TODO: redundant, already stored in txsprunabletip
}
#[derive(Clone, Debug, Encode, Decode)]
/// [`TxOutputIdx`] is a single-tuple struct containing the indexes (amounts and amount indices) of a transaction's outputs. It is defined for clarity.
/// This struct is used in [`crate::table::txsoutputs`] table.
pub struct TxOutputIdx(pub Vec<u64>);
// ---- OUTPUTS ----
#[derive(Clone, Debug, Encode, Decode)]
/// [`RctOutkey`] is a struct containing RingCT metadata and an output ID. It is equivalent to the `output_data_t` struct in monerod
/// This struct is used in [`crate::table::outputamounts`]
pub struct RctOutkey {
// /// amount_index
//pub amount_index: u64,
/// The output's ID
pub output_id: u64,
/// The output's public key (for spend verification)
pub pubkey: Compat<PublicKey>,
/// The output's unlock time (the height at which it is unlocked, it is not a timestamp)
pub unlock_time: u64,
/// The height of the block which used this output
pub height: u64,
/// The output's amount commitment (for spend verification)
/// For compatibility with Pre-RingCT outputs, this field is an Option. In fact, monerod distinguishes between `pre_rct_output_data_t` and `output_data_t` like so:
/// ```cpp
/// // This MUST be identical to output_data_t, without the extra rct data at the end
/// struct pre_rct_output_data_t
/// ```
pub commitment: Option<Compat<Key>>,
}
#[derive(Clone, Debug, Encode, Decode)]
/// [`OutputMetadata`] is a struct containing output metadata. It is used in the [`crate::table::outputmetadata`] table. It merges the
/// `out_tx_index` tuple with the `output_data_t` structure from monerod, without the output ID.
pub struct OutputMetadata {
pub tx_hash: Compat<Hash>,
pub local_index: u64,
pub pubkey: Option<Compat<PublicKey>>,
pub unlock_time: u64,
pub height: u64,
pub commitment: Option<Compat<Key>>,
}
//#[derive(Clone, Debug, Encode, Decode)]
//// [`OutAmountIdx`] is a tuple struct containing the two keys used in the [`crate::table::outputamounts`] table.
//// In monerod, the database key is the amount, while the *cursor key* (the amount index) is the prefix of the actual data being returned.
//// As we prefer not to use cursors with partial data, we concatenate these two into a unique key
//pub struct OutAmountIdx(u64,u64);
// MAYBE NOT FINALLY
//#[derive(Clone, Debug, Encode, Decode)]
// /// [`OutTx`] is a struct containing the hash of the transaction this output belongs to, and the local index of this output.
// /// This struct is used in [`crate::table::outputinherit`].
/*pub struct OutTx {
/// Output's transaction hash
pub tx_hash: Compat<Hash>,
/// Local index of the output
pub local_index: u64,
}*/
#[cfg(test)]
mod tests {
use monero::Hash;
use super::get_transaction_prunable_blob;
#[test]
fn calculate_tx_prunable_hash() {
let prunable_blob: Vec<u8> = vec![
1, 113, 10, 7, 87, 70, 119, 97, 244, 126, 155, 133, 254, 167, 60, 204, 134, 45, 71, 17,
87, 21, 252, 8, 218, 233, 219, 192, 84, 181, 196, 74, 213, 2, 246, 222, 66, 45, 152,
159, 156, 19, 224, 251, 110, 154, 188, 91, 129, 53, 251, 82, 134, 46, 93, 119, 136, 35,
13, 190, 235, 231, 44, 183, 134, 221, 12, 131, 222, 209, 246, 52, 14, 33, 94, 173, 251,
233, 18, 154, 91, 72, 229, 180, 43, 35, 152, 130, 38, 82, 56, 179, 36, 168, 54, 41, 62,
49, 208, 35, 245, 29, 27, 81, 72, 140, 104, 4, 59, 22, 120, 252, 67, 197, 130, 245, 93,
100, 129, 134, 19, 137, 228, 237, 166, 89, 5, 42, 1, 110, 139, 39, 81, 89, 159, 40,
239, 211, 251, 108, 82, 68, 125, 182, 75, 152, 129, 74, 73, 208, 215, 15, 63, 3, 106,
168, 35, 56, 126, 66, 2, 189, 53, 201, 77, 187, 102, 127, 154, 60, 209, 33, 217, 109,
81, 217, 183, 252, 114, 90, 245, 21, 229, 174, 254, 177, 147, 130, 74, 49, 118, 203,
14, 7, 118, 221, 81, 181, 78, 97, 224, 76, 160, 134, 73, 206, 204, 199, 201, 30, 201,
77, 4, 78, 237, 167, 76, 92, 104, 247, 247, 203, 141, 243, 72, 52, 83, 61, 35, 147,
231, 124, 21, 115, 81, 83, 67, 222, 61, 225, 171, 66, 243, 185, 195, 51, 72, 243, 80,
104, 4, 166, 54, 199, 235, 193, 175, 4, 242, 42, 146, 170, 90, 212, 101, 208, 113, 58,
65, 121, 55, 179, 206, 92, 50, 94, 171, 33, 67, 108, 220, 19, 193, 155, 30, 58, 46, 9,
227, 48, 246, 187, 82, 230, 61, 64, 95, 197, 183, 150, 62, 203, 252, 36, 157, 135, 160,
120, 189, 52, 94, 186, 93, 5, 36, 120, 160, 62, 254, 178, 101, 11, 228, 63, 128, 249,
182, 56, 100, 9, 5, 2, 81, 243, 229, 245, 43, 234, 35, 216, 212, 46, 165, 251, 183,
133, 10, 76, 172, 95, 106, 231, 13, 216, 222, 15, 92, 122, 103, 68, 238, 190, 108, 124,
138, 62, 255, 243, 22, 209, 2, 138, 45, 178, 101, 240, 18, 186, 71, 239, 137, 191, 134,
128, 221, 181, 173, 242, 111, 117, 45, 255, 138, 101, 79, 242, 42, 4, 144, 245, 193,
79, 14, 44, 201, 223, 0, 193, 123, 75, 155, 140, 248, 0, 226, 246, 230, 126, 7, 32,
107, 173, 193, 206, 184, 11, 33, 148, 104, 32, 79, 149, 71, 68, 150, 6, 47, 90, 231,
151, 14, 121, 196, 169, 249, 117, 154, 167, 139, 103, 62, 97, 250, 131, 160, 92, 239,
18, 236, 110, 184, 102, 30, 194, 175, 243, 145, 169, 183, 163, 141, 244, 186, 172, 251,
3, 78, 165, 33, 12, 2, 136, 180, 178, 83, 117, 0, 184, 170, 255, 69, 131, 123, 8, 212,
158, 162, 119, 137, 146, 63, 95, 133, 186, 91, 255, 152, 187, 107, 113, 147, 51, 219,
207, 5, 160, 169, 97, 9, 1, 202, 152, 186, 128, 160, 110, 120, 7, 176, 103, 87, 30,
137, 240, 67, 55, 79, 147, 223, 45, 177, 210, 101, 225, 22, 25, 129, 111, 101, 21, 213,
20, 254, 36, 57, 67, 70, 93, 192, 11, 180, 75, 99, 185, 77, 75, 74, 63, 182, 183, 208,
16, 69, 237, 96, 76, 96, 212, 242, 6, 169, 14, 250, 168, 129, 18, 141, 240, 101, 196,
96, 120, 88, 90, 51, 77, 12, 133, 212, 192, 107, 131, 238, 34, 237, 93, 157, 108, 13,
255, 187, 163, 106, 148, 108, 105, 244, 243, 174, 189, 180, 48, 102, 57, 170, 118, 211,
110, 126, 222, 165, 93, 36, 157, 90, 14, 135, 184, 197, 185, 7, 99, 199, 224, 225, 243,
212, 116, 149, 137, 186, 16, 196, 73, 23, 11, 248, 248, 67, 167, 149, 154, 64, 76, 218,
119, 135, 239, 34, 48, 66, 57, 109, 246, 3, 141, 169, 42, 157, 222, 21, 40, 183, 168,
97, 195, 106, 244, 229, 61, 122, 136, 59, 255, 120, 86, 30, 63, 226, 18, 65, 218, 188,
195, 217, 85, 12, 211, 221, 188, 27, 8, 98, 103, 211, 213, 217, 65, 82, 229, 145, 80,
147, 220, 57, 143, 20, 189, 253, 106, 13, 21, 170, 60, 24, 48, 162, 234, 0, 240, 226,
4, 28, 76, 93, 56, 3, 187, 223, 58, 31, 184, 58, 234, 198, 140, 223, 217, 1, 147, 94,
218, 199, 154, 121, 137, 44, 229, 0, 1, 10, 133, 250, 140, 64, 150, 89, 64, 112, 178,
221, 87, 19, 24, 104, 252, 28, 65, 207, 28, 195, 217, 73, 12, 16, 83, 55, 199, 84, 117,
175, 123, 13, 234, 10, 54, 63, 245, 161, 74, 235, 92, 189, 247, 47, 62, 176, 41, 159,
40, 250, 116, 63, 33, 193, 78, 72, 29, 215, 9, 191, 233, 243, 87, 14, 195, 7, 89, 101,
0, 28, 0, 234, 205, 59, 142, 119, 119, 52, 143, 80, 151, 211, 184, 235, 98, 222, 206,
170, 166, 4, 155, 3, 235, 26, 62, 8, 171, 19, 14, 53, 245, 77, 114, 175, 246, 170, 139,
227, 212, 141, 72, 223, 134, 63, 91, 26, 12, 78, 253, 198, 162, 152, 202, 207, 170,
254, 8, 4, 4, 175, 207, 84, 10, 108, 179, 157, 132, 110, 76, 201, 247, 227, 158, 106,
59, 41, 206, 229, 128, 2, 60, 203, 65, 71, 160, 232, 186, 227, 51, 12, 142, 85, 93, 89,
234, 236, 157, 230, 247, 167, 99, 7, 37, 146, 13, 53, 39, 255, 209, 177, 179, 17, 131,
59, 16, 75, 180, 21, 119, 88, 4, 12, 49, 140, 3, 110, 235, 231, 92, 13, 41, 137, 21,
37, 46, 138, 44, 250, 44, 161, 179, 114, 94, 63, 207, 192, 81, 234, 35, 125, 54, 2,
214, 10, 57, 116, 154, 150, 147, 223, 232, 36, 108, 152, 145, 157, 132, 190, 103, 233,
155, 141, 243, 249, 120, 72, 168, 14, 196, 35, 54, 107, 167, 218, 209, 1, 209, 197,
187, 242, 76, 86, 229, 114, 131, 196, 69, 171, 118, 28, 51, 192, 146, 14, 140, 84, 66,
155, 237, 194, 167, 121, 160, 166, 198, 166, 57, 13, 66, 162, 234, 148, 102, 133, 111,
18, 166, 77, 156, 75, 84, 220, 80, 35, 81, 141, 23, 197, 162, 23, 167, 187, 187, 187,
137, 184, 96, 140, 162, 6, 49, 63, 39, 84, 107, 85, 202, 168, 51, 194, 214, 132, 253,
253, 189, 231, 1, 226, 118, 104, 84, 147, 244, 58, 233, 250, 66, 26, 109, 223, 34, 2,
2, 112, 141, 147, 230, 134, 73, 45, 105, 180, 223, 52, 95, 40, 235, 209, 50, 67, 193,
22, 176, 176, 128, 140, 238, 252, 129, 220, 175, 79, 133, 12, 123, 209, 64, 5, 160, 39,
47, 66, 122, 245, 65, 102, 133, 58, 74, 138, 153, 217, 48, 59, 84, 135, 117, 92, 131,
44, 109, 40, 105, 69, 29, 14, 142, 71, 87, 112, 68, 134, 0, 14, 158, 14, 68, 15, 180,
150, 108, 49, 196, 94, 82, 27, 208, 163, 103, 81, 85, 124, 61, 242, 151, 29, 74, 87,
134, 166, 145, 186, 110, 207, 162, 99, 92, 133, 121, 137, 124, 90, 134, 5, 249, 231,
181, 222, 38, 170, 141, 113, 204, 172, 169, 173, 63, 81, 170, 76,
];
let prunable_hash = Hash::from_slice(&[
0x5c, 0x5e, 0x69, 0xd8, 0xfc, 0x0d, 0x22, 0x6a, 0x60, 0x91, 0x47, 0xda, 0x98, 0x36,
0x06, 0x00, 0xf4, 0xea, 0x49, 0xcc, 0x49, 0x45, 0x2c, 0x5e, 0xf8, 0xba, 0x20, 0xf5,
0x93, 0xd4, 0x80, 0x7d,
]);
assert_eq!(prunable_hash, Hash::new(prunable_blob));
}
#[test]
fn get_prunable_tx_blob() {
let mut pruned_p_blob: Vec<u8> = vec![
2, 0, 1, 2, 0, 16, 180, 149, 135, 30, 237, 231, 156, 1, 132, 145, 47, 182, 251, 153, 1,
225, 234, 94, 219, 134, 23, 222, 210, 30, 208, 213, 12, 136, 158, 5, 159, 148, 15, 206,
144, 2, 132, 63, 135, 22, 151, 8, 134, 8, 178, 26, 194, 111, 101, 192, 45, 104, 18,
115, 178, 194, 100, 255, 227, 10, 253, 165, 53, 62, 81, 67, 202, 169, 56, 99, 42, 146,
175, 137, 85, 195, 27, 151, 2, 0, 3, 207, 28, 183, 85, 7, 58, 81, 205, 53, 9, 191, 141,
209, 70, 58, 30, 38, 225, 212, 68, 14, 4, 216, 204, 101, 163, 66, 156, 101, 143, 255,
196, 134, 0, 3, 254, 66, 159, 187, 180, 41, 78, 252, 85, 255, 154, 55, 239, 222, 199,
37, 159, 210, 71, 186, 188, 46, 134, 181, 236, 221, 173, 43, 93, 50, 138, 249, 221, 44,
1, 34, 67, 111, 182, 199, 28, 219, 56, 238, 143, 188, 101, 103, 205, 139, 160, 144,
226, 34, 92, 235, 221, 75, 38, 7, 104, 255, 108, 208, 1, 184, 169, 2, 9, 1, 84, 62, 77,
107, 119, 22, 148, 222, 6, 128, 128, 211, 14, 242, 200, 16, 137, 239, 249, 55, 59, 16,
193, 192, 140, 240, 153, 129, 228, 115, 222, 247, 41, 128, 219, 241, 249, 198, 214, 75,
31, 82, 225, 1, 158, 183, 226, 220, 126, 228, 191, 211, 79, 43, 220, 95, 124, 109, 14,
162, 170, 68, 37, 62, 21, 139, 182, 246, 152, 36, 156, 172, 197, 20, 145, 85, 9, 8,
106, 237, 112, 63, 189, 172, 145, 49, 234, 68, 152, 200, 241, 0, 37,
];
let prunable_blob: Vec<u8> = vec![
1, 113, 10, 7, 87, 70, 119, 97, 244, 126, 155, 133, 254, 167, 60, 204, 134, 45, 71, 17,
87, 21, 252, 8, 218, 233, 219, 192, 84, 181, 196, 74, 213, 2, 246, 222, 66, 45, 152,
159, 156, 19, 224, 251, 110, 154, 188, 91, 129, 53, 251, 82, 134, 46, 93, 119, 136, 35,
13, 190, 235, 231, 44, 183, 134, 221, 12, 131, 222, 209, 246, 52, 14, 33, 94, 173, 251,
233, 18, 154, 91, 72, 229, 180, 43, 35, 152, 130, 38, 82, 56, 179, 36, 168, 54, 41, 62,
49, 208, 35, 245, 29, 27, 81, 72, 140, 104, 4, 59, 22, 120, 252, 67, 197, 130, 245, 93,
100, 129, 134, 19, 137, 228, 237, 166, 89, 5, 42, 1, 110, 139, 39, 81, 89, 159, 40,
239, 211, 251, 108, 82, 68, 125, 182, 75, 152, 129, 74, 73, 208, 215, 15, 63, 3, 106,
168, 35, 56, 126, 66, 2, 189, 53, 201, 77, 187, 102, 127, 154, 60, 209, 33, 217, 109,
81, 217, 183, 252, 114, 90, 245, 21, 229, 174, 254, 177, 147, 130, 74, 49, 118, 203,
14, 7, 118, 221, 81, 181, 78, 97, 224, 76, 160, 134, 73, 206, 204, 199, 201, 30, 201,
77, 4, 78, 237, 167, 76, 92, 104, 247, 247, 203, 141, 243, 72, 52, 83, 61, 35, 147,
231, 124, 21, 115, 81, 83, 67, 222, 61, 225, 171, 66, 243, 185, 195, 51, 72, 243, 80,
104, 4, 166, 54, 199, 235, 193, 175, 4, 242, 42, 146, 170, 90, 212, 101, 208, 113, 58,
65, 121, 55, 179, 206, 92, 50, 94, 171, 33, 67, 108, 220, 19, 193, 155, 30, 58, 46, 9,
227, 48, 246, 187, 82, 230, 61, 64, 95, 197, 183, 150, 62, 203, 252, 36, 157, 135, 160,
120, 189, 52, 94, 186, 93, 5, 36, 120, 160, 62, 254, 178, 101, 11, 228, 63, 128, 249,
182, 56, 100, 9, 5, 2, 81, 243, 229, 245, 43, 234, 35, 216, 212, 46, 165, 251, 183,
133, 10, 76, 172, 95, 106, 231, 13, 216, 222, 15, 92, 122, 103, 68, 238, 190, 108, 124,
138, 62, 255, 243, 22, 209, 2, 138, 45, 178, 101, 240, 18, 186, 71, 239, 137, 191, 134,
128, 221, 181, 173, 242, 111, 117, 45, 255, 138, 101, 79, 242, 42, 4, 144, 245, 193,
79, 14, 44, 201, 223, 0, 193, 123, 75, 155, 140, 248, 0, 226, 246, 230, 126, 7, 32,
107, 173, 193, 206, 184, 11, 33, 148, 104, 32, 79, 149, 71, 68, 150, 6, 47, 90, 231,
151, 14, 121, 196, 169, 249, 117, 154, 167, 139, 103, 62, 97, 250, 131, 160, 92, 239,
18, 236, 110, 184, 102, 30, 194, 175, 243, 145, 169, 183, 163, 141, 244, 186, 172, 251,
3, 78, 165, 33, 12, 2, 136, 180, 178, 83, 117, 0, 184, 170, 255, 69, 131, 123, 8, 212,
158, 162, 119, 137, 146, 63, 95, 133, 186, 91, 255, 152, 187, 107, 113, 147, 51, 219,
207, 5, 160, 169, 97, 9, 1, 202, 152, 186, 128, 160, 110, 120, 7, 176, 103, 87, 30,
137, 240, 67, 55, 79, 147, 223, 45, 177, 210, 101, 225, 22, 25, 129, 111, 101, 21, 213,
20, 254, 36, 57, 67, 70, 93, 192, 11, 180, 75, 99, 185, 77, 75, 74, 63, 182, 183, 208,
16, 69, 237, 96, 76, 96, 212, 242, 6, 169, 14, 250, 168, 129, 18, 141, 240, 101, 196,
96, 120, 88, 90, 51, 77, 12, 133, 212, 192, 107, 131, 238, 34, 237, 93, 157, 108, 13,
255, 187, 163, 106, 148, 108, 105, 244, 243, 174, 189, 180, 48, 102, 57, 170, 118, 211,
110, 126, 222, 165, 93, 36, 157, 90, 14, 135, 184, 197, 185, 7, 99, 199, 224, 225, 243,
212, 116, 149, 137, 186, 16, 196, 73, 23, 11, 248, 248, 67, 167, 149, 154, 64, 76, 218,
119, 135, 239, 34, 48, 66, 57, 109, 246, 3, 141, 169, 42, 157, 222, 21, 40, 183, 168,
97, 195, 106, 244, 229, 61, 122, 136, 59, 255, 120, 86, 30, 63, 226, 18, 65, 218, 188,
195, 217, 85, 12, 211, 221, 188, 27, 8, 98, 103, 211, 213, 217, 65, 82, 229, 145, 80,
147, 220, 57, 143, 20, 189, 253, 106, 13, 21, 170, 60, 24, 48, 162, 234, 0, 240, 226,
4, 28, 76, 93, 56, 3, 187, 223, 58, 31, 184, 58, 234, 198, 140, 223, 217, 1, 147, 94,
218, 199, 154, 121, 137, 44, 229, 0, 1, 10, 133, 250, 140, 64, 150, 89, 64, 112, 178,
221, 87, 19, 24, 104, 252, 28, 65, 207, 28, 195, 217, 73, 12, 16, 83, 55, 199, 84, 117,
175, 123, 13, 234, 10, 54, 63, 245, 161, 74, 235, 92, 189, 247, 47, 62, 176, 41, 159,
40, 250, 116, 63, 33, 193, 78, 72, 29, 215, 9, 191, 233, 243, 87, 14, 195, 7, 89, 101,
0, 28, 0, 234, 205, 59, 142, 119, 119, 52, 143, 80, 151, 211, 184, 235, 98, 222, 206,
170, 166, 4, 155, 3, 235, 26, 62, 8, 171, 19, 14, 53, 245, 77, 114, 175, 246, 170, 139,
227, 212, 141, 72, 223, 134, 63, 91, 26, 12, 78, 253, 198, 162, 152, 202, 207, 170,
254, 8, 4, 4, 175, 207, 84, 10, 108, 179, 157, 132, 110, 76, 201, 247, 227, 158, 106,
59, 41, 206, 229, 128, 2, 60, 203, 65, 71, 160, 232, 186, 227, 51, 12, 142, 85, 93, 89,
234, 236, 157, 230, 247, 167, 99, 7, 37, 146, 13, 53, 39, 255, 209, 177, 179, 17, 131,
59, 16, 75, 180, 21, 119, 88, 4, 12, 49, 140, 3, 110, 235, 231, 92, 13, 41, 137, 21,
37, 46, 138, 44, 250, 44, 161, 179, 114, 94, 63, 207, 192, 81, 234, 35, 125, 54, 2,
214, 10, 57, 116, 154, 150, 147, 223, 232, 36, 108, 152, 145, 157, 132, 190, 103, 233,
155, 141, 243, 249, 120, 72, 168, 14, 196, 35, 54, 107, 167, 218, 209, 1, 209, 197,
187, 242, 76, 86, 229, 114, 131, 196, 69, 171, 118, 28, 51, 192, 146, 14, 140, 84, 66,
155, 237, 194, 167, 121, 160, 166, 198, 166, 57, 13, 66, 162, 234, 148, 102, 133, 111,
18, 166, 77, 156, 75, 84, 220, 80, 35, 81, 141, 23, 197, 162, 23, 167, 187, 187, 187,
137, 184, 96, 140, 162, 6, 49, 63, 39, 84, 107, 85, 202, 168, 51, 194, 214, 132, 253,
253, 189, 231, 1, 226, 118, 104, 84, 147, 244, 58, 233, 250, 66, 26, 109, 223, 34, 2,
2, 112, 141, 147, 230, 134, 73, 45, 105, 180, 223, 52, 95, 40, 235, 209, 50, 67, 193,
22, 176, 176, 128, 140, 238, 252, 129, 220, 175, 79, 133, 12, 123, 209, 64, 5, 160, 39,
47, 66, 122, 245, 65, 102, 133, 58, 74, 138, 153, 217, 48, 59, 84, 135, 117, 92, 131,
44, 109, 40, 105, 69, 29, 14, 142, 71, 87, 112, 68, 134, 0, 14, 158, 14, 68, 15, 180,
150, 108, 49, 196, 94, 82, 27, 208, 163, 103, 81, 85, 124, 61, 242, 151, 29, 74, 87,
134, 166, 145, 186, 110, 207, 162, 99, 92, 133, 121, 137, 124, 90, 134, 5, 249, 231,
181, 222, 38, 170, 141, 113, 204, 172, 169, 173, 63, 81, 170, 76,
];
let mut tx_blob: Vec<u8> = Vec::new();
tx_blob.append(&mut pruned_p_blob);
tx_blob.append(&mut prunable_blob.clone());
let mut buf = Vec::new();
#[allow(clippy::expect_used)]
let tx: monero::Transaction =
monero::consensus::encode::deserialize(&tx_blob).expect("failed to deserialize");
#[allow(clippy::expect_used)]
get_transaction_prunable_blob(&tx, &mut buf).expect("failed to get prunable blob");
assert_eq!(prunable_blob, buf);
}
}

View file

@@ -53,7 +53,10 @@ impl<W: AsyncWrite + std::marker::Unpin> Sink<Bucket> for BucketSink<W> {
Ok(())
}
fn poll_flush(self: Pin<&mut Self>, cx: &mut std::task::Context<'_>) -> Poll<Result<(), Self::Error>> {
fn poll_flush(
self: Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> Poll<Result<(), Self::Error>> {
let this = self.project();
let mut w = this.writer;
let buffer = this.buffer;
@@ -74,17 +77,20 @@ impl<W: AsyncWrite + std::marker::Unpin> Sink<Bucket> for BucketSink<W> {
} else {
buffer[0].advance(len);
}
},
}
}
} else {
return Poll::Ready(Ok(()));
}
},
}
}
}
}
fn poll_close(self: Pin<&mut Self>, cx: &mut std::task::Context<'_>) -> Poll<Result<(), Self::Error>> {
fn poll_close(
self: Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> Poll<Result<(), Self::Error>> {
ready!(self.project().writer.poll_close(cx))?;
Poll::Ready(Ok(()))
}

View file

@@ -46,7 +46,10 @@ impl BucketDecoder {
/// Tries to decode a `Bucket` from the given buffer, returning the decoded `Bucket` and the
/// number of bytes consumed from the buffer.
pub fn try_decode_bucket(&mut self, mut buf: &[u8]) -> Result<(Option<Bucket>, usize), BucketError> {
pub fn try_decode_bucket(
&mut self,
mut buf: &[u8],
) -> Result<(Option<Bucket>, usize), BucketError> {
let mut len = 0;
// first we decode header
@@ -142,7 +145,7 @@ impl<S: AsyncRead + std::marker::Unpin> Stream for BucketStream<S> {
} else {
continue;
}
},
}
}
}
}

View file

@@ -130,7 +130,12 @@ impl TryInto<MessageType> for header::Flags {
/// A levin body
pub trait LevinBody: Sized {
/// Decodes the message from the data in the header
fn decode_message(buf: &[u8], typ: MessageType, have_to_return: bool, command: u32) -> Result<Self, BucketError>;
fn decode_message(
buf: &[u8],
typ: MessageType,
have_to_return: bool,
command: u32,
) -> Result<Self, BucketError>;
/// Encodes the message
///

View file

@@ -41,7 +41,10 @@ pub struct MessageSink<W: AsyncWrite + std::marker::Unpin, E: LevinBody> {
impl<W: AsyncWrite + std::marker::Unpin, E: LevinBody> Sink<E> for MessageSink<W, E> {
type Error = BucketError;
fn poll_ready(self: Pin<&mut Self>, cx: &mut std::task::Context<'_>) -> Poll<Result<(), Self::Error>> {
fn poll_ready(
self: Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> Poll<Result<(), Self::Error>> {
self.project().bucket_sink.poll_ready(cx)
}
@@ -60,11 +63,17 @@ impl<W: AsyncWrite + std::marker::Unpin, E: LevinBody> Sink<E> for MessageSink<W
self.project().bucket_sink.start_send(bucket)
}
fn poll_flush(self: Pin<&mut Self>, cx: &mut std::task::Context<'_>) -> Poll<Result<(), Self::Error>> {
fn poll_flush(
self: Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> Poll<Result<(), Self::Error>> {
self.project().bucket_sink.poll_flush(cx)
}
fn poll_close(self: Pin<&mut Self>, cx: &mut std::task::Context<'_>) -> Poll<Result<(), Self::Error>> {
fn poll_close(
self: Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> Poll<Result<(), Self::Error>> {
self.project().bucket_sink.poll_close(cx)
}
}

View file

@@ -52,9 +52,13 @@ impl<D: LevinBody, S: AsyncRead + std::marker::Unpin> MessageStream<D, S> {
impl<D: LevinBody, S: AsyncRead + std::marker::Unpin> Stream for MessageStream<D, S> {
type Item = Result<D, BucketError>;
fn poll_next(self: std::pin::Pin<&mut Self>, cx: &mut std::task::Context<'_>) -> Poll<Option<Self::Item>> {
fn poll_next(
self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> Poll<Option<Self::Item>> {
let this = self.project();
match ready!(this.bucket_stream.poll_next(cx)).expect("BucketStream will never return None") {
match ready!(this.bucket_stream.poll_next(cx)).expect("BucketStream will never return None")
{
Err(e) => Poll::Ready(Some(Err(e))),
Ok(bucket) => {
if bucket.header.signature != LEVIN_SIGNATURE {
@@ -62,7 +66,9 @@ impl<D: LevinBody, S: AsyncRead + std::marker::Unpin> Stream for MessageStream<D
}
if bucket.header.protocol_version != PROTOCOL_VERSION {
return Err(BucketError::UnknownProtocolVersion(bucket.header.protocol_version))?;
return Err(BucketError::UnknownProtocolVersion(
bucket.header.protocol_version,
))?;
}
if bucket.header.return_code < 0
@@ -82,7 +88,7 @@ impl<D: LevinBody, S: AsyncRead + std::marker::Unpin> Stream for MessageStream<D
bucket.header.have_to_return_data,
bucket.header.command,
)))
},
}
}
}
}

View file

@@ -25,15 +25,17 @@ macro_rules! get_val_from_map {
$map.get($field_name)
.ok_or_else(|| serde::de::Error::missing_field($field_name))?
.$get_fn()
.ok_or_else(|| serde::de::Error::invalid_type($map.get_value_type_as_unexpected(), &$expected_ty))?
.ok_or_else(|| {
serde::de::Error::invalid_type($map.get_value_type_as_unexpected(), &$expected_ty)
})?
};
}
macro_rules! get_internal_val {
($value:ident, $get_fn:ident, $expected_ty:expr) => {
$value
.$get_fn()
.ok_or_else(|| serde::de::Error::invalid_type($value.get_value_type_as_unexpected(), &$expected_ty))?
$value.$get_fn().ok_or_else(|| {
serde::de::Error::invalid_type($value.get_value_type_as_unexpected(), &$expected_ty)
})?
};
}

View file

@@ -117,7 +117,12 @@ fn decode_message<T: serde::de::DeserializeOwned>(buf: &[u8]) -> Result<T, Bucke
}
impl levin::LevinBody for Message {
fn decode_message(buf: &[u8], typ: MessageType, have_to_return: bool, command: u32) -> Result<Self, BucketError> {
fn decode_message(
buf: &[u8],
typ: MessageType,
have_to_return: bool,
command: u32,
) -> Result<Self, BucketError> {
let command = P2pCommand::try_from(command)?;
Ok(match typ {
@@ -130,37 +135,57 @@ impl levin::LevinBody for Message {
return Err(levin::BucketError::FailedToDecodeBucketBody(
"Invalid header flag/command/have_to_return combination".to_string(),
))
},
}
}),
MessageType::Request if have_to_return => Message::Request(match command {
P2pCommand::Handshake => MessageRequest::Handshake(decode_message(buf)?),
P2pCommand::TimedSync => MessageRequest::TimedSync(decode_message(buf)?),
P2pCommand::Ping => MessageRequest::Ping(admin::PingRequest),
P2pCommand::SupportFlags => MessageRequest::SupportFlags(admin::SupportFlagsRequest),
P2pCommand::SupportFlags => {
MessageRequest::SupportFlags(admin::SupportFlagsRequest)
}
_ => {
return Err(levin::BucketError::FailedToDecodeBucketBody(
"Invalid header flag/command/have_to_return combination".to_string(),
))
},
}
}),
MessageType::Request if !have_to_return => Message::Notification(Box::new(match command {
P2pCommand::NewBlock => MessageNotification::NewBlock(decode_message(buf)?),
P2pCommand::NewTransactions => MessageNotification::NewTransactions(decode_message(buf)?),
P2pCommand::RequestGetObject => MessageNotification::RequestGetObject(decode_message(buf)?),
P2pCommand::ResponseGetObject => MessageNotification::ResponseGetObject(decode_message(buf)?),
P2pCommand::RequestChain => MessageNotification::RequestChain(decode_message(buf)?),
P2pCommand::ResponseChainEntry => MessageNotification::ResponseChainEntry(decode_message(buf)?),
P2pCommand::NewFluffyBlock => MessageNotification::NewFluffyBlock(decode_message(buf)?),
P2pCommand::RequestFluffyMissingTx => MessageNotification::RequestFluffyMissingTx(decode_message(buf)?),
P2pCommand::GetTxPoolComplement => MessageNotification::GetTxPoolComplement(decode_message(buf)?),
_ => {
return Err(levin::BucketError::FailedToDecodeBucketBody(
"Invalid header flag/command/have_to_return combination".to_string(),
))
},
})),
MessageType::Request if !have_to_return => {
Message::Notification(Box::new(match command {
P2pCommand::NewBlock => MessageNotification::NewBlock(decode_message(buf)?),
P2pCommand::NewTransactions => {
MessageNotification::NewTransactions(decode_message(buf)?)
}
P2pCommand::RequestGetObject => {
MessageNotification::RequestGetObject(decode_message(buf)?)
}
P2pCommand::ResponseGetObject => {
MessageNotification::ResponseGetObject(decode_message(buf)?)
}
P2pCommand::RequestChain => {
MessageNotification::RequestChain(decode_message(buf)?)
}
P2pCommand::ResponseChainEntry => {
MessageNotification::ResponseChainEntry(decode_message(buf)?)
}
P2pCommand::NewFluffyBlock => {
MessageNotification::NewFluffyBlock(decode_message(buf)?)
}
P2pCommand::RequestFluffyMissingTx => {
MessageNotification::RequestFluffyMissingTx(decode_message(buf)?)
}
P2pCommand::GetTxPoolComplement => {
MessageNotification::GetTxPoolComplement(decode_message(buf)?)
}
_ => {
return Err(levin::BucketError::FailedToDecodeBucketBody(
"Invalid header flag/command/have_to_return combination".to_string(),
))
}
}))
}
_ => unreachable!("All types are handled"),
})
}
@@ -181,21 +206,21 @@ impl levin::LevinBody for Message {
MessageRequest::Handshake(handshake) => {
command = P2pCommand::Handshake;
bytes = encode_message(handshake)?;
},
}
MessageRequest::TimedSync(timedsync) => {
command = P2pCommand::TimedSync;
bytes = encode_message(timedsync)?;
},
}
MessageRequest::Ping(_) => {
command = P2pCommand::Ping;
bytes = Vec::new();
},
}
MessageRequest::SupportFlags(_) => {
command = P2pCommand::SupportFlags;
bytes = Vec::new();
},
}
}
},
}
Message::Response(res) => {
return_code = 1;
have_to_return_data = false;
@@ -204,21 +229,21 @@ impl levin::LevinBody for Message {
MessageResponse::Handshake(handshake) => {
command = P2pCommand::Handshake;
bytes = encode_message(handshake)?;
},
}
MessageResponse::TimedSync(timed_sync) => {
command = P2pCommand::TimedSync;
bytes = encode_message(timed_sync)?;
},
}
MessageResponse::Ping(ping) => {
command = P2pCommand::Ping;
bytes = encode_message(ping)?;
},
}
MessageResponse::SupportFlags(support_flags) => {
command = P2pCommand::SupportFlags;
bytes = encode_message(support_flags)?;
},
}
}
},
}
Message::Notification(noti) => {
return_code = 0;
have_to_return_data = false;
@@ -227,43 +252,49 @@ impl levin::LevinBody for Message {
MessageNotification::NewBlock(new_block) => {
command = P2pCommand::NewBlock;
bytes = encode_message(new_block)?;
},
}
MessageNotification::NewTransactions(new_txs) => {
command = P2pCommand::NewTransactions;
bytes = encode_message(new_txs)?;
},
}
MessageNotification::RequestGetObject(obj) => {
command = P2pCommand::RequestGetObject;
bytes = encode_message(obj)?;
},
}
MessageNotification::ResponseGetObject(obj) => {
command = P2pCommand::ResponseGetObject;
bytes = encode_message(obj)?;
},
}
MessageNotification::RequestChain(chain) => {
command = P2pCommand::RequestChain;
bytes = encode_message(chain)?;
},
}
MessageNotification::ResponseChainEntry(chain_entry) => {
command = P2pCommand::ResponseChainEntry;
bytes = encode_message(chain_entry)?;
},
}
MessageNotification::NewFluffyBlock(fluffy_block) => {
command = P2pCommand::NewFluffyBlock;
bytes = encode_message(fluffy_block)?;
},
}
MessageNotification::RequestFluffyMissingTx(txs) => {
command = P2pCommand::RequestFluffyMissingTx;
bytes = encode_message(txs)?;
},
}
MessageNotification::GetTxPoolComplement(txpool) => {
command = P2pCommand::GetTxPoolComplement;
bytes = encode_message(txpool)?;
},
}
}
},
}
}
return Ok((return_code, command.into(), have_to_return_data, flag, bytes.into()));
return Ok((
return_code,
command.into(),
have_to_return_data,
flag,
bytes.into(),
));
}
}
@@ -275,25 +306,30 @@ mod tests {
#[test]
fn decode_handshake_request() {
let buf = [
1, 17, 1, 1, 1, 1, 2, 1, 1, 12, 9, 110, 111, 100, 101, 95, 100, 97, 116, 97, 12, 24, 7, 109, 121, 95, 112,
111, 114, 116, 6, 168, 70, 0, 0, 10, 110, 101, 116, 119, 111, 114, 107, 95, 105, 100, 10, 64, 18, 48, 241,
113, 97, 4, 65, 97, 23, 49, 0, 130, 22, 161, 161, 16, 7, 112, 101, 101, 114, 95, 105, 100, 5, 153, 5, 227,
61, 188, 214, 159, 10, 13, 115, 117, 112, 112, 111, 114, 116, 95, 102, 108, 97, 103, 115, 6, 1, 0, 0, 0, 8,
114, 112, 99, 95, 112, 111, 114, 116, 7, 0, 0, 20, 114, 112, 99, 95, 99, 114, 101, 100, 105, 116, 115, 95,
112, 101, 114, 95, 104, 97, 115, 104, 6, 0, 0, 0, 0, 12, 112, 97, 121, 108, 111, 97, 100, 95, 100, 97, 116,
97, 12, 24, 21, 99, 117, 109, 117, 108, 97, 116, 105, 118, 101, 95, 100, 105, 102, 102, 105, 99, 117, 108,
116, 121, 5, 59, 90, 163, 153, 0, 0, 0, 0, 27, 99, 117, 109, 117, 108, 97, 116, 105, 118, 101, 95, 100,
105, 102, 102, 105, 99, 117, 108, 116, 121, 95, 116, 111, 112, 54, 52, 5, 0, 0, 0, 0, 0, 0, 0, 0, 14, 99,
117, 114, 114, 101, 110, 116, 95, 104, 101, 105, 103, 104, 116, 5, 190, 50, 0, 0, 0, 0, 0, 0, 12, 112, 114,
117, 110, 105, 110, 103, 95, 115, 101, 101, 100, 6, 0, 0, 0, 0, 6, 116, 111, 112, 95, 105, 100, 10, 128,
230, 40, 186, 45, 79, 79, 224, 164, 117, 133, 84, 130, 185, 94, 4, 1, 57, 126, 74, 145, 238, 238, 122, 44,
214, 85, 129, 237, 230, 14, 67, 218, 11, 116, 111, 112, 95, 118, 101, 114, 115, 105, 111, 110, 8, 1, 18,
108, 111, 99, 97, 108, 95, 112, 101, 101, 114, 108, 105, 115, 116, 95, 110, 101, 119, 140, 4, 24, 3, 97,
100, 114, 12, 8, 4, 116, 121, 112, 101, 8, 1, 4, 97, 100, 100, 114, 12, 8, 4, 109, 95, 105, 112, 6, 225,
219, 21, 0, 6, 109, 95, 112, 111, 114, 116, 7, 0, 0, 2, 105, 100, 5, 0, 0, 0, 0, 0, 0, 0, 0, 9, 108, 97,
115, 116, 95, 115, 101, 101, 110, 1, 0, 0, 0, 0, 0, 0, 0, 0, 12, 112, 114, 117, 110, 105, 110, 103, 95,
115, 101, 101, 100, 6, 0, 0, 0, 0, 8, 114, 112, 99, 95, 112, 111, 114, 116, 7, 0, 0, 20, 114, 112, 99, 95,
99, 114, 101, 100, 105, 116, 115, 95, 112, 101, 114, 95, 104, 97, 115, 104, 6, 0, 0, 0, 0,
1, 17, 1, 1, 1, 1, 2, 1, 1, 12, 9, 110, 111, 100, 101, 95, 100, 97, 116, 97, 12, 24, 7,
109, 121, 95, 112, 111, 114, 116, 6, 168, 70, 0, 0, 10, 110, 101, 116, 119, 111, 114,
107, 95, 105, 100, 10, 64, 18, 48, 241, 113, 97, 4, 65, 97, 23, 49, 0, 130, 22, 161,
161, 16, 7, 112, 101, 101, 114, 95, 105, 100, 5, 153, 5, 227, 61, 188, 214, 159, 10,
13, 115, 117, 112, 112, 111, 114, 116, 95, 102, 108, 97, 103, 115, 6, 1, 0, 0, 0, 8,
114, 112, 99, 95, 112, 111, 114, 116, 7, 0, 0, 20, 114, 112, 99, 95, 99, 114, 101, 100,
105, 116, 115, 95, 112, 101, 114, 95, 104, 97, 115, 104, 6, 0, 0, 0, 0, 12, 112, 97,
121, 108, 111, 97, 100, 95, 100, 97, 116, 97, 12, 24, 21, 99, 117, 109, 117, 108, 97,
116, 105, 118, 101, 95, 100, 105, 102, 102, 105, 99, 117, 108, 116, 121, 5, 59, 90,
163, 153, 0, 0, 0, 0, 27, 99, 117, 109, 117, 108, 97, 116, 105, 118, 101, 95, 100, 105,
102, 102, 105, 99, 117, 108, 116, 121, 95, 116, 111, 112, 54, 52, 5, 0, 0, 0, 0, 0, 0,
0, 0, 14, 99, 117, 114, 114, 101, 110, 116, 95, 104, 101, 105, 103, 104, 116, 5, 190,
50, 0, 0, 0, 0, 0, 0, 12, 112, 114, 117, 110, 105, 110, 103, 95, 115, 101, 101, 100, 6,
0, 0, 0, 0, 6, 116, 111, 112, 95, 105, 100, 10, 128, 230, 40, 186, 45, 79, 79, 224,
164, 117, 133, 84, 130, 185, 94, 4, 1, 57, 126, 74, 145, 238, 238, 122, 44, 214, 85,
129, 237, 230, 14, 67, 218, 11, 116, 111, 112, 95, 118, 101, 114, 115, 105, 111, 110,
8, 1, 18, 108, 111, 99, 97, 108, 95, 112, 101, 101, 114, 108, 105, 115, 116, 95, 110,
101, 119, 140, 4, 24, 3, 97, 100, 114, 12, 8, 4, 116, 121, 112, 101, 8, 1, 4, 97, 100,
100, 114, 12, 8, 4, 109, 95, 105, 112, 6, 225, 219, 21, 0, 6, 109, 95, 112, 111, 114,
116, 7, 0, 0, 2, 105, 100, 5, 0, 0, 0, 0, 0, 0, 0, 0, 9, 108, 97, 115, 116, 95, 115,
101, 101, 110, 1, 0, 0, 0, 0, 0, 0, 0, 0, 12, 112, 114, 117, 110, 105, 110, 103, 95,
115, 101, 101, 100, 6, 0, 0, 0, 0, 8, 114, 112, 99, 95, 112, 111, 114, 116, 7, 0, 0,
20, 114, 112, 99, 95, 99, 114, 101, 100, 105, 116, 115, 95, 112, 101, 114, 95, 104, 97,
115, 104, 6, 0, 0, 0, 0,
];
let message = Message::decode_message(&buf, MessageType::Request, true, 1001);

File diff suppressed because it is too large

File diff suppressed because it is too large

View file

@@ -129,7 +129,11 @@ impl<'de> Deserialize<'de> for NetworkAddress {
Ok(match addr_type {
1 => NetworkAddress::IPv4(IPv4Address::from_value(get_field_from_map!(value, "addr"))?),
2 => NetworkAddress::IPv6(IPv6Address::from_value(get_field_from_map!(value, "addr"))?),
_ => return Err(de::Error::custom("Network address type currently unsupported")),
_ => {
return Err(de::Error::custom(
"Network address type currently unsupported",
))
}
})
}
}
@@ -144,11 +148,11 @@ impl Serialize for NetworkAddress {
NetworkAddress::IPv4(v) => {
state.serialize_field("type", &1_u8)?;
state.serialize_field("addr", v)?;
},
}
NetworkAddress::IPv6(v) => {
state.serialize_field("type", &2_u8)?;
state.serialize_field("addr", v)?;
},
}
}
state.end()
}

View file

@@ -1,8 +0,0 @@
indent_style = "Block"
reorder_imports = false
max_width = 120
comment_width = 100
match_block_trailing_comma = true
wrap_comments = true
edition = "2021"
version = "Two"

View file

@@ -1,3 +0,0 @@
pub mod blocks {
}

View file

@@ -1 +0,0 @@

View file

@@ -1 +0,0 @@
pub mod difficulty;

View file

@@ -1 +0,0 @@

View file

@@ -1 +0,0 @@
pub mod enums;

View file

@@ -1,9 +1,6 @@
#![forbid(unsafe_code)]
#![allow(non_camel_case_types)]
#![deny(clippy::expect_used, clippy::panic)]
pub mod cryptonote_basic;
pub mod cryptonote_protocol;
fn main() {
println!("Hello, world!");
}