book

hinto.janai 2024-10-07 20:33:44 -04:00
parent 24d176ff14
commit 6525307744
GPG key ID: D47CE05FA175A499
18 changed files with 156 additions and 258 deletions

Cargo.lock generated

@@ -639,7 +639,6 @@ dependencies = [
"cfg-if",
"cuprate-benchmark-example",
"cuprate-benchmark-lib",
"cuprate-database",
"serde",
"serde_json",
]
@@ -746,7 +745,6 @@ dependencies = [
"cuprate-json-rpc",
"function_name",
"serde_json",
"tempfile",
]
[[package]]


@@ -1,83 +1,5 @@
# Benches
This directory contains Cuprate's benchmarks and benchmarking utilities.
- [1. File layout and purpose](#1-file-layout-and-purpose)
- [2. Harness](#2-harness)
- [2.1 Creating a harness benchmark](#21-creating-a-harness-benchmark)
- [2.2 Running a harness benchmark](#22-running-a-harness-benchmark)
- [3. Criterion](#3-criterion)
- [3.1 Creating a Criterion benchmark](#31-creating-a-criterion-benchmark)
- [3.2 Running a Criterion benchmark](#32-running-a-criterion-benchmark)
## 1. File layout and purpose
This directory is split into 4 categories:
| Sub-directory | Purpose |
|---------------|---------|
| `harness/src` | Cuprate's custom benchmarking harness **binary**
| `harness/lib` | Cuprate's custom benchmarking harness **library**
| `harness/*` | Macro-benchmarks for whole crates or sub-systems (using Cuprate's custom benchmarking harness)
| `criterion/*` | Micro-benchmarks for crates (e.g. timings for a single function)
## 2. Harness
The harness is:
- `cuprate-harness`; the actual binary crate that is run
- `cuprate-harness-lib`; the library that other crates hook into
The purpose of the harness is very simple:
1. Set up the benchmark
1. Start timer
1. Run benchmark
1. Output data
The harness runs the benchmarks found in `harness/`.
Benchmarks "plug in" to the harness simply by implementing `cuprate_harness_lib::Benchmark`.
See `cuprate-harness-lib` crate documentation for a user-guide:
```bash
cargo doc --open --package cuprate-harness-lib
```
### 2.1 Creating a harness benchmark
1. Create a new crate inside `benches/harness` (consider copying `benches/harness/test` as a base)
2. Pull in `cuprate_harness_lib` as a dependency
3. Implement `cuprate_harness_lib::Benchmark`
4. Add a feature inside `cuprate_harness` for your benchmark
### 2.2 Running a harness benchmark
After your benchmark is implemented, run this command:
```bash
cargo run --release --package cuprate-harness --features $YOUR_BENCHMARK_CRATE_FEATURE
```
For example, to run the test benchmark:
```bash
cargo run --release --package cuprate-harness --features test
```
## 3. Criterion
Each sub-directory in here is a crate that uses [Criterion](https://bheisler.github.io/criterion.rs/book) for timing single functions and/or groups of functions.
They are generally small in scope.
See [`criterion/cuprate-json-rpc`](https://github.com/Cuprate/cuprate/tree/main/benches/criterion/cuprate-json-rpc) for an example.
### 3.1 Creating a Criterion benchmark
1. Copy [`criterion/test`](https://github.com/Cuprate/cuprate/tree/main/benches/criterion) as a base
2. Read the `Getting Started` section of <https://bheisler.github.io/criterion.rs/book>
3. Get started
### 3.2 Running a Criterion benchmark
To run all Criterion benchmarks, run this from the repository root:
```bash
cargo bench
```
To run specific package(s), use:
```bash
cargo bench --package $CRITERION_BENCHMARK_CRATE_NAME
```
For example:
```bash
cargo bench --package cuprate-criterion-json-rpc
```
See the [`Benchmarking` section in the Architecture book](https://architecture.cuprate.org/benchmarking/intro.html)
to see how to create and run these benchmarks.


@@ -9,15 +9,16 @@ repository = "https://github.com/Cuprate/cuprate/tree/main/benches/benchmark/bi
keywords = ["cuprate", "benchmarking", "binary"]
[features]
default = ["example"]
# All new benchmarks should be added here!
all = ["example"]
default = []
json = []
example = ["dep:cuprate-benchmark-example"]
database = ["dep:cuprate-database"]
[dependencies]
cuprate-benchmark-lib = { path = "../lib" }
cuprate-benchmark-example = { path = "../example", optional = true }
cuprate-database = { path = "../../../storage/database", optional = true }
cfg-if = { workspace = true }
serde = { workspace = true, features = ["derive"] }


@@ -14,21 +14,20 @@ use cfg_if::cfg_if;
/// 1. Run all enabled benchmarks
/// 2. Record benchmark timings
/// 3. Print timing data
///
/// To add a new benchmark to be run here:
/// 1. Copy + paste a `cfg_if` block
/// 2. Change it to your benchmark's feature flag
/// 3. Change it to your benchmark's type
fn main() {
let mut timings = timings::Timings::new();
cfg_if! {
if #[cfg(not(any(feature = "database", feature = "example")))] {
if #[cfg(not(any(feature = "example")))] {
compile_error!("[cuprate_benchmark]: no feature specified. Use `--features $BENCHMARK_FEATURE` when building.");
}
}
cfg_if! {
if #[cfg(feature = "database")] {
run::run_benchmark::<cuprate_benchmark_database::Benchmark>(&mut timings);
}
}
cfg_if! {
if #[cfg(feature = "example")] {
run::run_benchmark::<cuprate_benchmark_example::Example>(&mut timings);


@@ -12,10 +12,12 @@ keywords = ["cuprate", "json-rpc", "criterion", "benchmark"]
criterion = { workspace = true }
function_name = { workspace = true }
serde_json = { workspace = true, features = ["default"] }
tempfile = { workspace = true }
cuprate-json-rpc = { path = "../../../rpc/json-rpc" }
[[bench]]
name = "main"
harness = false
[lints]
workspace = true


@@ -1,78 +1,5 @@
//! TODO
//---------------------------------------------------------------------------------------------------- Lints
// Forbid lints.
// Our code, and code generated (e.g macros) cannot overrule these.
#![forbid(
// `unsafe` is allowed but it _must_ be
// commented with `SAFETY: reason`.
clippy::undocumented_unsafe_blocks,
// Never.
unused_unsafe,
redundant_semicolons,
unused_allocation,
coherence_leak_check,
while_true,
clippy::missing_docs_in_private_items,
// Maybe can be put into `#[deny]`.
unconditional_recursion,
for_loops_over_fallibles,
unused_braces,
unused_doc_comments,
unused_labels,
keyword_idents,
non_ascii_idents,
variant_size_differences,
single_use_lifetimes,
// Probably can be put into `#[deny]`.
future_incompatible,
let_underscore,
break_with_label_and_loop,
duplicate_macro_attributes,
exported_private_dependencies,
large_assignments,
overlapping_range_endpoints,
semicolon_in_expressions_from_macros,
noop_method_call,
unreachable_pub,
)]
// Deny lints.
// Some of these are `#[allow]`'ed on a per-case basis.
#![deny(
clippy::all,
clippy::correctness,
clippy::suspicious,
clippy::style,
clippy::complexity,
clippy::perf,
clippy::pedantic,
clippy::nursery,
clippy::cargo,
unused_mut,
missing_docs,
deprecated,
unused_comparisons,
nonstandard_style
)]
#![allow(unreachable_code, unused_variables, dead_code, unused_imports)] // TODO: remove
#![allow(
// FIXME: this lint affects crates outside of
// `database/` for some reason, allow for now.
clippy::cargo_common_metadata,
// FIXME: adding `#[must_use]` onto everything
// might just be more annoying than useful...
// although it is sometimes nice.
clippy::must_use_candidate,
// TODO: should be removed after all `todo!()`'s are gone.
clippy::diverging_sub_expression,
clippy::module_name_repetitions,
clippy::module_inception,
clippy::redundant_pub_crate,
clippy::option_if_let_else,
clippy::significant_drop_tightening,
)]
// Allow some lints when running in debug mode.
#![cfg_attr(debug_assertions, allow(clippy::todo, clippy::multiple_crate_versions))]
//! Benchmarks for `cuprate-json-rpc`.
#![allow(unused_crate_dependencies)]
mod response;


@@ -1,13 +1,12 @@
//! `trait Storable` benchmarks.
//! Benchmarks for [`Response`].
#![allow(unused_attributes, unused_crate_dependencies)]
//---------------------------------------------------------------------------------------------------- Import
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use function_name::named;
use serde_json::{from_str, to_string_pretty};
use cuprate_json_rpc::{Id, Response};
//---------------------------------------------------------------------------------------------------- Criterion
criterion_group! {
benches,
response_from_str_u8,
@@ -25,8 +24,7 @@ criterion_group! {
}
criterion_main!(benches);
//---------------------------------------------------------------------------------------------------- Deserialization
/// TODO
/// Generate `from_str` deserialization benchmark functions for [`Response`].
macro_rules! impl_from_str_benchmark {
(
$(
@@ -60,8 +58,7 @@ impl_from_str_benchmark! {
response_from_str_string_500_len => String => r#"{"jsonrpc":"2.0","id":123,"result":"helloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworldhelloworld"}"#,
}
//---------------------------------------------------------------------------------------------------- Serialization
/// TODO
/// Generate `to_string_pretty` serialization benchmark functions for [`Response`].
macro_rules! impl_to_string_pretty_benchmark {
(
$(


@@ -1,77 +1,2 @@
//! TODO
//---------------------------------------------------------------------------------------------------- Lints
// Forbid lints.
// Our code, and code generated (e.g macros) cannot overrule these.
#![forbid(
// `unsafe` is allowed but it _must_ be
// commented with `SAFETY: reason`.
clippy::undocumented_unsafe_blocks,
// Never.
unused_unsafe,
redundant_semicolons,
unused_allocation,
coherence_leak_check,
while_true,
clippy::missing_docs_in_private_items,
// Maybe can be put into `#[deny]`.
unconditional_recursion,
for_loops_over_fallibles,
unused_braces,
unused_doc_comments,
unused_labels,
keyword_idents,
non_ascii_idents,
variant_size_differences,
single_use_lifetimes,
// Probably can be put into `#[deny]`.
future_incompatible,
let_underscore,
break_with_label_and_loop,
duplicate_macro_attributes,
exported_private_dependencies,
large_assignments,
overlapping_range_endpoints,
semicolon_in_expressions_from_macros,
noop_method_call,
unreachable_pub,
)]
// Deny lints.
// Some of these are `#[allow]`'ed on a per-case basis.
#![deny(
clippy::all,
clippy::correctness,
clippy::suspicious,
clippy::style,
clippy::complexity,
clippy::perf,
clippy::pedantic,
clippy::nursery,
clippy::cargo,
unused_mut,
missing_docs,
deprecated,
unused_comparisons,
nonstandard_style
)]
#![allow(unreachable_code, unused_variables, dead_code, unused_imports)] // TODO: remove
#![allow(
// FIXME: this lint affects crates outside of
// `database/` for some reason, allow for now.
clippy::cargo_common_metadata,
// FIXME: adding `#[must_use]` onto everything
// might just be more annoying than useful...
// although it is sometimes nice.
clippy::must_use_candidate,
// TODO: should be removed after all `todo!()`'s are gone.
clippy::diverging_sub_expression,
clippy::module_name_repetitions,
clippy::module_inception,
clippy::redundant_pub_crate,
clippy::option_if_let_else,
clippy::significant_drop_tightening,
)]
// Allow some lints when running in debug mode.
#![cfg_attr(debug_assertions, allow(clippy::todo, clippy::multiple_crate_versions))]
//---------------------------------------------------------------------------------------------------- Modules
#![allow(unused_crate_dependencies, reason = "used in benchmarks")]


@@ -143,9 +143,16 @@
---
- [⚪️ Benchmarking](benchmarking/intro.md)
- [⚪️ Criterion](benchmarking/criterion.md)
- [⚪️ Harness](benchmarking/harness.md)
- [🟢 Benchmarking](benchmarking/intro.md)
- [🟢 Criterion](benchmarking/criterion/intro.md)
- [🟢 Creating](benchmarking/criterion/creating.md)
- [🟢 Running](benchmarking/criterion/running.md)
- [🟢 `cuprate-benchmark`](benchmarking/cuprate/intro.md)
- [🟢 Creating](benchmarking/cuprate/creating.md)
- [🟢 Running](benchmarking/cuprate/running.md)
---
- [⚪️ Testing](testing/intro.md)
- [⚪️ Monero data](testing/monero-data.md)
- [⚪️ RPC client](testing/rpc-client.md)


@@ -1 +0,0 @@
# ⚪️ Criterion


@@ -0,0 +1,10 @@
# Creating
Creating a new Criterion-based benchmarking crate for one of Cuprate's crates is relatively simple,
although it requires some knowledge of how to use Criterion first:
1. Read the `Getting Started` section of <https://bheisler.github.io/criterion.rs/book>
2. Copy [`benches/criterion/example`](https://github.com/Cuprate/cuprate/tree/main/benches/criterion/example) as a base
3. Get started
For a real example, see:
[`cuprate-criterion-json-rpc`](https://github.com/Cuprate/cuprate/tree/main/benches/criterion/cuprate-json-rpc).
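For orientation, a minimal Criterion benchmark file looks roughly like the sketch below; `fibonacci` is a placeholder function invented for this example, not a real Cuprate item:
```rust
// A minimal Criterion benchmark sketch (placeholder function, not Cuprate code).
use criterion::{black_box, criterion_group, criterion_main, Criterion};

/// The (placeholder) function being measured.
fn fibonacci(n: u64) -> u64 {
    (1..=n).fold((0, 1), |(a, b), _| (b, a + b)).0
}

/// Registers a single timing benchmark with Criterion.
fn bench_fibonacci(c: &mut Criterion) {
    c.bench_function("fibonacci 20", |b| {
        // `black_box` stops the compiler from optimizing the input away.
        b.iter(|| fibonacci(black_box(20)));
    });
}

criterion_group!(benches, bench_fibonacci);
criterion_main!(benches);
```
Note that a Criterion crate also needs a `[[bench]]` table with `harness = false` in its `Cargo.toml` (as in the example crate) so Cargo hands control to Criterion instead of the default test harness.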


@@ -0,0 +1,6 @@
# Criterion
Each sub-directory in [`benches/criterion/`](https://github.com/Cuprate/cuprate/tree/main/benches/criterion) is a crate that uses [Criterion](https://bheisler.github.io/criterion.rs/book) for timing single functions and/or groups of functions.
They are generally small in scope.
See [`benches/criterion/cuprate-json-rpc`](https://github.com/Cuprate/cuprate/tree/main/benches/criterion/cuprate-json-rpc) for an example.


@@ -0,0 +1,15 @@
# Running
To run all Criterion benchmarks, run this from the repository root:
```bash
cargo bench
```
To run specific package(s), use:
```bash
cargo bench --package $CRITERION_BENCHMARK_CRATE_NAME
```
For example:
```bash
cargo bench --package cuprate-criterion-json-rpc
```


@@ -0,0 +1,42 @@
# Creating
New benchmarks are plugged into `cuprate-benchmark` by:
1. Implementing `cuprate_benchmark_lib::Benchmark`
1. Registering the benchmark in the `cuprate_benchmark` binary
See [`benches/benchmark/example`](https://github.com/Cuprate/cuprate/tree/main/benches/benchmark/example)
for an example. For a real example, see:
[`cuprate-benchmark-database`](https://github.com/Cuprate/cuprate/tree/main/benches/benchmark/cuprate-database).
## Creating the benchmark crate
Before plugging into `cuprate-benchmark`, your actual benchmark crate must be created:
1. Create a new crate inside `benches/benchmark` (consider copying `benches/benchmark/example` as a base)
1. Pull in `cuprate_benchmark_lib` as a dependency
1. Create a benchmark
1. Implement `cuprate_benchmark_lib::Benchmark`
## `cuprate_benchmark_lib::Benchmark`
This is the trait that standardizes all benchmarks run under `cuprate-benchmark`.
It must be implemented by your benchmarking crate.
See `cuprate-benchmark-lib` crate documentation for a user-guide: <https://doc.cuprate.org/cuprate_benchmark_lib>.
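As a rough illustration only, a benchmark crate might look something like the sketch below; the trait items shown (`Input`, `setup`, `main`) are assumptions made for this example, so defer to the crate documentation above for the real definition:
```rust
// A hypothetical sketch of a crate plugging into `cuprate-benchmark`.
// NOTE: the trait items used here (`Input`, `setup`, `main`) are assumed for
// illustration; the real trait is defined in `cuprate-benchmark-lib`.
use cuprate_benchmark_lib::Benchmark;

/// The type this (hypothetical) crate exports for the harness to run.
pub struct MyBenchmark;

impl Benchmark for MyBenchmark {
    /// Data prepared before the timer starts (assumed associated type).
    type Input = Vec<u8>;

    /// Untimed setup: build whatever state the benchmark needs.
    fn setup() -> Self::Input {
        vec![0; 1024]
    }

    /// The timed portion of the benchmark (assumed method name).
    fn main(input: Self::Input) {
        let _sum: u64 = input.iter().map(|&b| u64::from(b)).sum();
    }
}
```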
## Adding a feature to `cuprate-benchmark`
After your benchmark's behavior is defined, it must be registered
in the binary that is actually run: `cuprate-benchmark`.
If your benchmark is new, add a new crate feature to [`cuprate-benchmark`'s Cargo.toml file](https://github.com/Cuprate/cuprate/tree/main/benches/benchmark/bin/Cargo.toml) with an optional dependency on your benchmarking crate.
## Adding to `cuprate-benchmark`'s `main()`
After adding your crate's feature, add a conditional block to the `main()` function
that runs the benchmark if that feature is enabled.
For example, if your crate's name is `egg`:
```rust
cfg_if! {
if #[cfg(feature = "egg")] {
run::run_benchmark::<cuprate_benchmark_egg::Benchmark>(&mut timings);
}
}
```


@@ -0,0 +1,12 @@
# cuprate-benchmark
Cuprate has 2 custom crates for macro benchmarking:
- `cuprate-benchmark`; the actual binary crate that is run
- `cuprate-benchmark-lib`; the library that other crates hook into
The purpose of `cuprate-benchmark` is very simple:
1. Set up the benchmark
1. Start timer
1. Run benchmark
1. Output data
`cuprate-benchmark` runs the benchmarks found in [`benches/benchmark/cuprate-*`](https://github.com/Cuprate/cuprate/tree/main/benches/benchmark).
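Conceptually, the core of the binary is just those four steps; the rough, self-contained sketch below mirrors them (it is not the actual `cuprate-benchmark` source, and the `Benchmark` trait here is only a stand-in for the one in `cuprate-benchmark-lib`):
```rust
use std::time::Instant;

// Stand-in for `cuprate_benchmark_lib::Benchmark`; the real trait's items
// may differ, this only exists to illustrate the 4 steps listed above.
trait Benchmark {
    const NAME: &'static str;
    type Input;
    fn setup() -> Self::Input;
    fn main(input: Self::Input);
}

/// Run one benchmark and record how long its timed portion took.
fn run_benchmark<B: Benchmark>(timings: &mut Vec<(&'static str, f64)>) {
    // 1. Set up the benchmark (untimed).
    let input = B::setup();

    // 2. Start timer.
    let now = Instant::now();

    // 3. Run benchmark.
    B::main(input);

    // 4. Output data.
    let elapsed = now.elapsed().as_secs_f64();
    println!("{}: {elapsed}s", B::NAME);
    timings.push((B::NAME, elapsed));
}
```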


@@ -0,0 +1,16 @@
# Running
`cuprate-benchmark` benchmarks are run with this command:
```bash
cargo run --release --package cuprate-benchmark --features $YOUR_BENCHMARK_CRATE_FEATURE
```
For example, to run the example benchmark:
```bash
cargo run --release --package cuprate-benchmark --features example
```
Use the `all` feature to run all benchmarks:
```bash
# Run all benchmarks
cargo run --release --package cuprate-benchmark --features all
```


@@ -1 +0,0 @@
# ⚪️ Harness


@@ -1 +1,22 @@
# ⚪️ Benchmarking
# Benchmarking
Cuprate has 2 types of benchmarks:
- Criterion benchmarks
- `cuprate-benchmark` benchmarks
[Criterion](https://bheisler.github.io/criterion.rs/book/user_guide/advanced_configuration.html) is used for micro benchmarks; these time single functions or groups of functions and are generally small in scope.
`cuprate-benchmark` and `cuprate-benchmark-lib` are custom in-house crates Cuprate uses for macro benchmarks; these test sub-systems, sections of a sub-system, or otherwise larger or more complicated code that isn't suited for micro benchmarks.
## File layout and purpose
All benchmarking related files are in the [`benches/`](https://github.com/Cuprate/cuprate/tree/main/benches) folder.
This directory is organized as follows:
| Directory | Purpose |
|-------------------------------|---------|
| `benches/criterion/` | Criterion (micro) benchmarks
| `benches/criterion/cuprate-*` | Criterion benchmarks for the crate with the same name
| `benches/benchmark/` | Cuprate's custom benchmarking files
| `benches/benchmark/bin` | The `cuprate-benchmark` crate; the actual binary run that links all benchmarks
| `benches/benchmark/lib` | The `cuprate-benchmark-lib` crate; the benchmarking framework all benchmarks plug into
| `benches/benchmark/cuprate-*` | `cuprate-benchmark` benchmarks for the crate with the same name