serai/crypto/schnorr/src/aggregate.rs
akildemir e319762c69
use half-aggregation for tm messages (#346)
* dalek 4.0

* cargo update

Moves to a version of Substrate which uses curve25519-dalek 4.0 (not an RC).
Doesn't yet update the repo to curve25519-dalek 4.0 (as a branch does) due
to the official schnorrkel using a conflicting curve25519-dalek. This would
prevent installation of frost-schnorrkel without a patch.

* use half-aggregation for tm messages

* fmt

* fix pr comments

* cargo update

Achieves three notable updates.

1) Resolves RUSTSEC-2022-0093 by updating libp2p-identity.
2) Removes 3 old rand crates via updating ed25519-dalek (a dependency of
libp2p-identity).
3) Caps serde_derive at 1.0.171 via updating to time 0.3.26, which pins serde
to at most 1.0.171.

The last one is the most important. The former two are niceties.

serde_derive, in versions after 1.0.171, ships a non-reproducible binary blob
in what's a complete compromise of supply chain security. This is done in order
to reduce compile times, yet also for the maintainer of serde (dtolnay) to
leverage serde's position as the 8th most downloaded crate to attempt to force
changes to the Rust build pipeline.

While dtolnay's contributions to Rust are respectable, being behind syn, quote,
and proc-macro2 (the top three crates by downloads), along with thiserror,
anyhow, async-trait, and more (I believe they're also part of the Rust
project), they have unfortunately decided to refuse to listen to the community
on this issue (or even engage with counter-commentary). Given the political
agenda they seem to be trying to accomplish by force, I'd go as far as to call
their actions terroristic (as they're using the threat of the binary blob as
justification for cargo to ship 'proper' support for binary blobs).

This is arguably representative of dtolnay's past work on watt. watt was a wasm
interpreter to execute a pre-compiled proc macro. This would save the compile
time of proc macros, yet sandbox it so a full binary did not have to be run.

Unfortunately, watt (while decreasing compile times) fails to be a valid
solution to supply chain security (without massive ecosystem changes). It never
implemented reproducible builds for its wasm blobs, and a malicious wasm blob
could still fundamentally compromise a project. The only solution for an end
user to achieve a secure pipeline would be to locally build the project,
verifying the blob aligns, yet doing so would negate all advantages of the
blob.

dtolnay also seems to be giving up their role as a FOSS maintainer given that
serde no longer works in several environments. While FOSS maintainers are not
required to never implement breaking changes, the version number is still 1.0.
While FOSS maintainers are not required to follow semver, releasing a very
notable breaking change *without a new major version* in an ecosystem which
*does follow semver*, then refusing to acknowledge the resulting bugs as bugs,
does meet my personal definition of "not actively maintaining their existing
work". Maintenance would be fixing bugs, not introducing and ignoring them.

For now, serde > 1.0.171 has been banned. In the future, we may host a fork
without the blobs (yet with the patches). It may be necessary to ban all of
dtolnay's maintained crates, if they continue to force their agenda as such,
yet I hope this may be resolved within the next week or so.

Sources:

https://github.com/serde-rs/serde/issues/2538 - Binary blob discussion

This includes several reports of various workflows being broken.

https://github.com/serde-rs/serde/issues/2538#issuecomment-1682519944

dtolnay commenting that security should be resolved via Rust toolchain edits,
not via their own work being secure. This is why I say they're trying to
leverage serde in a political game.

https://github.com/serde-rs/serde/issues/2526 - Usage via git broken

dtolnay explicitly asks the submitting user if they'd be willing to advocate
for changes to Rust rather than actually fix the issue they created. This is
further political arm wrestling.

https://github.com/serde-rs/serde/issues/2530 - Usage via Bazel broken

https://github.com/serde-rs/serde/issues/2575 - Unverifiable binary blob

https://github.com/dtolnay/watt - dtolnay's prior work on precompilation

* add Rs() API to SchnorrAggregate

* Correct serai-processor-tests to dalek 4

* fmt + deny

* Slash malevolent validators (#294)

* add slash tx

* ignore unsigned tx replays

* verify that provided evidence is valid

* fix clippy + fmt

* move application tx handling to another module

* partially handle the tendermint txs

* fix pr comments

* support unsigned app txs

* add slash target to the votes

* enforce provided, unsigned, signed tx ordering within a block

* bug fixes

* add unit test for tendermint txs

* bug fixes

* update tests for tendermint txs

* add tx ordering test

* tidy up tx ordering test

* cargo +nightly fmt

* Misc fixes from rebasing

* Finish resolving clippy

* Remove sha3 from tendermint-machine

* Resolve a DoS in SlashEvidence's read

Also moves Evidence from Vec<Message> to (Message, Option<Message>). That
should meet all requirements while being a bit safer.
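The unbounded-read pattern behind this DoS can be sketched as follows. This is a hypothetical minimal example (toy `u8` payloads, not the actual `SlashEvidence` types): deserializing a length-prefixed `Vec` by trusting the on-wire length lets a peer claim billions of elements and force a huge allocation, whereas a `(item, Option<item>)` shape caps the read at two elements.

```rust
use std::io::{self, Read};

fn read_u32<R: Read>(r: &mut R) -> io::Result<u32> {
    let mut buf = [0u8; 4];
    r.read_exact(&mut buf)?;
    Ok(u32::from_le_bytes(buf))
}

// Vulnerable shape: allocates based on an attacker-controlled length
fn read_vec_unbounded<R: Read>(r: &mut R) -> io::Result<Vec<u8>> {
    let len = read_u32(r)? as usize;
    let mut v = vec![0u8; len]; // 0xFFFF_FFFF on the wire => ~4 GiB allocated here
    r.read_exact(&mut v)?;
    Ok(v)
}

// Safer shape: at most two fixed-size items, no attacker-chosen length
fn read_pair<R: Read>(r: &mut R) -> io::Result<(u8, Option<u8>)> {
    let mut flag = [0u8; 1];
    r.read_exact(&mut flag)?;
    let mut first = [0u8; 1];
    r.read_exact(&mut first)?;
    let second = if flag[0] == 1 {
        let mut b = [0u8; 1];
        r.read_exact(&mut b)?;
        Some(b[0])
    } else {
        None
    };
    Ok((first[0], second))
}
```

The safer shape still expresses "one or two messages" while bounding the allocation before any byte of untrusted input is trusted.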

* Make lazy_static a dev-depend for tributary

* Various small tweaks

One use of sort was inefficient, sorting unsigned || signed when unsigned was
already properly sorted. Since the unsigned TXs were given a nonce of 0, an
unstable sort could also swap an unsigned TX with a signed TX of nonce 0
(leading to a faulty block).

The extra protection added here sorts only the signed TXs, then concatenates.
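A minimal sketch of that ordering (hypothetical `SignedTx` type and string payloads; the real Tributary transaction types differ):

```rust
// Hypothetical sketch of the block-ordering fix: unsigned TXs are already in
// their required order (and all carry nonce 0), so sorting the concatenation
// unsigned || signed with an unstable sort could swap an unsigned TX with a
// signed TX of nonce 0. Instead, only the signed TXs are sorted, then the two
// lists are concatenated.
#[derive(Clone, Debug, PartialEq)]
struct SignedTx {
    nonce: u64,
    payload: &'static str,
}

fn order_txs(unsigned: Vec<&'static str>, mut signed: Vec<SignedTx>) -> Vec<&'static str> {
    // Sort only the signed TXs by nonce; unsigned TXs keep their prior order
    signed.sort_by_key(|tx| tx.nonce);
    let mut block = unsigned;
    block.extend(signed.into_iter().map(|tx| tx.payload));
    block
}
```

Because the unsigned TXs are never passed through the sort, no sort instability can reorder them relative to signed TXs sharing nonce 0.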

* Fix Tributary tests I broke, start review on tendermint/tx.rs

* Finish reviewing everything outside tests and empty_signature

* Remove empty_signature

empty_signature led to corrupted local state histories. Unfortunately, the API
is only sane with a signature.

We now use the actual signature, which risks creating a signature over a
malicious message if we ever have a broken invariant producing malicious
messages. Prior, we only signed the message after the local machine confirmed
it was okay per the local view of consensus.

This is tolerated/preferred over a corrupt state history since production of
such messages already breaks an invariant. TODOs are added to make handling of
this theoretical invariant violation more robust.

* Remove async_sequential for tokio::test

There was no competition for resources forcing them to be run sequentially.

* Modify block order test to be statistically significant without multiple runs

* Clean tests

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>

* Add DSTs to Tributary TX sig_hash functions

Prevents conflicts with other systems/other parts of the Tributary.

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-08-21 01:22:00 -04:00


use std_shims::{
  vec::Vec,
  io::{self, Read, Write},
};

use zeroize::Zeroize;

use transcript::{Transcript, SecureDigest, DigestTranscript};

use ciphersuite::{
  group::{
    ff::{Field, PrimeField},
    Group, GroupEncoding,
  },
  Ciphersuite,
};
use multiexp::multiexp_vartime;

use crate::SchnorrSignature;
// Returns an unbiased scalar weight to use on a signature in order to prevent malleability
fn weight<D: Send + Clone + SecureDigest, F: PrimeField>(digest: &mut DigestTranscript<D>) -> F {
  let mut bytes = digest.challenge(b"aggregation_weight");
  debug_assert_eq!(bytes.len() % 8, 0);
  // This should be guaranteed thanks to SecureDigest
  debug_assert!(bytes.len() >= 32);

  let mut res = F::ZERO;
  let mut i = 0;

  // Derive a scalar from enough bits of entropy that bias is < 2^128
  // This can't be const due to its usage of a generic
  // Also due to the usize::try_from, yet that could be replaced with an `as`
  // The + 7 forces it to round up
  #[allow(non_snake_case)]
  let BYTES: usize = usize::try_from(((F::NUM_BITS + 128) + 7) / 8).unwrap();

  let mut remaining = BYTES;

  // We load bits in as u64s
  const WORD_LEN_IN_BITS: usize = 64;
  const WORD_LEN_IN_BYTES: usize = WORD_LEN_IN_BITS / 8;

  let mut first = true;
  while i < remaining {
    // Shift over the already loaded bits
    if !first {
      for _ in 0 .. WORD_LEN_IN_BITS {
        res += res;
      }
    }
    first = false;

    // Add the next 64 bits
    res += F::from(u64::from_be_bytes(bytes[i .. (i + WORD_LEN_IN_BYTES)].try_into().unwrap()));
    i += WORD_LEN_IN_BYTES;

    // If we've exhausted this challenge, get another
    if i == bytes.len() {
      bytes = digest.challenge(b"aggregation_weight_continued");
      remaining -= i;
      i = 0;
    }
  }

  res
}
/// Aggregate Schnorr signature as defined in <https://eprint.iacr.org/2021/350>.
#[allow(non_snake_case)]
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct SchnorrAggregate<C: Ciphersuite> {
  Rs: Vec<C::G>,
  s: C::F,
}

impl<C: Ciphersuite> SchnorrAggregate<C> {
  /// Read a SchnorrAggregate from something implementing Read.
  pub fn read<R: Read>(reader: &mut R) -> io::Result<Self> {
    let mut len = [0; 4];
    reader.read_exact(&mut len)?;

    #[allow(non_snake_case)]
    let mut Rs = vec![];
    for _ in 0 .. u32::from_le_bytes(len) {
      Rs.push(C::read_G(reader)?);
    }

    Ok(SchnorrAggregate { Rs, s: C::read_F(reader)? })
  }

  /// Write a SchnorrAggregate to something implementing Write.
  ///
  /// This will panic if more than 4 billion signatures were aggregated.
  pub fn write<W: Write>(&self, writer: &mut W) -> io::Result<()> {
    writer.write_all(
      &u32::try_from(self.Rs.len())
        .expect("more than 4 billion signatures in aggregate")
        .to_le_bytes(),
    )?;
    #[allow(non_snake_case)]
    for R in &self.Rs {
      writer.write_all(R.to_bytes().as_ref())?;
    }
    writer.write_all(self.s.to_repr().as_ref())
  }

  /// Serialize a SchnorrAggregate, returning a `Vec<u8>`.
  pub fn serialize(&self) -> Vec<u8> {
    let mut buf = vec![];
    self.write(&mut buf).unwrap();
    buf
  }

  /// The nonces aggregated within this signature.
  #[allow(non_snake_case)]
  pub fn Rs(&self) -> &[C::G] {
    self.Rs.as_slice()
  }
  /// Perform signature verification.
  ///
  /// Challenges must be properly crafted, which means being binding to the public key, nonce, and
  /// any message. Failure to do so will let a malicious adversary forge signatures for different
  /// keys/messages.
  ///
  /// The DST used here must prevent a collision with whatever hash function produced the
  /// challenges.
  #[must_use]
  pub fn verify(&self, dst: &'static [u8], keys_and_challenges: &[(C::G, C::F)]) -> bool {
    if self.Rs.len() != keys_and_challenges.len() {
      return false;
    }

    let mut digest = DigestTranscript::<C::H>::new(dst);
    digest.domain_separate(b"signatures");
    for (_, challenge) in keys_and_challenges {
      digest.append_message(b"challenge", challenge.to_repr());
    }

    let mut pairs = Vec::with_capacity((2 * keys_and_challenges.len()) + 1);
    for (i, (key, challenge)) in keys_and_challenges.iter().enumerate() {
      let z = weight(&mut digest);
      pairs.push((z, self.Rs[i]));
      pairs.push((z * challenge, *key));
    }
    pairs.push((-self.s, C::generator()));
    multiexp_vartime(&pairs).is_identity().into()
  }
}
/// A signature aggregator capable of consuming signatures in order to produce an aggregate.
#[allow(non_snake_case)]
#[derive(Clone, Debug, Zeroize)]
pub struct SchnorrAggregator<C: Ciphersuite> {
  digest: DigestTranscript<C::H>,
  sigs: Vec<SchnorrSignature<C>>,
}

impl<C: Ciphersuite> SchnorrAggregator<C> {
  /// Create a new aggregator.
  ///
  /// The DST used here must prevent a collision with whatever hash function produced the
  /// challenges.
  pub fn new(dst: &'static [u8]) -> Self {
    let mut res = Self { digest: DigestTranscript::<C::H>::new(dst), sigs: vec![] };
    res.digest.domain_separate(b"signatures");
    res
  }

  /// Aggregate a signature.
  pub fn aggregate(&mut self, challenge: C::F, sig: SchnorrSignature<C>) {
    self.digest.append_message(b"challenge", challenge.to_repr());
    self.sigs.push(sig);
  }

  /// Complete aggregation, returning None if none were aggregated.
  pub fn complete(mut self) -> Option<SchnorrAggregate<C>> {
    if self.sigs.is_empty() {
      return None;
    }

    let mut aggregate =
      SchnorrAggregate { Rs: Vec::with_capacity(self.sigs.len()), s: C::F::ZERO };
    for i in 0 .. self.sigs.len() {
      aggregate.Rs.push(self.sigs[i].R);
      aggregate.s += self.sigs[i].s * weight::<_, C::F>(&mut self.digest);
    }
    Some(aggregate)
  }
}
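As a toy illustration of the identity `verify` checks: each signature satisfies s_i = r_i + c_i * x_i, so for any weights z_i, the sum of z_i * s_i equals the sum of z_i * r_i plus z_i * c_i * x_i — the scalar analogue of the multiexp above. This is plain u64 arithmetic over a small prime with hypothetical values, NOT cryptography (in the real code these are curve points and field elements):

```rust
// Toy sketch (NOT cryptography): the algebra behind aggregate verification.
// For each signature i, s_i = r_i + c_i * x_i, so for any weights z_i:
//   sum z_i * s_i == sum (z_i * r_i + z_i * c_i * x_i)  (mod P)
const P: u64 = 2_147_483_647; // 2^31 - 1, a small toy modulus

fn mul(a: u64, b: u64) -> u64 { (a % P) * (b % P) % P }
fn add(a: u64, b: u64) -> u64 { (a % P + b % P) % P }

// Returns (lhs, rhs) of the aggregate verification identity, given
// (x_i, r_i, c_i) triples (key, nonce, challenge) and weights z_i.
fn aggregate_identity(sigs: &[(u64, u64, u64)], weights: &[u64]) -> (u64, u64) {
    let mut lhs = 0; // sum z_i * s_i
    let mut rhs = 0; // sum z_i * r_i + z_i * c_i * x_i
    for (&(x, r, c), &z) in sigs.iter().zip(weights) {
        let s = add(r, mul(c, x)); // the signature scalar
        lhs = add(lhs, mul(z, s));
        rhs = add(rhs, add(mul(z, r), mul(z, mul(c, x))));
    }
    (lhs, rhs)
}

fn main() {
    let sigs = [(5, 7, 11), (13, 17, 19), (23, 29, 31)]; // (x_i, r_i, c_i)
    let weights = [101, 103, 107]; // z_i: verifier-derived weights
    let (lhs, rhs) = aggregate_identity(&sigs, &weights);
    assert_eq!(lhs, rhs);
}
```

In the actual code the weights come from `weight()` over a transcript of all challenges, so a signer cannot grind signatures against known weights.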