mirror of
https://github.com/serai-dex/serai.git
synced 2025-01-30 14:35:57 +00:00
e4e4245ee3
* Upstream GBP, divisor, circuit abstraction, and EC gadgets from FCMP++
* Initial eVRF implementation. Not quite done yet. It needs to communicate the resulting points and proofs to extract them from the Pedersen Commitments in order to return those, and then be tested.
* Add the openings of the PCs to the eVRF as necessary
* Add implementation of secq256k1
* Make DKG Encryption a bit more flexible. No longer requires the use of an EncryptionKeyMessage, and allows pre-defined keys for encryption.
* Make NUM_BITS an argument for the field macro
* Have the eVRF take a Zeroizing private key
* Initial eVRF-based DKG
* Add embedwards25519 curve
* Inline the eVRF into the DKG library. Due to how we're handling share encryption, we'd either need two circuits or to dedicate this circuit to the DKG. The latter makes sense at this time.
* Add documentation to the eVRF-based DKG
* Add paragraph claiming robustness
* Update to the new eVRF proof
* Finish routing the eVRF functionality. Still needs errors and serialization, along with a few other TODOs.
* Add initial eVRF DKG test
* Improve eVRF DKG. Updates how we calculate verification shares, improves performance when extracting multiple sets of keys, and adds more to the test for it.
* Start using a proper error for the eVRF DKG
* Resolve various TODOs. Supports recovering multiple key shares from the eVRF DKG. Inlines two loops to save 2**16 iterations. Adds support for creating a constant-time representation of scalars < NUM_BITS.
* Ban zero ECDH keys, document non-zero requirements
* Implement eVRF traits, all the way up to the DKG, for secp256k1/ed25519
* Add Ristretto eVRF trait impls
* Support participating multiple times in the eVRF DKG
* Only participate once per key, not once per key share
* Rewrite processor key-gen around the eVRF DKG. Still a WIP.
* Finish routing the new key gen in the processor. Doesn't touch the tests, coordinator, nor Substrate yet. `cargo +nightly fmt && cargo +nightly-2024-07-01 clippy --all-features -p serai-processor` does pass.
* Deduplicate and better document in processor key_gen
* Update serai-processor tests to the new key gen
* Correct amount of yx coefficients, get processor key gen test to pass
* Add embedded elliptic curve keys to Substrate
* Update processor key gen tests to the eVRF DKG
* Have set_keys take signature_participants, not removed_participants. Now no one is removed from the DKG. Only `t` people publish the key, however. Uses a BitVec for an efficient encoding of the participants.
* Update the coordinator binary for the new DKG. This does not yet update any tests.
* Add sensible Debug to key_gen::[Processor, Coordinator]Message
* Have the DKG explicitly declare how to interpolate its shares. Removes the hack for MuSig where we multiply keys by the inverse of their Lagrange interpolation factor.
* Replace Interpolation::None with Interpolation::Constant. Allows the MuSig DKG to keep the secret share as the original private key, enabling deriving FROST nonces consistently regardless of the MuSig context.
* Get coordinator tests to pass
* Update spec to the new DKG
* Get clippy to pass across the repo
* cargo machete
* Add an extra sleep to ensure expected ordering of `Participation`s
* Update orchestration
* Remove bad panic in coordinator. It expected ConfirmationShare to be n-of-n, not t-of-n.
* Improve documentation on functions
* Update TX size limit. We no longer have to support the ridiculous case of having 49 DKG participations within a 101-of-150 DKG. It does remain quite high due to needing to _sign_ so many times. It may be optimal for parties with multiple key shares to independently send their preprocesses/shares (despite the overhead that'll cause with signatures and the transaction structure).
* Correct error in the Processor spec document
* Update a few comments in the validator-sets pallet
* Send/Recv Participation one at a time. Sending all, then attempting to receive all in an expected order, wasn't working even with notable delays between sending messages. This points to the mempool not working as expected...
* Correct ThresholdKeys serialization in modular-frost test
* Update existing TX size limit test for the new DKG parameters
* Increase time allowed for the DKG on the GH CI
* Correct construction of signature_participants in serai-client tests. Fault identified by akil.
* Further contextualize DkgConfirmer by ValidatorSet. Caught by a safety check that we wouldn't reuse preprocesses across messages. That raises the question of whether we were previously reusing preprocesses (reusing keys)? Except that'd have caused a variety of signing failures (suggesting we had some staggered timing avoiding it in practice, but yes, this was possible in theory).
* Add necessary calls to set_embedded_elliptic_curve_key in coordinator set rotation tests
* Correct shimmed setting of a secq256k1 key
* cargo fmt
* Don't use `[0; 32]` for the embedded keys in the coordinator rotation test. The key_gen function expects the random values already decided.
* Big-endian secq256k1 scalars. Also restores the prior, safer, Encryption::register function.
635 lines
23 KiB
Rust
use core::{marker::PhantomData, ops::Deref, fmt};
use std::{
  io::{self, Read, Write},
  collections::HashMap,
};

use rand_core::{RngCore, CryptoRng};

use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};

use transcript::{Transcript, RecommendedTranscript};

use ciphersuite::{
  group::{
    ff::{Field, PrimeField},
    Group, GroupEncoding,
  },
  Ciphersuite,
};
use multiexp::{multiexp_vartime, BatchVerifier};

use schnorr::SchnorrSignature;

use crate::{
  Participant, DkgError, ThresholdParams, Interpolation, ThresholdCore, validate_map,
  encryption::{
    ReadWrite, EncryptionKeyMessage, EncryptedMessage, Encryption, Decryption, EncryptionKeyProof,
    DecryptionError,
  },
};

type FrostError<C> = DkgError<EncryptionKeyProof<C>>;

#[allow(non_snake_case)]
fn challenge<C: Ciphersuite>(context: [u8; 32], l: Participant, R: &[u8], Am: &[u8]) -> C::F {
  let mut transcript = RecommendedTranscript::new(b"DKG FROST v0.2");
  transcript.domain_separate(b"schnorr_proof_of_knowledge");
  transcript.append_message(b"context", context);
  transcript.append_message(b"participant", l.to_bytes());
  transcript.append_message(b"nonce", R);
  transcript.append_message(b"commitments", Am);
  C::hash_to_F(b"DKG-FROST-proof_of_knowledge-0", &transcript.challenge(b"schnorr"))
}
/// The commitments message, intended to be broadcast to all other parties.
///
/// Every participant should only provide one set of commitments to all parties. If any
/// participant sends multiple sets of commitments, they are faulty and should be presumed
/// malicious. As this library does not handle networking, it is unable to detect if any
/// participant is so faulty. That responsibility lies with the caller.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct Commitments<C: Ciphersuite> {
  commitments: Vec<C::G>,
  cached_msg: Vec<u8>,
  sig: SchnorrSignature<C>,
}

impl<C: Ciphersuite> ReadWrite for Commitments<C> {
  fn read<R: Read>(reader: &mut R, params: ThresholdParams) -> io::Result<Self> {
    let mut commitments = Vec::with_capacity(params.t().into());
    let mut cached_msg = vec![];

    #[allow(non_snake_case)]
    let mut read_G = || -> io::Result<C::G> {
      let mut buf = <C::G as GroupEncoding>::Repr::default();
      reader.read_exact(buf.as_mut())?;
      let point = C::read_G(&mut buf.as_ref())?;
      cached_msg.extend(buf.as_ref());
      Ok(point)
    };

    for _ in 0 .. params.t() {
      commitments.push(read_G()?);
    }

    Ok(Commitments { commitments, cached_msg, sig: SchnorrSignature::read(reader)? })
  }

  fn write<W: Write>(&self, writer: &mut W) -> io::Result<()> {
    writer.write_all(&self.cached_msg)?;
    self.sig.write(writer)
  }
}
/// State machine to begin the key generation protocol.
#[derive(Debug, Zeroize)]
pub struct KeyGenMachine<C: Ciphersuite> {
  params: ThresholdParams,
  context: [u8; 32],
  _curve: PhantomData<C>,
}

impl<C: Ciphersuite> KeyGenMachine<C> {
  /// Create a new machine to generate a key.
  ///
  /// The context should be unique among multisigs.
  pub fn new(params: ThresholdParams, context: [u8; 32]) -> KeyGenMachine<C> {
    KeyGenMachine { params, context, _curve: PhantomData }
  }

  /// Start generating a key according to the FROST DKG spec.
  ///
  /// Returns a commitments message to be sent to all parties over an authenticated channel. If any
  /// party submits multiple sets of commitments, they MUST be treated as malicious.
  pub fn generate_coefficients<R: RngCore + CryptoRng>(
    self,
    rng: &mut R,
  ) -> (SecretShareMachine<C>, EncryptionKeyMessage<C, Commitments<C>>) {
    let t = usize::from(self.params.t);
    let mut coefficients = Vec::with_capacity(t);
    let mut commitments = Vec::with_capacity(t);
    let mut cached_msg = vec![];

    for i in 0 .. t {
      // Step 1: Generate t random values to form a polynomial with
      coefficients.push(Zeroizing::new(C::random_nonzero_F(&mut *rng)));
      // Step 3: Generate public commitments
      commitments.push(C::generator() * coefficients[i].deref());
      cached_msg.extend(commitments[i].to_bytes().as_ref());
    }

    // Step 2: Provide a proof of knowledge
    let r = Zeroizing::new(C::random_nonzero_F(rng));
    let nonce = C::generator() * r.deref();
    let sig = SchnorrSignature::<C>::sign(
      &coefficients[0],
      // This could be deterministic as the PoK is a singleton never opened up to cooperative
      // discussion
      // There's no reason to spend the time and effort to make this deterministic besides a
      // general obsession with canonicity and determinism though
      r,
      challenge::<C>(self.context, self.params.i(), nonce.to_bytes().as_ref(), &cached_msg),
    );

    // Additionally create an encryption mechanism to protect the secret shares
    let encryption = Encryption::new(self.context, self.params.i, rng);

    // Step 4: Broadcast
    let msg =
      encryption.registration(Commitments { commitments: commitments.clone(), cached_msg, sig });
    (
      SecretShareMachine {
        params: self.params,
        context: self.context,
        coefficients,
        our_commitments: commitments,
        encryption,
      },
      msg,
    )
  }
}
fn polynomial<F: PrimeField + Zeroize>(
  coefficients: &[Zeroizing<F>],
  l: Participant,
) -> Zeroizing<F> {
  let l = F::from(u64::from(u16::from(l)));
  // This should never be reached since Participant is explicitly non-zero
  assert!(l != F::ZERO, "zero participant passed to polynomial");
  let mut share = Zeroizing::new(F::ZERO);
  for (idx, coefficient) in coefficients.iter().rev().enumerate() {
    *share += coefficient.deref();
    if idx != (coefficients.len() - 1) {
      *share *= l;
    }
  }
  share
}
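For reference, `polynomial` evaluates `f(l) = coefficients[0] + coefficients[1]*l + ...` via Horner's method, iterating the coefficients highest-degree first. A minimal sketch of the same evaluation order, using `u128` arithmetic modulo a small prime in place of `C::F` (all names here are illustrative, not part of this crate):

```rust
// Toy prime standing in for the ciphersuite's scalar field modulus
const P: u128 = (1 << 61) - 1;

// Horner evaluation, mirroring the reversed-iteration loop in `polynomial`
fn horner(coefficients: &[u128], l: u128) -> u128 {
  let mut share = 0u128;
  for (idx, coefficient) in coefficients.iter().rev().enumerate() {
    share = (share + coefficient) % P;
    // Every step but the last multiplies the accumulator by the point l
    if idx != (coefficients.len() - 1) {
      share = (share * l) % P;
    }
  }
  share
}

// Naive evaluation (sum of coefficients[i] * l^i) for comparison
fn naive(coefficients: &[u128], l: u128) -> u128 {
  coefficients.iter().enumerate().fold(0, |acc, (i, c)| {
    let mut pow = 1u128;
    for _ in 0 .. i {
      pow = (pow * l) % P;
    }
    (acc + c * pow) % P
  })
}

fn main() {
  // f(x) = 5 + 3x + 2x^2, so f(4) = 5 + 12 + 32 = 49
  let coefficients = [5u128, 3, 2];
  assert_eq!(horner(&coefficients, 4), 49);
  assert_eq!(horner(&coefficients, 4), naive(&coefficients, 4));
}
```

Note the coefficients are stored lowest-degree first (index 0 is the constant term, i.e. the secret), matching the use of `coefficients[0]` as the key in the proof of knowledge above.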

/// The secret share message, to be sent to the party it's intended for over an authenticated
/// channel.
///
/// If any participant sends multiple secret shares to another participant, they are faulty.
// This should presumably be written as SecretShare(Zeroizing<F::Repr>).
// It's unfortunately not possible as F::Repr doesn't have Zeroize as a bound.
// The encryption system also explicitly uses Zeroizing<M> so it can ensure anything being
// encrypted is within Zeroizing. Accordingly, internally having Zeroizing would be redundant.
#[derive(Clone, PartialEq, Eq)]
pub struct SecretShare<F: PrimeField>(F::Repr);
impl<F: PrimeField> AsRef<[u8]> for SecretShare<F> {
  fn as_ref(&self) -> &[u8] {
    self.0.as_ref()
  }
}
impl<F: PrimeField> AsMut<[u8]> for SecretShare<F> {
  fn as_mut(&mut self) -> &mut [u8] {
    self.0.as_mut()
  }
}
impl<F: PrimeField> fmt::Debug for SecretShare<F> {
  fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
    fmt.debug_struct("SecretShare").finish_non_exhaustive()
  }
}
impl<F: PrimeField> Zeroize for SecretShare<F> {
  fn zeroize(&mut self) {
    self.0.as_mut().zeroize()
  }
}
// Still manually implement ZeroizeOnDrop to ensure these don't stick around.
// We could replace Zeroizing<M> with a bound M: ZeroizeOnDrop.
// Doing so would potentially fail to highlight the expected behavior with these and remove a layer
// of depth.
impl<F: PrimeField> Drop for SecretShare<F> {
  fn drop(&mut self) {
    self.zeroize();
  }
}
impl<F: PrimeField> ZeroizeOnDrop for SecretShare<F> {}
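The manual `Drop` + `Zeroize` pattern above guarantees share bytes are wiped when the value goes out of scope, even on early returns. A stdlib-only sketch of the same pattern, without the `zeroize` crate (illustrative only; `zeroize` additionally uses compiler fences to guarantee the wipe survives optimization):

```rust
// A toy secret wrapper demonstrating zeroize-on-drop (not this crate's type)
struct Secret([u8; 32]);

impl Secret {
  fn zeroize(&mut self) {
    for b in self.0.iter_mut() {
      // Volatile writes discourage the compiler from eliding a "dead" store
      unsafe { core::ptr::write_volatile(b, 0) };
    }
  }
}

// Wiping in Drop means callers can't forget to clear the secret
impl Drop for Secret {
  fn drop(&mut self) {
    self.zeroize();
  }
}

fn main() {
  let mut secret = Secret([0xff; 32]);
  secret.zeroize();
  assert!(secret.0.iter().all(|&b| b == 0));
}
```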

impl<F: PrimeField> ReadWrite for SecretShare<F> {
  fn read<R: Read>(reader: &mut R, _: ThresholdParams) -> io::Result<Self> {
    let mut repr = F::Repr::default();
    reader.read_exact(repr.as_mut())?;
    Ok(SecretShare(repr))
  }

  fn write<W: Write>(&self, writer: &mut W) -> io::Result<()> {
    writer.write_all(self.0.as_ref())
  }
}
/// Advancement of the key generation state machine.
#[derive(Zeroize)]
pub struct SecretShareMachine<C: Ciphersuite> {
  params: ThresholdParams,
  context: [u8; 32],
  coefficients: Vec<Zeroizing<C::F>>,
  our_commitments: Vec<C::G>,
  encryption: Encryption<C>,
}

impl<C: Ciphersuite> fmt::Debug for SecretShareMachine<C> {
  fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
    fmt
      .debug_struct("SecretShareMachine")
      .field("params", &self.params)
      .field("context", &self.context)
      .field("our_commitments", &self.our_commitments)
      .field("encryption", &self.encryption)
      .finish_non_exhaustive()
  }
}
impl<C: Ciphersuite> SecretShareMachine<C> {
  /// Verify the data from the previous round (canonicity, PoKs, message authenticity).
  #[allow(clippy::type_complexity)]
  fn verify_r1<R: RngCore + CryptoRng>(
    &mut self,
    rng: &mut R,
    mut commitment_msgs: HashMap<Participant, EncryptionKeyMessage<C, Commitments<C>>>,
  ) -> Result<HashMap<Participant, Vec<C::G>>, FrostError<C>> {
    validate_map(
      &commitment_msgs,
      &(1 ..= self.params.n()).map(Participant).collect::<Vec<_>>(),
      self.params.i(),
    )?;

    let mut batch = BatchVerifier::<Participant, C::G>::new(commitment_msgs.len());
    let mut commitments = HashMap::new();
    for l in (1 ..= self.params.n()).map(Participant) {
      let Some(msg) = commitment_msgs.remove(&l) else { continue };
      let mut msg = self.encryption.register(l, msg);

      if msg.commitments.len() != self.params.t().into() {
        Err(FrostError::InvalidCommitments(l))?;
      }

      // Step 5: Validate each proof of knowledge
      // This is solely the prep step for the latter batch verification
      msg.sig.batch_verify(
        rng,
        &mut batch,
        l,
        msg.commitments[0],
        challenge::<C>(self.context, l, msg.sig.R.to_bytes().as_ref(), &msg.cached_msg),
      );

      commitments.insert(l, msg.commitments.drain(..).collect::<Vec<_>>());
    }

    batch.verify_vartime_with_vartime_blame().map_err(FrostError::InvalidCommitments)?;

    commitments.insert(self.params.i, self.our_commitments.drain(..).collect());
    Ok(commitments)
  }

  /// Continue generating a key.
  ///
  /// Takes in everyone else's commitments. Returns a HashMap of encrypted secret shares to be sent
  /// over authenticated channels to their relevant counterparties.
  ///
  /// If any participant sends multiple secret shares to another participant, they are faulty.
  #[allow(clippy::type_complexity)]
  pub fn generate_secret_shares<R: RngCore + CryptoRng>(
    mut self,
    rng: &mut R,
    commitments: HashMap<Participant, EncryptionKeyMessage<C, Commitments<C>>>,
  ) -> Result<
    (KeyMachine<C>, HashMap<Participant, EncryptedMessage<C, SecretShare<C::F>>>),
    FrostError<C>,
  > {
    let commitments = self.verify_r1(&mut *rng, commitments)?;

    // Step 1: Generate secret shares for all other parties
    let mut res = HashMap::new();
    for l in (1 ..= self.params.n()).map(Participant) {
      // Don't insert our own share into the byte buffer which is meant to be sent around
      // An app developer could accidentally send it. Best to keep this black-boxed
      if l == self.params.i() {
        continue;
      }

      let mut share = polynomial(&self.coefficients, l);
      let share_bytes = Zeroizing::new(SecretShare::<C::F>(share.to_repr()));
      share.zeroize();
      res.insert(l, self.encryption.encrypt(rng, l, share_bytes));
    }

    // Calculate our own share
    let share = polynomial(&self.coefficients, self.params.i());
    self.coefficients.zeroize();

    Ok((
      KeyMachine { params: self.params, secret: share, commitments, encryption: self.encryption },
      res,
    ))
  }
}
/// Advancement of the secret share state machine.
///
/// This machine will 'complete' the protocol, from a local perspective. In order to be secure,
/// the parties must confirm having successfully completed the protocol (an effort out of scope
/// for this library), yet this is modeled by one more state transition (BlameMachine).
pub struct KeyMachine<C: Ciphersuite> {
  params: ThresholdParams,
  secret: Zeroizing<C::F>,
  commitments: HashMap<Participant, Vec<C::G>>,
  encryption: Encryption<C>,
}

impl<C: Ciphersuite> fmt::Debug for KeyMachine<C> {
  fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
    fmt
      .debug_struct("KeyMachine")
      .field("params", &self.params)
      .field("commitments", &self.commitments)
      .field("encryption", &self.encryption)
      .finish_non_exhaustive()
  }
}

impl<C: Ciphersuite> Zeroize for KeyMachine<C> {
  fn zeroize(&mut self) {
    self.params.zeroize();
    self.secret.zeroize();
    for commitments in self.commitments.values_mut() {
      commitments.zeroize();
    }
    self.encryption.zeroize();
  }
}
// Calculate the exponents for a given participant and apply them to a series of commitments
// Initially used with the actual commitments to verify the secret share, later used with
// stripes to generate the verification shares
fn exponential<C: Ciphersuite>(i: Participant, values: &[C::G]) -> Vec<(C::F, C::G)> {
  let i = C::F::from(u16::from(i).into());
  let mut res = Vec::with_capacity(values.len());
  (0 .. values.len()).fold(C::F::ONE, |exp, l| {
    res.push((exp, values[l]));
    exp * i
  });
  res
}
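`exponential` pairs each commitment `C_j` with the power `i^j`, so a multiexp over its output computes `sum_j i^j * C_j`; appending `-share * G` (as `share_verification_statements` does below) makes the multiexp evaluate to the identity exactly when `share = f(i)`. A toy sketch of that invariant, with scalars mod a small prime standing in for group elements (so the "commitment" to a coefficient is just the coefficient itself; all names are illustrative):

```rust
// Toy prime standing in for the scalar field modulus
const P: u128 = (1 << 61) - 1;

// Scalar analogue of `exponential`: pair value j with i^j
fn exponential(i: u128, values: &[u128]) -> Vec<(u128, u128)> {
  let mut res = Vec::with_capacity(values.len());
  let mut exp = 1u128;
  for v in values {
    res.push((exp, *v));
    exp = (exp * i) % P;
  }
  res
}

fn main() {
  // f(x) = 7 + 11x + 13x^2; participant i = 3's share is f(3) = 157
  let coefficients = [7u128, 11, 13];
  let i = 3u128;
  let share = (7 + (11 * i) + (13 * i * i)) % P;

  // sum_j i^j * C_j reproduces the share, so subtracting share * G
  // (here, just `share`) would yield zero, i.e. the group identity
  let sum: u128 =
    exponential(i, &coefficients).into_iter().fold(0, |acc, (exp, c)| (acc + (exp * c)) % P);
  assert_eq!(sum, share);
}
```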

fn share_verification_statements<C: Ciphersuite>(
  target: Participant,
  commitments: &[C::G],
  mut share: Zeroizing<C::F>,
) -> Vec<(C::F, C::G)> {
  // This can be insecurely linearized from n * t to just n using the below sums for a given
  // stripe. Doing so uses naive addition which is subject to malleability. The only way to
  // ensure that malleability isn't present is to use this n * t algorithm, which runs
  // per sender and not as an aggregate of all senders, which also enables blame
  let mut values = exponential::<C>(target, commitments);

  // Perform the share multiplication outside of the multiexp to minimize stack copying
  // While the multiexp BatchVerifier does zeroize its flattened multiexp, and itself, it still
  // converts whatever we give to an iterator and then builds a Vec internally, welcoming copies
  let neg_share_pub = C::generator() * -*share;
  share.zeroize();
  values.push((C::F::ONE, neg_share_pub));

  values
}

#[derive(Clone, Copy, Hash, Debug, Zeroize)]
enum BatchId {
  Decryption(Participant),
  Share(Participant),
}
impl<C: Ciphersuite> KeyMachine<C> {
  /// Calculate our share given the shares sent to us.
  ///
  /// Returns a BlameMachine usable to determine if faults in the protocol occurred.
  ///
  /// This will error on, and return a blame proof for, the first-observed case of faulty behavior.
  pub fn calculate_share<R: RngCore + CryptoRng>(
    mut self,
    rng: &mut R,
    mut shares: HashMap<Participant, EncryptedMessage<C, SecretShare<C::F>>>,
  ) -> Result<BlameMachine<C>, FrostError<C>> {
    validate_map(
      &shares,
      &(1 ..= self.params.n()).map(Participant).collect::<Vec<_>>(),
      self.params.i(),
    )?;

    let mut batch = BatchVerifier::new(shares.len());
    let mut blames = HashMap::new();
    for (l, share_bytes) in shares.drain() {
      let (mut share_bytes, blame) =
        self.encryption.decrypt(rng, &mut batch, BatchId::Decryption(l), l, share_bytes);
      let share =
        Zeroizing::new(Option::<C::F>::from(C::F::from_repr(share_bytes.0)).ok_or_else(|| {
          FrostError::InvalidShare { participant: l, blame: Some(blame.clone()) }
        })?);
      share_bytes.zeroize();
      *self.secret += share.deref();

      blames.insert(l, blame);
      batch.queue(
        rng,
        BatchId::Share(l),
        share_verification_statements::<C>(self.params.i(), &self.commitments[&l], share),
      );
    }
    batch.verify_with_vartime_blame().map_err(|id| {
      let (l, blame) = match id {
        BatchId::Decryption(l) => (l, None),
        BatchId::Share(l) => (l, Some(blames.remove(&l).unwrap())),
      };
      FrostError::InvalidShare { participant: l, blame }
    })?;

    // Stripe commitments per t and sum them in advance. Calculating verification shares relies on
    // these sums so preprocessing them is a massive speedup
    // If these weren't just sums, yet the tables used in multiexp, this would be further optimized
    // As of right now, each multiexp will regenerate them
    let mut stripes = Vec::with_capacity(usize::from(self.params.t()));
    for t in 0 .. usize::from(self.params.t()) {
      stripes.push(self.commitments.values().map(|commitments| commitments[t]).sum());
    }

    // Calculate each user's verification share
    let mut verification_shares = HashMap::new();
    for i in (1 ..= self.params.n()).map(Participant) {
      verification_shares.insert(
        i,
        if i == self.params.i() {
          C::generator() * self.secret.deref()
        } else {
          multiexp_vartime(&exponential::<C>(i, &stripes))
        },
      );
    }

    let KeyMachine { commitments, encryption, params, secret } = self;
    Ok(BlameMachine {
      commitments,
      encryption: encryption.into_decryption(),
      result: Some(ThresholdCore {
        params,
        interpolation: Interpolation::Lagrange,
        secret_share: secret,
        group_key: stripes[0],
        verification_shares,
      }),
    })
  }
}
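The striping in `calculate_share` exploits linearity: summing every sender's commitment to coefficient `j` into one "stripe" first, then evaluating once per participant, yields the same verification share as evaluating each sender's committed polynomial separately. A toy model with scalars mod a small prime standing in for the group (names illustrative, not this crate's API):

```rust
// Toy prime standing in for the scalar field modulus
const P: u128 = (1 << 61) - 1;

// Horner evaluation of a polynomial (coefficients lowest-degree first)
fn eval(coefficients: &[u128], x: u128) -> u128 {
  coefficients.iter().rev().fold(0, |acc, c| ((acc * x) + c) % P)
}

fn main() {
  // Two senders' committed polynomials, t = 2 coefficients each
  let polys = [[5u128, 3], [7u128, 11]];

  // Stripe j = sum over senders of coefficient j
  let stripes = [polys[0][0] + polys[1][0], polys[0][1] + polys[1][1]];

  // Evaluating the stripes once matches summing per-sender evaluations
  for i in 1u128 ..= 3 {
    let direct: u128 = polys.iter().map(|p| eval(p, i)).sum::<u128>() % P;
    assert_eq!(eval(&stripes, i), direct);
  }
}
```

This is also why `stripes[0]` serves as the group key above: the stripe of constant terms is the sum of every participant's secret coefficient, i.e. the commitment to the joint secret.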

/// A machine capable of handling blame proofs.
pub struct BlameMachine<C: Ciphersuite> {
  commitments: HashMap<Participant, Vec<C::G>>,
  encryption: Decryption<C>,
  result: Option<ThresholdCore<C>>,
}

impl<C: Ciphersuite> fmt::Debug for BlameMachine<C> {
  fn fmt(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
    fmt
      .debug_struct("BlameMachine")
      .field("commitments", &self.commitments)
      .field("encryption", &self.encryption)
      .finish_non_exhaustive()
  }
}

impl<C: Ciphersuite> Zeroize for BlameMachine<C> {
  fn zeroize(&mut self) {
    for commitments in self.commitments.values_mut() {
      commitments.zeroize();
    }
    self.result.zeroize();
  }
}
impl<C: Ciphersuite> BlameMachine<C> {
  /// Mark the protocol as having been successfully completed, returning the generated keys.
  /// This should only be called after having confirmed, with all participants, successful
  /// completion.
  ///
  /// Confirming successful completion is not necessarily as simple as everyone reporting their
  /// completion. Everyone must also receive everyone's report of completion, entering into the
  /// territory of consensus protocols. This library does not handle that nor does it provide any
  /// tooling to do so. This function is solely intended to force users to acknowledge they're
  /// completing the protocol, not processing any blame.
  pub fn complete(self) -> ThresholdCore<C> {
    self.result.unwrap()
  }

  fn blame_internal(
    &self,
    sender: Participant,
    recipient: Participant,
    msg: EncryptedMessage<C, SecretShare<C::F>>,
    proof: Option<EncryptionKeyProof<C>>,
  ) -> Participant {
    let share_bytes = match self.encryption.decrypt_with_proof(sender, recipient, msg, proof) {
      Ok(share_bytes) => share_bytes,
      // If there's an invalid signature, the sender did not send a properly formed message
      Err(DecryptionError::InvalidSignature) => return sender,
      // Decryption will fail if the provided ECDH key wasn't correct for the given message
      Err(DecryptionError::InvalidProof) => return recipient,
    };

    let Some(share) = Option::<C::F>::from(C::F::from_repr(share_bytes.0)) else {
      // If this isn't a valid scalar, the sender is faulty
      return sender;
    };

    // If this isn't a valid share, the sender is faulty
    if !bool::from(
      multiexp_vartime(&share_verification_statements::<C>(
        recipient,
        &self.commitments[&sender],
        Zeroizing::new(share),
      ))
      .is_identity(),
    ) {
      return sender;
    }

    // The share was canonical and valid
    recipient
  }

  /// Given an accusation of fault, determine the faulty party (either the sender, who sent an
  /// invalid secret share, or the receiver, who claimed a valid secret share was invalid). No
  /// matter which, prevent completion of the machine, forcing an abort of the protocol.
  ///
  /// The message should be a copy of the encrypted secret share from the accused sender to the
  /// accusing recipient. This message must have been authenticated as actually having come from
  /// the sender in question.
  ///
  /// In order to enable detecting multiple faults, an `AdditionalBlameMachine` is returned, which
  /// can be used to determine further blame. These machines will process the same blame statements
  /// multiple times, always identifying blame. It is the caller's job to ensure they're unique in
  /// order to prevent multiple instances of blame over a single incident.
  pub fn blame(
    self,
    sender: Participant,
    recipient: Participant,
    msg: EncryptedMessage<C, SecretShare<C::F>>,
    proof: Option<EncryptionKeyProof<C>>,
  ) -> (AdditionalBlameMachine<C>, Participant) {
    let faulty = self.blame_internal(sender, recipient, msg, proof);
    (AdditionalBlameMachine(self), faulty)
  }
}
/// A machine capable of handling an arbitrary amount of additional blame proofs.
#[derive(Debug, Zeroize)]
pub struct AdditionalBlameMachine<C: Ciphersuite>(BlameMachine<C>);
impl<C: Ciphersuite> AdditionalBlameMachine<C> {
  /// Create an AdditionalBlameMachine capable of evaluating blame regardless of whether the
  /// caller was a member in the DKG protocol.
  ///
  /// Takes in the parameters for the DKG protocol and all of the participants' commitment
  /// messages.
  ///
  /// This constructor assumes the full validity of the commitment messages. They must be fully
  /// authenticated as having come from the supposed party and verified as valid. Usage of invalid
  /// commitments is considered undefined behavior, and may cause everything from inaccurate blame
  /// to panics.
  pub fn new(
    context: [u8; 32],
    n: u16,
    mut commitment_msgs: HashMap<Participant, EncryptionKeyMessage<C, Commitments<C>>>,
  ) -> Result<Self, FrostError<C>> {
    let mut commitments = HashMap::new();
    let mut encryption = Decryption::new(context);
    for i in 1 ..= n {
      let i = Participant::new(i).unwrap();
      let Some(msg) = commitment_msgs.remove(&i) else { Err(DkgError::MissingParticipant(i))? };
      commitments.insert(i, encryption.register(i, msg).commitments);
    }
    Ok(AdditionalBlameMachine(BlameMachine { commitments, encryption, result: None }))
  }

  /// Given an accusation of fault, determine the faulty party (either the sender, who sent an
  /// invalid secret share, or the receiver, who claimed a valid secret share was invalid).
  ///
  /// The message should be a copy of the encrypted secret share from the accused sender to the
  /// accusing recipient. This message must have been authenticated as actually having come from
  /// the sender in question.
  ///
  /// This will process the same blame statement multiple times, always identifying blame. It is
  /// the caller's job to ensure they're unique in order to prevent multiple instances of blame
  /// over a single incident.
  pub fn blame(
    &self,
    sender: Participant,
    recipient: Participant,
    msg: EncryptedMessage<C, SecretShare<C::F>>,
    proof: Option<EncryptionKeyProof<C>>,
  ) -> Participant {
    self.0.blame_internal(sender, recipient, msg, proof)
  }
}