This could still be gamed. For [1, 2, 3], the options were ([1], [2, 3]) or
([1, 2], [3]). Either way, 2 would always have the maximum round count, so
this remained game-able. There's accordingly no point in keeping its
complexity when the algorithm is as efficient as it is.
While proper randomness could be used to satisfy 3.7.2, it'd break the
expected determinism.
Also moves the aggregator over to Digest. While a bit verbose for this context,
as all appended items were fixed length, its length prefixing is solid and
the API is pleasant. The downside is the additional dependency, though it's
in-tree and quite compact.
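As a sketch of the pattern (Blake2b512 as an illustrative Digest; this
isn't the crate's actual helper):

  use blake2::{Blake2b512, Digest};

  // Length-prefix each item so concatenated appends can't be ambiguous.
  fn append(digest: &mut Blake2b512, item: &[u8]) {
    digest.update(u64::try_from(item.len()).unwrap().to_le_bytes());
    digest.update(item);
  }

  fn main() {
    let mut digest = Blake2b512::new();
    append(&mut digest, b"commitment A");
    append(&mut digest, b"commitment B");
    println!("{:?}", digest.finalize());
  }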
While the prior intent was to avoid zeroizing for vartime verification, which
is assumed to not have any private data, this simplifies the code and promotes
safety.
Offset signing is now tested. Multi-nonce algorithms are now tested.
Multi-generator nonce algorithms are now tested. More fault cases are now tested
as well.
This wasn't done prior in order to be 'leaderless', as now the participant
with the lowest ID has an extra step, yet that step is still trivial. There
are also notable performance benefits to dropping the previous dividing
approach, which performed an exp.
There are two ways this could be tested.
1) Have preprocess take in the relevant bytes instead of an arbitrary RNG
item. This would be an unsafe level of refactoring, in my opinion.
2) Test random_nonce, and test that the RNG passed in eventually ends up at
random_nonce.
This takes the latter route, both verifying random_nonce meets the vectors
and that the FROST machine calls random_nonce properly.
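A minimal sketch of the determinism check behind the latter route (the
random_nonce closure is a stand-in, not the actual signature):

  use rand_core::SeedableRng;
  use rand_chacha::ChaCha20Rng;

  fn main() {
    // Stand-in for random_nonce: any function deriving a nonce from an RNG.
    let random_nonce = |rng: &mut ChaCha20Rng| {
      let mut nonce = [0u8; 32];
      rand_core::RngCore::fill_bytes(rng, &mut nonce);
      nonce
    };

    // Identical seeds must yield identical nonces, demonstrating the RNG
    // passed in is what actually feeds nonce generation.
    let a = random_nonce(&mut ChaCha20Rng::from_seed([0; 32]));
    let b = random_nonce(&mut ChaCha20Rng::from_seed([0; 32]));
    assert_eq!(a, b);
  }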
The audit recommends checking failure cases for from_bytes,
from_bytes_unchecked, and from_repr. This isn't feasible.
from_bytes is allowed to have non-canonical values. [0xff; 32] may accordingly
be a valid point for non-SEC1-encoded curves.
from_bytes_unchecked doesn't have a defined failure mode, and by name,
unchecked, shouldn't necessarily fail. The audit acknowledges the tests should
test for whatever result is 'appropriate', yet any result which isn't a failure
on a valid element is appropriate.
from_repr must be canonical, yet for a binary field of 2^n where n % 8 == 0, a
[0xff; n / 8] repr would be valid.
3.4.3 actually describes getting rid of DLEqProof for a thin wrapper around
MultiDLEqProof. That can't be done, as DLEqProof doesn't require the std
features (which enable Vecs) that MultiDLEqProof relies on.
Merging the verification statement does simplify the code a bit. While
merging the proof could also simplify it, that has much less value due to
the simplicity of proving (nonce * G, scalar * G).
This will only be called with 0 if the code fails to properly screen its
arguments. If such a flaw is present, the DKG lib is critically broken (as
this function isn't public). If it were allowed to continue executing, it'd
reveal the secret share.
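For intuition, a toy sketch (u64 arithmetic standing in for field math) of
why evaluating at 0 is catastrophic:

  // A Shamir polynomial stores the secret as its constant term, so f(0)
  // is the secret itself. Hence the hard abort over continuing execution.
  fn polynomial_eval(coefficients: &[u64], i: u64) -> u64 {
    assert!(i != 0, "evaluating the share polynomial at 0");
    // Horner's method; wrapping ops stand in for modular reduction.
    coefficients.iter().rev().fold(0u64, |acc, c| acc.wrapping_mul(i).wrapping_add(*c))
  }

  fn main() {
    let f = [42, 5, 7]; // f(x) = 42 + 5x + 7x^2, secret = 42
    assert_eq!(polynomial_eval(&f, 1), 54);
  }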
This converts proofs from 2n elements to 1+n.
Moves FROST over to it. Additionally, for FROST's binomial nonces, provides
a single DLEq proof (2, not 1+2 elements) by proving the discrete log equality
of their aggregate (with an appropriate binding factor). This may be split back
up depending on later commentary...
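For intuition, a toy sketch of the aggregation (curve25519-dalek
arithmetic; the scalars are illustrative, and the real binding factor is a
transcript challenge):

  use curve25519_dalek::{constants::ED25519_BASEPOINT_POINT as G, scalar::Scalar};

  fn main() {
    let (d, e) = (Scalar::from(7u64), Scalar::from(11u64));
    let binding = Scalar::from(13u64);

    // Aggregate the binomial nonce components under the binding factor.
    let aggregate = d + (e * binding);

    // A single DLEq proof over the aggregate suffices, as the aggregate's
    // commitment is publicly derivable from the individual commitments.
    assert_eq!(G * aggregate, (G * d) + ((G * e) * binding));
  }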
The prior one did 64 scalar additions for Ed25519. The new one does 8,
optimized by parsing u64-by-u64 instead of byte-by-byte.
Improves perf by ~10-15%.
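A rough sketch of the limb loading (modular reduction omitted):

  // Load a 64-byte wide input as 8 u64 limbs instead of 64 single bytes.
  fn load_limbs(bytes: &[u8; 64]) -> [u64; 8] {
    let mut limbs = [0u64; 8];
    for (limb, chunk) in limbs.iter_mut().zip(bytes.chunks_exact(8)) {
      *limb = u64::from_le_bytes(chunk.try_into().unwrap());
    }
    limbs
  }

  fn main() {
    let mut bytes = [0u8; 64];
    bytes[8] = 1;
    assert_eq!(load_limbs(&bytes)[1], 1);
  }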
* Standardize the DLEq serialization function naming
They mismatched from the rest of the project.
This commit is technically incomplete as it doesn't update the dkg crate.
* Rewrite DKG encryption to enable per-message decryption without side effects
This isn't technically true, as I already know of a break in this which
I'll correct shortly.
Does update documentation to explain the new scheme. Required for blame.
* Add a verifiable system for blame during the FROST DKG
Previously, if sent an invalid key share, the participant would realize that
and could accuse the sender. Without further evidence, either the accuser
or the accused could be guilty. Now, the accuser has a proof the accused is
in the wrong.
Reworks KeyMachine to return BlameMachine. This explicitly acknowledges how
locally complete keys still need group acknowledgement before the protocol
can be complete and provides a way for others to verify blame, even after a
locally successful run.
If any blame is cast, the protocol is no longer considered complete-able
(instead aborting). Further accusations of blame can still be handled,
however.
Updates documentation on network behavior.
Also starts to remove "OnDrop". We now use Zeroizing for anything which should
be zeroized on drop. This is a lot more piecemeal and reduces clones.
* Tweak Zeroizing and Debug impls
Expands Zeroizing to be more comprehensive.
Also updates Zeroizing<CachedPreprocess([u8; 32])> to
CachedPreprocess(Zeroizing<[u8; 32]>) so the Zeroizing wrapper is the
innermost layer, making its removal the explicit last step before exposing
the copy-able [u8; 32].
Removes private keys from Debug.
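A minimal sketch of the new layering (the real definition may differ in
detail):

  use zeroize::Zeroizing;

  // The Zeroizing wrapper is now innermost, so the secret bytes are
  // zeroized on drop regardless of how the outer newtype is handled.
  struct CachedPreprocess(Zeroizing<[u8; 32]>);

  fn main() {
    let cached = CachedPreprocess(Zeroizing::new([1u8; 32]));
    // Removing Zeroizing (a deref copy) is explicitly the last step
    // before the copy-able [u8; 32] is exposed.
    let exposed: [u8; 32] = *cached.0;
    assert_eq!(exposed[0], 1);
  }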
* Fix a bug where adversaries could claim to be using another user's encryption keys to learn their messages
Mentioned a few commits ago, now fixed.
This wouldn't have affected Serai, which aborts on failure, nor any DKG
currently supported. It's just about ensuring the DKG encryption is robust and
proper.
* Finish moving dleq from ser/deser to write/read
* Add tests for dkg blame
* Add a FROST test for invalid signature shares
* Batch verify encrypted messages' ephemeral keys' PoP
While the previous construction achieved n/2 average detection,
this will run in log2(n). Unfortunately, the need to keep entropy
around (or take in an RNG here) remains.
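A minimal sketch of locating a fault by recursive halving, where verify
stands in for batch verification of a range of PoPs:

  fn find_fault(lo: usize, hi: usize, verify: &impl Fn(usize, usize) -> bool) -> Option<usize> {
    if verify(lo, hi) {
      return None;
    }
    if (hi - lo) == 1 {
      return Some(lo);
    }
    // On failure, split the batch and recurse into whichever half fails.
    let mid = lo + ((hi - lo) / 2);
    find_fault(lo, mid, verify).or_else(|| find_fault(mid, hi, verify))
  }

  fn main() {
    // Item 5 of 8 is faulty; a range verifies iff it excludes item 5.
    let verify = |lo: usize, hi: usize| !(lo .. hi).contains(&5);
    assert_eq!(find_fault(0, 8, &verify), Some(5));
  }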
* Remove the explicit included participants from FROST
Now, whoever submits preprocesses becomes the signing set. Better separates
preprocess from sign, at the cost of slightly more annoying integrations
(Monero now needs to independently lagrange/offset its key images).
* Support caching preprocesses
Closes https://github.com/serai-dex/serai/issues/40.
I *could* have added a serialization trait to Algorithm and written a ton of
data to disk, while requiring Algorithm implementors also accept such work.
Instead, I moved preprocess to a seeded RNG (ChaCha20) which should be as
secure as the regular RNG. Rebuilding from cache simply loads the previously
used ChaCha seed, making the Algorithm oblivious to the fact it's being
rebuilt from a cache. This removes any requirement for it to be modified
while guaranteeing equivalency.
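A minimal sketch of the rebuild equivalency (seed value illustrative):

  use rand_core::{RngCore, SeedableRng};
  use rand_chacha::ChaCha20Rng;

  fn main() {
    // In practice, the seed is drawn from the caller's RNG and cached.
    let seed = [42u8; 32];

    let mut original = ChaCha20Rng::from_seed(seed);
    let mut rebuilt = ChaCha20Rng::from_seed(seed);

    let (mut a, mut b) = ([0u8; 32], [0u8; 32]);
    original.fill_bytes(&mut a);
    rebuilt.fill_bytes(&mut b);
    // Identical streams mean identical preprocesses, so the Algorithm
    // can't distinguish a rebuild from the original run.
    assert_eq!(a, b);
  }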
This builds on the last commit, which delayed determining the signing set
till post-preprocess acquisition. Unfortunately, that commit did force
moving preprocess from ThresholdView to ThresholdKeys, which had visible
effects on Monero.
Serai will actually need delayed set determination for #163, and overall,
it remains better, hence its inclusion.
* Document FROST preprocess caching
* Update ethereum to new FROST
* Fix bug in Monero offset calculation and update processor
Encryption used to be inlined into FROST. When writing the documentation, I
realized it was decently hard to review. It also was antagonistic to other
hosted DKG algorithms by not allowing code re-use.
Encryption is now a standalone module, providing clear boundaries and
reusability.
Additionally, the DKG protocol itself used to use the ciphersuite's specified
hash function (with an HKDF to prevent length extension attacks). Now,
RecommendedTranscript is used to achieve much more robust transcripting and
remove the HKDF dependency. This does add Blake2 into all consumers yet is
preferred for its security properties and ease of review.
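A sketch of the resulting transcript pattern, assuming flexible-transcript's
Transcript trait (labels illustrative):

  use transcript::{Transcript, RecommendedTranscript};

  fn main() {
    let mut transcript = RecommendedTranscript::new(b"DKG Example");
    transcript.domain_separate(b"encryption");
    transcript.append_message(b"participant", &[1u8]);
    // Challenges bind every prior append, replacing the HKDF usage.
    let _key_material = transcript.challenge(b"shared key");
  }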
* Add dkg crate
* Remove F_len and G_len
They're generally no longer used.
* Replace hash_to_vec with a provided method around associated type H: Digest
Part of trying to minimize this trait so it can be moved elsewhere. Vec,
which isn't available under no_std, may have been a blocker.
* Encrypt secret shares within the FROST library
Reduces requirements on callers in order to be correct.
* Update usage of Zeroize within FROST
* Inline functions in key_gen
There was no reason to have them separated as they were. The same statement
probably applies to sign, yet that isn't the focus right now.
* Add a ciphersuite package which provides hash_to_F
* Set the Ciphersuite version to something valid
* Have ed448 export Scalar/FieldElement/Point at the top level
* Move FROST over to Ciphersuite
* Correct usage of ff in ciphersuite
* Correct documentation handling
* Move Schnorr signatures to their own crate
* Remove unused feature from schnorr
* Fix Schnorr tests
* Split DKG into a separate crate
* Add serialize to Commitments and SecretShare
Helper for the common pattern:
let mut buf = vec![]; item.write(&mut buf).unwrap(); buf
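Roughly the following shape (trait and types illustrative):

  use std::io::{self, Write};

  trait Serializable {
    fn write<W: Write>(&self, writer: &mut W) -> io::Result<()>;

    // The helper: allocate, write (infallible for Vec), return the buffer.
    fn serialize(&self) -> Vec<u8> {
      let mut buf = vec![];
      self.write(&mut buf).unwrap();
      buf
    }
  }

  struct SecretShare([u8; 32]);
  impl Serializable for SecretShare {
    fn write<W: Write>(&self, writer: &mut W) -> io::Result<()> {
      writer.write_all(&self.0)
    }
  }

  fn main() {
    assert_eq!(SecretShare([7; 32]).serialize().len(), 32);
  }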
* Move FROST over to the new dkg crate
* Update Monero lib to latest FROST
* Correct ethereum's usage of features
* Add serialize to GeneratorProof
* Add serialize helper function to FROST
* Rename AddendumSerialize to WriteAddendum
* Update processor
* Slight fix to processor
* Create message types for FROST key gen
Taking in reader borrows absolutely wasn't feasible. Now, proper types
which can be read (and then passed directly, without a mutable borrow)
exist for key_gen. sign coming next.
* Move FROST signing to messages, not Readers/Writers/Vec<u8>
Also takes the nonce handling code and makes a dedicated file for it,
aiming to resolve complex types and make the code more legible by
replacing its previously inlined state.
* clippy
* Update FROST tests
* read_signature_share
* Update the Monero library to the new FROST packages
* Update processor to latest FROST
* Tweaks to terminology and documentation
Ensures random functions never return zero. This, combined with a check
that commitments aren't 0, causes no serialized elements to be 0.
Also directly reads their vectors.
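A minimal sketch of the non-zero guarantee via rejection sampling, generic
over ff's Field (even a single retry is negligibly likely for a
cryptographic field):

  use rand_core::{RngCore, CryptoRng};
  use ff::Field;

  fn random_nonzero<F: Field, R: RngCore + CryptoRng>(rng: &mut R) -> F {
    loop {
      let candidate = F::random(&mut *rng);
      // Resample on the (negligible) chance the sample was zero.
      if !bool::from(candidate.is_zero()) {
        return candidate;
      }
    }
  }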
* Label the version as an alpha
* Add versions to Cargo.tomls
* Update to Zeroize 1.5
* Drop patch versions from monero-serai Cargo.toml
* Add a repository field
* Move generators to OUT_DIR
IIRC, I didn't do this originally as it constantly re-generated them.
Unfortunately, since cargo is complaining about .generators, we have to.
* Remove Timelock::fee_weight
Transaction::fee_weight has a comment, "Assumes Timelock::None since
this library won't let you create a TX with a timelock". Accordingly,
this is dead code.
Despite being slower and only used for blinding values, it's still
extremely performant. 20 is far more standard and will avoid raised
eyebrows from reviewers.
While Group::random shouldn't be used instead of a hash to curve, anyone
who did would've previously been insecure and now isn't.
Could've done a recover_x and a raw Point construction, followed by a
cofactor mul, to avoid the serialization, yet the serialization ensures
full validity under the standard from_bytes function. This also doesn't
need to be micro-optimized.
* Theoretical ed448 impl
* Fixes
* Basic tests
* More efficient scalarmul
Precomputes a table to minimize additions required.
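A toy sketch of the idea, with integer arithmetic standing in for point
operations (the real table holds curve points and the shifts are point
doublings):

  // Precompute table[i] = i * point for a 4-bit window, then process the
  // scalar window-by-window: per-window work is one lookup plus shifts,
  // instead of per-bit additions.
  fn scalarmul_windowed(point: i64, scalar: u64) -> i64 {
    let mut table = [0i64; 16];
    for i in 1 .. 16 {
      table[i] = table[i - 1] + point;
    }

    let mut res = 0i64;
    for w in (0 .. 16).rev() {
      res *= 16; // four doublings
      res += table[((scalar >> (4 * w)) & 0b1111) as usize];
    }
    res
  }

  fn main() {
    assert_eq!(scalarmul_windowed(7, 12345), 7 * 12345);
  }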
* Add a torsion test
* Split into a constant and variable time backend
The variable time one is still far too slow, at 53s for the tests (~5s a
scalarmul). It should be usable as a PoC though.
* Rename unsafe Ed448
It's not worthy of the Serai branding and deserves more clarity in the
name.
* Add wide reduction to ed448
* Add Zeroize to Ed448
* Rename Ed448 group.rs to point.rs
* Minor lint to FROST
* Ed448 ciphersuite with 8032 test vector
* Macro out the backend fields
* Slight efficiency improvement to point decompression
* Disable the multiexp test in FROST for Ed448
* fmt + clippy ed448
* Fix an infinite loop in the constant time ed448 backend
* Add b"chal" to the 8032 context string for Ed448
Successfully tests against proposed vectors for the FROST IETF draft.
* Fix fmt and clippy
* Use a tabled pow algorithm in ed448's const backend
* Slight tweaks to variable time backend
Stop from_repr(MODULUS) from passing.
* Use extended points
Almost two orders of magnitude faster.
* Efficient ed448 doubling
* Remove the variable time backend
With the recent performance improvements, the constant time backend is
now 4x faster than the variable time backend was. While a variable time
backend would remain much faster, and the constant time backend is still
slow compared to other libraries, it's sufficiently performant now.
The FROST test, which runs a series of multiexps over the curve, takes
218.26s, while Ristretto takes 1s and secp256k1 takes 4.57s.
While ~50x slower than secp256k1 is horrible, it's ~1.5 orders of
magnitude, which is close enough to the desire stated in
https://github.com/serai-dex/serai/issues/108 to meet it.
Largely makes this library safe to use.
* Correct constants in ed448
* Rename unsafe-ed448 to minimal-ed448
Enables all FROST tests against it.
* No longer require the hazmat feature to use ed448
* Remove extraneous as_refs
Potentially improves privacy with the reversion to a coordinator
setting, where the coordinator is the only party with the offset. While
any signer (or anyone) can claim key A relates to B, they can't prove it
without the discrete log of the offset. This enables creating a signing
process without a known offset, while maintaining a consistent
transcript format.
Doesn't affect security given a static generator. Does have a slight
effect on performance.
* Apply Zeroize to nonces used in Bulletproofs
Also makes bit decomposition constant time for a given amount of
outputs.
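A minimal sketch of branch-free bit extraction using subtle's Choice (the
actual code decomposes output amounts for Bulletproofs):

  use subtle::Choice;

  // Every bit is derived via shifts and masks, never by branching on the
  // secret value.
  fn bits_ct(x: u64) -> [Choice; 64] {
    core::array::from_fn(|i| Choice::from(((x >> i) & 1) as u8))
  }

  fn main() {
    let bits = bits_ct(0b1011);
    assert!(bool::from(bits[3]) && !bool::from(bits[2]));
  }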
* Fix nonce reuse for single-signer CLSAG
* Attach Zeroize to most structures in Monero, and ZOnDrop to anything with private data
* Zeroize private keys and nonces
* Merge prepare_outputs and prepare_transactions
* Ensure CLSAG is constant time
* Pass by borrow where needed, bug fixes
The past few commits have been one in-progress chunk which I've broken up
to read as well as possible.
* Add Zeroize to FROST structs
Still needs to zeroize internally, yet that's the next step. Not quite as
aggressive as Monero, partially due to the limitations of HashMaps,
partially due to less concern about metadata, yet it does still delete a
few smaller items of metadata (group key, context string...).
* Remove Zeroize from most Monero multisig structs
These structs largely didn't have private data, just fields with private
data, yet those fields implemented ZeroizeOnDrop, making them already
covered. While there are still traces of the transaction left in RAM,
fully purging those was never the intent.
* Use Zeroize within dleq
bitvec doesn't offer Zeroize, so a manual zeroing has been implemented.
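A sketch of the manual zeroing (wrapper type and exact strategy
illustrative; a production impl must also prevent the writes from being
optimized away):

  use bitvec::prelude::*;
  use zeroize::Zeroize;

  struct SecretBits(BitVec<u8, Lsb0>);

  impl Zeroize for SecretBits {
    fn zeroize(&mut self) {
      // Overwrite every bit before releasing the view of them.
      self.0.fill(false);
      self.0.clear();
    }
  }

  fn main() {
    let mut bits = SecretBits(BitVec::repeat(true, 8));
    bits.zeroize();
    assert!(bits.0.is_empty());
  }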
* Use Zeroize for random_nonce
It isn't perfect, due to the inability to zeroize the digest, and due to
kp256 requiring a few transformations. It does the best it can though.
Does move the per-curve random_nonce to a provided one, which is allowed
as of https://github.com/cfrg/draft-irtf-cfrg-frost/pull/231.
* Use Zeroize on FROST keygen/signing
* Zeroize constant time multiexp.
* Correct when FROST keygen zeroizes
* Move the FROST keys Arc into FrostKeys
Reduces amount of instances in memory.
* Manually implement Debug for FrostCore to not leak the secret share
* Misc bug fixes
* clippy + multiexp test bug fixes
* Correct FROST key gen share summation
It leaked our own share for ourselves.
* Fix cross-group DLEq tests
* Use a struct in an enum for Bulletproofs
* verification bp working for just one proof
* add some more assert tests
* Clean BP verification
* Implement batch verification
* Add a debug assertion w_cache isn't 0
It's initially set to 0 and if not updated, this would be broken.
* Correct Monero workflow yaml
* Again try to correct Monero workflow yaml
* Again
* Finally
* Re-apply weights as required by Bulletproofs
Removing these was insecure and my fault.
Co-authored-by: DangerousFreedom <dangfreed@tutanota.com>
* Initial attempt at Bulletproofs
I don't know why this doesn't work. The generators and hash_cache lines
up without issue. AFAICT, the inner product proof is valid as well, as
are all included formulas.
* Add yinvpow asserts
* Clean code
* Correct bad imports
* Fix the definition of TWO_N
Bulletproofs work now :D
* Tidy up a bit
* fmt + clippy
* Compile a variety of XMR dependencies with optimizations, even under dev
The Rust bulletproof implementation is 8% slower than C right now, under
release. This is acceptable, even if suboptimal. Under debug, they take
a quarter of a second to two seconds though, depending on the amount of
outputs, which justifies this move.
* Remove unnecessary deref in BPs
Currently intended to be done with:
cargo clippy --features "recommended merlin batch serialize experimental
ed25519 ristretto p256 secp256k1 multisig" -- -A clippy::type_complexity
-A dead_code
The two-generator limit was neither required nor beneficial. This does
theoretically optimize FROST, yet not for any current constructions. A
follow up proof which would optimize current constructions has been
noted in #38.
Adds explicit no_std support to the core DLEq proof.
Closes #34.
Removes from_canonical_bytes, which is offered by from_repr, and
from_bytes_mod_order, which frequently leads to security issues.
Removes the pointless Compressed type.
Adds From u8/u16/u32 as they're pleasant.