This could still be gamed. For [1, 2, 3], the options were ([1], [2, 3]) or
([1, 2], [3]). This means 2 would always have the maximum round count, so this
remains game-able. Accordingly, there's no point in keeping its complexity when
the algorithm is as efficient as it is.
While a properly random value could be used to satisfy 3.7.2, it'd break the
expected determinism.
Also moves the aggregator over to Digest. While a bit verbose for this context,
as all appended items were fixed length, its length prefixing is solid and
the API is pleasant. The downside is the additional dependency, though it's
in-tree and quite compact.
While the prior intent was to avoid zeroizing for vartime verification, which
is assumed not to have any private data, this simplifies the code and promotes
safety.
Offset signing, multi-nonce algorithms, and multi-generator nonce algorithms
are now tested, as are more fault cases.
This wasn't done previously in order to remain 'leaderless'; now the
participant with the lowest ID has an extra step, yet this is still trivial.
There are also notable performance benefits to not taking the previous dividing
approach, which performed an exponentiation.
There are two ways this could be tested:
1) Have Preprocess take in the relevant bytes instead of an arbitrary RNG item.
This would be an unsafe level of refactoring, in my opinion.
2) Test random_nonce and test that the passed-in RNG eventually ends up at
random_nonce.
This takes the latter route, both verifying random_nonce meets the vectors
and that the FROST machine calls random_nonce properly.
The audit recommends checking failure cases for from_bytes,
from_bytes_unchecked, and from_repr. This isn't feasible.
from_bytes is allowed to have non-canonical values. [0xff; 32] may accordingly
be a valid point for non-SEC1-encoded curves.
from_bytes_unchecked doesn't have a defined failure mode and, being 'unchecked'
by name, shouldn't necessarily fail. The audit acknowledges the tests should
check for whatever result is 'appropriate', yet any result which isn't a
failure on a valid element is appropriate.
from_repr must be canonical, yet for a binary field of size 2^n where
n % 8 == 0, a [0xff; n / 8] repr would be valid.
single function
3.4.3 actually describes getting rid of DLEqProof for a thin wrapper around
MultiDLEqProof. That can't be done because DLEqProof doesn't require the std
feature (which enables Vecs), while MultiDLEqProof relies on it.
Merging the verification statement does simplify the code a bit. While merging
the proof generation could also simplify it, that has much less value due to
the simplicity of proving (nonce * G, scalar * G).
This will only be called with 0 if the code fails to do proper screening of its
arguments. If such a flaw is present, the DKG lib is critically broken (as this
function isn't public). If it were allowed to continue executing, it'd reveal
the secret share.
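As a rough illustration of the guard described (the function name and shape are
hypothetical, not the crate's actual code), evaluating the secret-sharing
polynomial at 0 returns its constant term, i.e. the secret, so the guard
asserts before doing any work:

```rust
use ff::PrimeField;

// Hypothetical sketch: evaluate the secret-sharing polynomial at a participant
// index, refusing 0 since f(0) is the secret itself.
fn polynomial_eval<F: PrimeField>(coefficients: &[F], index: u16) -> F {
  // Reaching this with 0 means argument screening failed upstream; continuing
  // would hand back the secret share.
  assert!(index != 0, "attempted to evaluate the secret polynomial at 0");
  let x = F::from(u64::from(index));
  // Horner's method, assuming coefficients are ordered from highest degree down.
  let mut res = F::ZERO;
  for coeff in coefficients {
    res *= x;
    res += *coeff;
  }
  res
}
```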
This converts proofs from 2n elements to 1+n.
Moves FROST over to it. Additionally, for FROST's binomial nonces, provides
a single DLEq proof (2, not 1+2 elements) by proving the discrete log equality
of their aggregate (with an appropriate binding factor). This may be split back
up depending on later commentary...
The prior one did 64 scalar additions for Ed25519. The new one does 8.
This was optimized by parsing u64-by-u64 instead of byte-by-byte.
Improves perf by ~10-15%.
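One way to realize the u64-by-u64 parsing (a sketch using curve25519-dalek's
Scalar for Ed25519; the helper name and exact structure are assumptions, not
the crate's code):

```rust
use curve25519_dalek::scalar::Scalar;

// Fold a 64-byte little-endian value into a scalar eight u64 limbs at a time,
// shifting the accumulator by 2^64 between limbs, instead of 64 per-byte steps.
fn scalar_from_wide(bytes: &[u8; 64]) -> Scalar {
  // 2^64 as a scalar (valid since the group order exceeds 2^64).
  let shift = Scalar::from(u64::MAX) + Scalar::from(1u8);
  let mut res = Scalar::from(0u8);
  // Process limbs from most significant to least significant.
  for limb in bytes.chunks(8).rev() {
    res = (res * shift) + Scalar::from(u64::from_le_bytes(limb.try_into().unwrap()));
  }
  res
}
```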
* Standardize the DLEq serialization function naming
The names didn't match the rest of the project.
This commit is technically incomplete as it doesn't update the dkg crate.
* Rewrite DKG encryption to enable per-message decryption without side effects
This isn't technically true, as I already know of a break in this which I'll
correct shortly.
Does update documentation to explain the new scheme. Required for blame.
* Add a verifiable system for blame during the FROST DKG
Previously, if sent an invalid key share, the participant would realize that
and could accuse the sender. Without further evidence, either the accuser
or the accused could be guilty. Now, the accuser has a proof the accused is
in the wrong.
Reworks KeyMachine to return BlameMachine. This explicitly acknowledges how
locally complete keys still need group acknowledgement before the protocol
can be complete and provides a way for others to verify blame, even after a
locally successful run.
If any blame is cast, the protocol is no longer considered completable
(instead aborting). Further accusations of blame can still be handled, however.
Updates documentation on network behavior.
Also starts to remove "OnDrop". We now use Zeroizing for anything which should
be zeroized on drop. This is a lot more piecemeal and reduces clones.
* Tweak Zeroizing and Debug impls
Expands Zeroizing to be more comprehensive.
Also updates Zeroizing<CachedPreprocess([u8; 32])> to
CachedPreprocess(Zeroizing<[u8; 32]>) so zeroizing is the first thing done
and last step before exposing the copy-able [u8; 32].
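A minimal sketch of the re-ordering described (only the wrapper layering is
the point; the surrounding machinery is omitted):

```rust
use zeroize::Zeroizing;

// Before: the outer wrapper was zeroized, yet the inner [u8; 32] is Copy and
// could escape as a plain array.
// Zeroizing<CachedPreprocess> where struct CachedPreprocess([u8; 32]);

// After: the innermost secret bytes carry the zeroize-on-drop guarantee, and
// only an explicit dereference exposes the Copy-able [u8; 32].
struct CachedPreprocess(Zeroizing<[u8; 32]>);

fn main() {
  let cached = CachedPreprocess(Zeroizing::new([0u8; 32]));
  let _seed: &[u8; 32] = &cached.0;
  // When `cached` drops, the seed bytes are zeroized regardless of how the
  // outer struct was handled.
}
```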
Removes private keys from Debug.
* Fix a bug where adversaries could claim to be using another user's encryption keys to learn their messages
Mentioned a few commits ago, now fixed.
This wouldn't have affected Serai, which aborts on failure, nor any DKG
currently supported. It's just about ensuring the DKG encryption is robust and
proper.
* Finish moving dleq from ser/deser to write/read
* Add tests for dkg blame
* Add a FROST test for invalid signature shares
* Batch verify encrypted messages' ephemeral keys' PoP
While the previous construction achieved n/2 average detection,
this will run in log2(n). Unfortunately, the need to keep entropy
around (or take in an RNG here) remains.
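A minimal sketch of the log2(n) isolation idea (the function and its callback
are hypothetical, not the library's API): if the whole batch fails, split it
and re-verify each half to pin down a faulty item.

```rust
// Returns the index of a faulty item, if any, given a batch-verifier callback.
fn find_faulty<T, F: Fn(&[T]) -> bool>(items: &[T], verify_batch: &F) -> Option<usize> {
  // If the entire batch verifies, no blame is needed.
  if verify_batch(items) {
    return None;
  }
  if items.len() == 1 {
    return Some(0);
  }
  // Otherwise, recurse into halves; the offending item must be in one of them.
  let mid = items.len() / 2;
  find_faulty(&items[.. mid], verify_batch)
    .or_else(|| find_faulty(&items[mid ..], verify_batch).map(|i| mid + i))
}
```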
* Remove the explicit included participants from FROST
Now, whoever submits preprocesses becomes the signing set. Better separates
preprocess from sign, at the cost of slightly more annoying integrations
(Monero now needs to independently lagrange/offset its key images).
* Support caching preprocesses
Closes https://github.com/serai-dex/serai/issues/40.
I *could* have added a serialization trait to Algorithm and written a ton of
data to disk, while requiring Algorithm implementors also accept such work.
Instead, I moved preprocess to a seeded RNG (ChaCha20) which should be as
secure as the regular RNG. Rebuilding from cache simply loads the previously
used ChaCha seed, making the Algorithm oblivious to the fact it's being
rebuilt from a cache. This removes any requirements for it to be modified
while guaranteeing equivalency.
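The idea, sketched with the rand_core and rand_chacha crates (the surrounding
FROST types are omitted): cache only the 32-byte seed, and replaying it
reproduces the identical preprocess.

```rust
use rand_core::{OsRng, RngCore, SeedableRng};
use rand_chacha::ChaCha20Rng;

fn main() {
  // Fresh entropy for the seed; this seed is the only thing cached to disk.
  let mut seed = [0u8; 32];
  OsRng.fill_bytes(&mut seed);

  // Preprocess draws its randomness from the seeded RNG...
  let mut rng = ChaCha20Rng::from_seed(seed);
  let mut nonce = [0u8; 32];
  rng.fill_bytes(&mut nonce);

  // ...so rebuilding from the cached seed yields the same randomness, leaving
  // the Algorithm oblivious to whether it's a fresh run or a rebuild.
  let mut rebuilt = ChaCha20Rng::from_seed(seed);
  let mut rebuilt_nonce = [0u8; 32];
  rebuilt.fill_bytes(&mut rebuilt_nonce);
  assert_eq!(nonce, rebuilt_nonce);
}
```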
This builds on the last commit which delayed determining the signing set till
post-preprocess acquisition. Unfortunately, that commit did force preprocess
from ThresholdView to ThresholdKeys, which had visible effects on Monero.
Serai will actually need delayed set determination for #163, and overall,
it remains better, hence its inclusion.
* Document FROST preprocess caching
* Update ethereum to new FROST
* Fix bug in Monero offset calculation and update processor
Encryption used to be inlined into FROST. When writing the documentation, I
realized it was decently hard to review. It was also antagonistic to other
hosted DKG algorithms by not allowing code re-use.
Encryption is now a standalone module, providing clear boundaries and
reusability.
Additionally, the DKG protocol itself used to use the ciphersuite's specified
hash function (with an HKDF to prevent length extension attacks). Now,
RecommendedTranscript is used to achieve much more robust transcripting and
remove the HKDF dependency. This does add Blake2 into all consumers yet is
preferred for its security properties and ease of review.
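As a rough sketch of transcript-based key derivation replacing HKDF (labels and
the helper are illustrative, assuming the in-tree flexible-transcript crate;
this isn't the DKG's actual scheme):

```rust
use transcript::{Transcript, RecommendedTranscript};

// Derive symmetric key material by appending every contextual value to a
// transcript and taking a challenge, rather than running an HKDF.
fn derive_key(context: &[u8], shared_secret: &[u8]) -> Vec<u8> {
  let mut transcript = RecommendedTranscript::new(b"DKG Encryption Key Sketch");
  transcript.append_message(b"context", context);
  transcript.append_message(b"shared_secret", shared_secret);
  transcript.challenge(b"key").to_vec()
}
```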
* Add dkg crate
* Remove F_len and G_len
They're generally no longer used.
* Replace hash_to_vec with a provided method around associated type H: Digest
Part of trying to minimize this trait so it can be moved elsewhere. Vec,
which isn't available without std, may have been a blocker.
* Encrypt secret shares within the FROST library
Reduces requirements on callers in order to be correct.
* Update usage of Zeroize within FROST
* Inline functions in key_gen
There was no reason to have them separated as they were. sign probably
has the same statement available, yet that isn't the focus right now.
* Add a ciphersuite package which provides hash_to_F
* Set the Ciphersuite version to something valid
* Have ed448 export Scalar/FieldElement/Point at the top level
* Move FROST over to Ciphersuite
* Correct usage of ff in ciphersuite
* Correct documentation handling
* Move Schnorr signatures to their own crate
* Remove unused feature from schnorr
* Fix Schnorr tests
* Split DKG into a separate crate
* Add serialize to Commitments and SecretShare
Helper for buf = vec![]; .write(buf).unwrap(); buf
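In sketch form (WriteTo is a stand-in trait for illustration; Commitments and
SecretShare expose write directly), the helper is simply:

```rust
use std::io::{self, Write};

// Stand-in for types which already provide a write() method (e.g. the
// Commitments and SecretShare mentioned above).
trait WriteTo {
  fn write<W: Write>(&self, writer: &mut W) -> io::Result<()>;

  // The added helper: collect the written bytes into a Vec.
  fn serialize(&self) -> Vec<u8> {
    let mut buf = vec![];
    self.write(&mut buf).unwrap();
    buf
  }
}
```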
* Move FROST over to the new dkg crate
* Update Monero lib to latest FROST
* Correct ethereum's usage of features
* Add serialize to GeneratorProof
* Add serialize helper function to FROST
* Rename AddendumSerialize to WriteAddendum
* Update processor
* Slight fix to processor
* Create message types for FROST key gen
Taking in reader borrows absolutely wasn't feasible. Now, proper types
which can be read (and then passed directly, without a mutable borrow)
exist for key_gen. sign coming next.
* Move FROST signing to messages, not Readers/Writers/Vec<u8>
Also takes the nonce handling code and makes a dedicated file for it,
aiming to resolve complex types and make the code more legible by
replacing its previously inlined state.
* clippy
* Update FROST tests
* read_signature_share
* Update the Monero library to the new FROST packages
* Update processor to latest FROST
* Tweaks to terminology and documentation
Ensures random functions never return zero. This, combined with a check that
commitments aren't 0, causes no serialized elements to be 0.
Also directly reads their vectors.
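A minimal sketch of the non-zero sampling behavior (assuming the ff Field
trait; the loop shape is illustrative, not the crates' actual code):

```rust
use ff::Field;
use rand_core::{CryptoRng, RngCore};

// Rejection-sample until the random field element is non-zero, so a 0 can
// never be produced (and thus never serialized).
fn random_nonzero<F: Field, R: RngCore + CryptoRng>(rng: &mut R) -> F {
  loop {
    let candidate = F::random(&mut *rng);
    if !bool::from(candidate.is_zero()) {
      return candidate;
    }
  }
}
```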
* Label the version as an alpha
* Add versions to Cargo.tomls
* Update to Zeroize 1.5
* Drop patch versions from monero-serai Cargo.toml
* Add a repository field
* Move generators to OUT_DIR
IIRC, I didn't do this originally as it constantly re-generated them.
Unfortunately, since cargo is complaining about .generators, we have to.
* Remove Timelock::fee_weight
Transaction::fee_weight has a comment, "Assumes Timelock::None since
this library won't let you create a TX with a timelock". Accordingly,
this is dead code.
Despite being slower and only used for blinding values, it's still
extremely performant. 20 is far more standard and will avoid raised
eyebrows from reviewers.
While Group::random shouldn't be used instead of a hash to curve, anyone
who did would've previously been insecure and now isn't.
Could've done a recover_x and a raw Point construction, followed by a
cofactor mul, to avoid the serialization, yet the serialization ensures
full validity under the standard from_bytes function. This also doesn't
need to be micro-optimized.