* Label the version as an alpha
* Add versions to Cargo.tomls
* Update to Zeroize 1.5
* Drop patch versions from monero-serai Cargo.toml
* Add a repository field
* Move generators to OUT_DIR
IIRC, I didn't do this originally as it constantly re-generated them.
Unfortunately, since cargo is complaining about .generators, we have to.
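As a rough sketch of the OUT_DIR pattern (the file name and the
serialize_generators stand-in are illustrative, not the actual build script):

```rust
// build.rs (sketch): write the precomputed generators into OUT_DIR so cargo
// owns the artifact, instead of committing a .generators file into the tree.
use std::{env, fs, path::Path};

// Hypothetical stand-in for however the generators are actually derived.
fn serialize_generators() -> Vec<u8> {
  vec![0u8; 64]
}

fn main() {
  let out = Path::new(&env::var("OUT_DIR").unwrap()).join("generators");
  fs::write(out, serialize_generators()).unwrap();
  // Limit needless re-generation to changes in the build script itself.
  println!("cargo:rerun-if-changed=build.rs");
}
```

The crate can then embed the file with
include_bytes!(concat!(env!("OUT_DIR"), "/generators")).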
* Remove Timelock::fee_weight
Transaction::fee_weight has a comment, "Assumes Timelock::None since
this library won't let you create a TX with a timelock". Accordingly,
this is dead code.
Despite being slower and only used for blinding values, it's still
extremely performant. 20 is far more standard and will avoid raised
eyebrows from reviewers.
While Group::random shouldn't be used instead of a hash to curve, anyone
who did would've previously been insecure and now isn't.
Could've done a recover_x and a raw Point construction, followed by a
cofactor mul, to avoid the serialization, yet the serialization ensures
full validity under the standard from_bytes function. This also doesn't
need to be micro-optimized.
* Theoretical ed448 impl
* Fixes
* Basic tests
* More efficient scalarmul
Precomputes a table to minimize additions required.
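Roughly the following shape, as a sketch over the ff/group traits rather than
the actual backend code (and not constant time as written):

```rust
use group::Group;

// Fixed-window scalar multiplication sketch: precompute 0P..15P once, then
// handle the scalar four bits at a time, costing one table addition per
// window instead of up to four additions in a plain double-and-add.
fn window_mul<G: Group>(point: G, scalar_le_bytes: &[u8]) -> G {
  let mut table = [G::identity(); 16];
  for i in 1 .. 16 {
    table[i] = table[i - 1] + point;
  }

  let mut res = G::identity();
  for byte in scalar_le_bytes.iter().rev().copied() {
    for nibble in [byte >> 4, byte & 0b1111] {
      // Shift the accumulator up by one window.
      for _ in 0 .. 4 {
        res = res.double();
      }
      res += table[usize::from(nibble)];
    }
  }
  res
}
```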
* Add a torsion test
* Split into a constant and variable time backend
The variable time one is still far too slow, at 53s for the tests (~5s a
scalarmul). It should be usable as a PoC though.
* Rename unsafe Ed448
It's not only unworthy of the Serai branding, it also deserves more
clarity in the name.
* Add wide reduction to ed448
* Add Zeroize to Ed448
* Rename Ed448 group.rs to point.rs
* Minor lint to FROST
* Ed448 ciphersuite with 8032 test vector
* Macro out the backend fields
* Slight efficiency improvement to point decompression
* Disable the multiexp test in FROST for Ed448
* fmt + clippy ed448
* Fix an infinite loop in the constant time ed448 backend
* Add b"chal" to the 8032 context string for Ed448
Successfully tests against proposed vectors for the FROST IETF draft.
* Fix fmt and clippy
* Use a tabled pow algorithm in ed448's const backend
* Slight tweaks to variable time backend
Stop from_repr(MODULUS) from passing.
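The canonicity check amounts to something like the following (MODULUS_LE is a
placeholder here, not the real constant):

```rust
// Reject any little-endian encoding >= the modulus, so from_repr(MODULUS)
// fails instead of silently wrapping. Variable time is acceptable here as
// this is the variable time backend and the encoding is public.
const MODULUS_LE: [u8; 56] = [0xff; 56]; // placeholder value

fn is_canonical(repr: &[u8; 56]) -> bool {
  // Compare from the most significant byte down; the first difference decides.
  for i in (0 .. 56).rev() {
    if repr[i] < MODULUS_LE[i] {
      return true;
    }
    if repr[i] > MODULUS_LE[i] {
      return false;
    }
  }
  // Exactly the modulus, hence not canonical.
  false
}
```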
* Use extended points
Almost two orders of magnitude faster.
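The gain comes from the representation itself; a sketch over a generic field,
not the crate's actual point type:

```rust
use ff::Field;

// Extended coordinates: x = X/Z, y = Y/Z, T = X*Y/Z. Additions and doublings
// then avoid field inversions entirely; the single inversion is only paid
// when converting back to affine.
#[derive(Clone, Copy)]
struct Extended<F: Field> {
  x: F,
  y: F,
  z: F,
  t: F,
}

impl<F: Field> Extended<F> {
  fn from_affine(x: F, y: F) -> Self {
    Extended { x, y, z: F::one(), t: x * y }
  }

  // Assumes z is non-zero, which holds for any point built via from_affine.
  fn to_affine(self) -> (F, F) {
    let z = self.z.invert().unwrap();
    (self.x * z, self.y * z)
  }
}
```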
* Efficient ed448 doubling
* Remove the variable time backend
With the recent performance improvements, the constant time backend is
now 4x faster than the variable time backend was. While a variable time
backend would still be much faster, and the constant time backend is
still slow compared to other libraries, it's sufficiently performant now.
The FROST test, which runs a series of multiexps over the curve, does
take 218.26s while Ristretto takes 1s and secp256k1 takes 4.57s.
While 50x slower than secp256k1 is horrible, it's ~1.5 orders of
magnitude, which is close enough to the desire stated in
https://github.com/serai-dex/serai/issues/108 to meet it.
Largely makes this library safe to use.
* Correct constants in ed448
* Rename unsafe-ed448 to minimal-ed448
Enables all FROST tests against it.
* No longer require the hazmat feature to use ed448
* Remove extraneous as_refs
Potentially improves privacy with the reversion to a coordinator
setting, where the coordinator is the only party with the offset. While
any signer (or anyone) can claim key A relates to B, they can't prove it
without the discrete log of the offset. This enables creating a signing
process without a known offset, while maintaining a consistent
transcript format.
Doesn't affect security given a static generator. Does have a slight
effect on performance.
* Apply Zeroize to nonces used in Bulletproofs
Also makes bit decomposition constant time for a given amount of
outputs.
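The bit decomposition is now along these lines (a sketch with a generic
field, not the Monero code itself):

```rust
use ff::Field;
use subtle::Choice;

// Constant-time bit decomposition of an amount: every call performs the same
// 64 iterations and selects between zero and one without branching on the
// secret bits.
fn bit_decompose<F: Field>(amount: u64) -> Vec<F> {
  (0 .. 64)
    .map(|i| {
      let bit = Choice::from(((amount >> i) & 1) as u8);
      F::conditional_select(&F::zero(), &F::one(), bit)
    })
    .collect()
}
```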
* Fix nonce reuse for single-signer CLSAG
* Attach Zeroize to most structures in Monero, and ZeroizeOnDrop to anything with private data
* Zeroize private keys and nonces
* Merge prepare_outputs and prepare_transactions
* Ensure CLSAG is constant time
* Pass by borrow where needed, bug fixes
The past few commits have been one in-progress chunk which I've broken
up to read as well as possible.
* Add Zeroize to FROST structs
Still needs to zeroize internally, yet that's the next step. Not quite as
aggressive as Monero, partially due to the limitations of HashMaps,
partially due to less concern about metadata, yet does still delete a
few smaller items of metadata (group key, context string...).
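The rough shape, with illustrative field names rather than the actual FROST
structs:

```rust
use std::collections::HashMap;
use zeroize::Zeroize;

// Secrets and small metadata are zeroized on request; HashMap-backed fields,
// which Zeroize can't clear in place, are skipped and handled separately.
#[derive(Zeroize)]
struct KeyMachineSketch {
  secret_share: [u8; 32],
  context: String,
  #[zeroize(skip)]
  commitments: HashMap<u16, Vec<u8>>,
}
```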
* Remove Zeroize from most Monero multisig structs
These structs largely didn't have private data, just fields with private
data, yet those fields implemented ZeroizeOnDrop making them already
covered. While there are still traces of the transaction left in RAM,
fully purging that was never the intent.
* Use Zeroize within dleq
bitvec doesn't offer Zeroize, so a manual zeroing has been implemented.
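Conceptually along these lines, though without the volatile-write guarantees
Zeroize itself provides:

```rust
use bitvec::prelude::*;

// Best-effort manual zeroing for a BitVec, which doesn't implement Zeroize:
// overwrite every bit, then clear the vector.
fn zeroize_bits(bits: &mut BitVec) {
  for i in 0 .. bits.len() {
    bits.set(i, false);
  }
  bits.truncate(0);
}
```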
* Use Zeroize for random_nonce
It isn't perfect, due to the inability to zeroize the digest, and due to
kp256 requiring a few transformations. It does the best it can though.
Does move the per-curve random_nonce to a provided one, which is allowed
as of https://github.com/cfrg/draft-irtf-cfrg-frost/pull/231.
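The general shape (not the exact construction; hash_to_scalar stands in for
the ciphersuite's hash-to-field):

```rust
use rand_core::{CryptoRng, RngCore};
use zeroize::Zeroize;

// Sample a wide seed, map it into the scalar field, then wipe the seed. The
// digest's internal state still can't be zeroized, hence "does the best it
// can".
fn random_nonce<F, R: RngCore + CryptoRng>(
  rng: &mut R,
  hash_to_scalar: impl Fn(&[u8]) -> F,
) -> F {
  let mut seed = [0u8; 64];
  rng.fill_bytes(&mut seed);
  let nonce = hash_to_scalar(&seed);
  seed.zeroize();
  nonce
}
```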
* Use Zeroize on FROST keygen/signing
* Zeroize constant time multiexp.
* Correct when FROST keygen zeroizes
* Move the FROST keys Arc into FrostKeys
Reduces the number of instances in memory.
* Manually implement Debug for FrostCore to not leak the secret share
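That is, along these lines (fields illustrative, not the actual FrostCore
definition):

```rust
use core::fmt;

struct CoreSketch {
  group_key: [u8; 32],
  secret_share: [u8; 32],
}

impl fmt::Debug for CoreSketch {
  fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
    // secret_share is deliberately omitted so it can't end up in logs.
    f.debug_struct("FrostCore")
      .field("group_key", &self.group_key)
      .finish_non_exhaustive()
  }
}
```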
* Misc bug fixes
* clippy + multiexp test bug fixes
* Correct FROST key gen share summation
It leaked our own share for ourselves.
* Fix cross-group DLEq tests
* Use a struct in an enum for Bulletproofs
* BP verification working for just one proof
* Add some more assert tests
* Clean BP verification
* Implement batch verification
* Add a debug assertion w_cache isn't 0
It's initially set to 0 and if not updated, this would be broken.
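The weighting itself looks roughly like this (a sketch, not the verifier's
actual code):

```rust
use ff::Field;
use rand_core::{CryptoRng, RngCore};

// Each queued statement gets a random non-zero weight so one bad proof can't
// cancel against another. A weight left at zero would silently drop that
// statement from the check, hence the debug assertion on w_cache.
fn sample_weight<F: Field, R: RngCore + CryptoRng>(rng: &mut R) -> F {
  let mut weight = F::zero();
  while bool::from(weight.is_zero()) {
    weight = F::random(&mut *rng);
  }
  weight
}
```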
* Correct Monero workflow yaml
* Again try to correct Monero workflow yaml
* Again
* Finally
* Re-apply weights as required by Bulletproofs
Removing these was insecure and my fault.
Co-authored-by: DangerousFreedom <dangfreed@tutanota.com>
* Initial attempt at Bulletproofs
I don't know why this doesn't work. The generators and hash_cache line
up without issue. AFAICT, the inner product proof is valid as well, as
are all included formulas.
* Add yinvpow asserts
* Clean code
* Correct bad imports
* Fix the definition of TWO_N
Bulletproofs work now :D
* Tidy up a bit
* fmt + clippy
* Compile a variety of XMR dependencies with optimizations, even under dev
The Rust bulletproof implementation is 8% slower than C right now, under
release. This is acceptable, even if suboptimal. Under debug, they take
a quarter of a second to two seconds though, depending on the amount of
outputs, which justifies this move.
* Remove unnecessary deref in BPs
Currently intended to be done with:
cargo clippy --features "recommended merlin batch serialize experimental
ed25519 ristretto p256 secp256k1 multisig" -- -A clippy::type_complexity
-A dead_code
The two-generator limit wasn't required nor beneficial. This does
theoretically optimize FROST, yet not for any current constructions. A
follow up proof which would optimize current constructions has been
noted in #38.
Adds explicit no_std support to the core DLEq proof.
Closes #34.
Removes from_canonical_bytes, which is offered by from_repr, and
from_bytes_mod_order, which frequently leads to security issues.
Removes the pointless Compressed type.
Adds From u8/u16/u32 as they're pleasant.
While all of Serai can be argued as experimental, the DLEq proof is
especially so, as it's lacking any formal proofs over its theory.
Also adds doc(hidden) to the generic DLEqProof, now prefixed with __.
This enabled getting the proof sizes, which are:
- ConciseLinear had a proof size of 44607 bytes
- CompromiseLinear had a proof size of 48765 bytes
- ClassicLinear had a proof size of 56829 bytes
- EfficientLinear had a proof size of 65145 bytes
Formatted results from my laptop:
EfficientLinear had an average prove time of 188ms
EfficientLinear had an average verify time of 126ms
CompromiseLinear had an average prove time of 176ms
CompromiseLinear had an average verify time of 141ms
ConciseLinear had an average prove time of 191ms
ConciseLinear had an average verify time of 160ms
ClassicLinear had an average prove time of 214ms
ClassicLinear had an average verify time of 159ms
There is a decent error margin here. Concise is a drop-in replacement
for Classic, in practice *not* theory. Efficient is optimal for
performance, yet largest. Compromise is a middleground.
The batch verified one offers ~23% faster verification. While this
massively refactors for modularity, I'm still not happy with the DLEq
proofs at the top level, nor am I happy with the AOS signatures. I'll
work on cleaning them up more later.
Reduces proof size by 21.5% without notable computational complexity
changes. I wouldn't be surprised if it has minor ones, yet I can't
comment in which way they go without further review.
Bit now verifies it can successfully complete the ring under debug,
slightly increasing debug times.
A few percent faster. Enables accumulating the current bit's point
representation, whereas the blinding keys can't be accumulated. Also
theoretically enables pre-computation of the bit points, removing
hundreds of additions from the proof. When tested, this was less
performant, possibly due to cache/heap allocation.
Instead of having if statements for the bits, it now has constant time
ops. While there are still if statements guiding the proof itself, they
aren't dependent on the data within.
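In other words, the bit-dependent logic moved from branches to selects,
roughly as follows (a generic field is used purely for illustration):

```rust
use ff::Field;
use subtle::Choice;

// Both candidates are always computed; the secret bit only feeds a
// constant-time select, never an `if`.
fn commit_to_bit<F: Field>(bit: Choice, blinding: F, generator_term: F) -> F {
  blinding + F::conditional_select(&F::zero(), &generator_term, bit)
}
```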
Relies on the ff/group API, instead of the custom Curve type.
Also removes GENERATOR_TABLE, only used by dalek, as we should provide
our own API for that over ff/group instead. This slows down the FROST
tests, under debug, by about 0.2-0.3s. Ed25519 and Ristretto together
take ~2.15 seconds now.
This does reduce the strength of the challenges to that of the weaker
field, yet that doesn't have any impact on whether or not this is ZK due
to the key being shared across fields.
Saves ~8kb.
Closes https://github.com/serai-dex/serai/issues/17 by using the
PrimeFieldBits API to do so.
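That is, the scalar's bits now come straight from the trait, roughly:

```rust
use ff::PrimeFieldBits;

// Little-endian bit decomposition via the PrimeFieldBits API, which the
// multiexp code can then chunk into windows.
fn le_bits<F: PrimeFieldBits>(scalar: &F) -> Vec<bool> {
  scalar.to_le_bits().iter().map(|bit| *bit).collect()
}
```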
Should greatly speed up small batches, along with batches in the
hundreds. Saves almost a full second on the cross-group DLEq proof.
While Serai only needs the simple DLEq which was already present under
monero, this migrates the implementation of the cross-group DLEq I
maintain into Serai. This was to have full access to the ecosystem of
libraries built under Serai while also ensuring support for it.
The cross_group curve, which is extremely experimental, is feature
flagged off. So is the built-in serialization functionality, as this
should be possible to make no_std once const generics are fully
featured, yet the implemented serialization adds the additional barrier
of std::io.
Increases usage of standardization while expanding dalek_ff_group.
Closes https://github.com/serai-dex/serai/issues/26 by moving
dfg::EdwardsPoint to only be for the prime subgroup.
Collisions were possible depending on static label substrings. Now,
labels are prefixed by their length to prevent this from being possible.
All variables are also flagged by their type, preventing other potential
conflicts.
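The encoding now looks something like this (the digest choice and exact
widths are illustrative):

```rust
use sha2::{Digest, Sha256};

// Every label and value is written with its length before its bytes, so
// "ab" + "c" can no longer collide with "a" + "bc".
struct TranscriptSketch(Sha256);

impl TranscriptSketch {
  fn append(&mut self, label: &'static [u8], value: &[u8]) {
    self.0.update(u64::try_from(label.len()).unwrap().to_le_bytes());
    self.0.update(label);
    self.0.update(u64::try_from(value.len()).unwrap().to_le_bytes());
    self.0.update(value);
  }
}
```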
Modifies FROST behavior so group_key has the offset applied regardless
of if view was called. The unaltered secret_share and
verification_shares (as they have differing values depending on the
signing set) are no longer publicly accessible.
Doesn't fully utilize ec's hash2curve module as k256 Scalar doesn't have
FromOkm for some reason. The previously present bigint reduction is
preserved.
Updates ff/group to 0.12.
Premised on https://github.com/cfrg/draft-irtf-cfrg-frost/pull/205 being
merged, as while this Ed25519 is vector compliant, it's technically not
spec compliant due to that conflict.
Given the lack of vectors for k256, it's currently a match of the p256
spec (with a distinct context string), yet p256 is still always used
when testing.
While it was fine as-is, as it only had one variable length property,
this is a bit more robust. Also binds the Curve ID, which should be
distinct even for just different basepoints, and therefore adds two
variable length properties (justifying the transcript).
No functional changes have been made to signing, with solely slight API
changes being made.
Technically not actually FROST v5 compatible, due to differing on zero
checks and randomness, yet the vectors do confirm the core algorithm.
For any valid FROST implementation, this will be interoperable if they
can successfully communicate. For any devious FROST implementation, this
will be fingerprintable, yet should still be valid.
Relevant to https://github.com/serai-dex/serai/issues/9 as any curve can
now specify vectors for itself and be tested against them.
Moves the FROST testing curve from k256 to p256. Does not expose p256
despite being compliant. It's not at a point I'm happy with, notably
regarding hash to curve, and I'm not sure I care to support p256. If it
has value to the larger FROST ecosystem...
It was never used as we derive entropy via the other fields in the
transcript, and explicitly add fields directly as needed for entropy.
Also drops an unused crate and corrects a bug in FROST's Schnorr
implementation which used the Group's generator, instead of the Curve's.
Also updates the Monero crate's description.
Also updates Bulletproofs from C to not be length prefixed, yet rather
have Rust calculate their length.
Corrects an error in key_gen where self was blamed, instead of the
faulty participant.
Adds helper functions to verify and, on failure, blame, which move an
unwrap from callers into multiexp where it's guaranteed to be safe and
easily verified to be proper.
Closes https://github.com/serai-dex/serai/issues/10.
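The blame helper is effectively the following, with hypothetical closures
standing in for multiexp's actual API:

```rust
// Try the cheap batch verification first; only on failure walk the statements
// to name the faulty participant.
fn verify_with_blame<S>(
  statements: &[(u16, S)],
  batch_verify: impl Fn(&[(u16, S)]) -> bool,
  verify_single: impl Fn(&S) -> bool,
) -> Result<(), u16> {
  if batch_verify(statements) {
    return Ok(());
  }
  for (participant, statement) in statements {
    if !verify_single(statement) {
      return Err(*participant);
    }
  }
  // The batch failed yet every statement passed individually; shouldn't occur.
  unreachable!("batch verification failed without a faulty statement");
}
```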
Saves ~8% during FROST key gen, even with dropping a vartime for a
constant time (as needed to be secure), as the new batch verifier is
used where batch verification previously wasn't. The new multiexp API
itself also offered a very slight performance boost, which may solely be
a measurement error.
Handles most of https://github.com/serai-dex/serai/issues/10. The blame
function isn't binary searched nor randomly sorted yet.
Honestly, the borrowed keys are frustrating, and this probably reduces
performance while no longer offering an order when iterating. That said,
they enable full u16 indexing and should mildly improve the API.
Cleans the Proof of Knowledge handling present in key gen.
While all the transcript/extension code works as expected, meaning it
doesn't cause any conflicts, n was still capped at u64::MAX at creation
when it needs to be u16. Furthermore, participant indexes and
scalars/points were little endian instead of big endian/curve dependent.