Despite being slower and only used for blinding values, it's still
extremely performant. 20 is far more standard and will avoid raised
eyebrows from reviewers.
* Apply Zeroize to nonces used in Bulletproofs
Also makes bit decomposition constant time for a given number of
outputs.
* Fix nonce reuse for single-signer CLSAG
* Attach Zeroize to most structures in Monero, and ZeroizeOnDrop to anything with private data (a sketch of the pattern follows below)
* Zeroize private keys and nonces
* Merge prepare_outputs and prepare_transactions
* Ensure CLSAG is constant time
* Pass by borrow where needed, bug fixes
The past few commits have been one in-progress chunk which I've broken
up to read as well as possible.
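To illustrate the Zeroize/ZeroizeOnDrop pattern used above, here's a
minimal sketch (the struct and its fields are hypothetical, and
zeroize's derive feature is assumed). Any type containing such a field
is covered on drop, since the field wipes itself:

    use zeroize::{Zeroize, ZeroizeOnDrop};

    // Hypothetical struct holding private data. Deriving Zeroize lets it be
    // explicitly wiped, while ZeroizeOnDrop wipes it automatically on drop.
    #[derive(Zeroize, ZeroizeOnDrop)]
    struct PrivateSpendData {
      key: [u8; 32],
      nonce: [u8; 32],
    }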
* Add Zeroize to FROST structs
These still need to zeroize internally, yet that's the next step. This
isn't quite as aggressive as Monero, partially due to the limitations of
HashMaps and partially due to less concern about metadata, yet it does
still delete a few smaller items of metadata (group key, context
string...).
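As a rough sketch of working around the HashMap limitation (the struct
and field names are hypothetical): HashMap itself doesn't implement
Zeroize, so each value is zeroized on drop, which still can't reach
copies abandoned by earlier reallocations:

    use std::collections::HashMap;
    use zeroize::Zeroize;

    // Hypothetical container of per-participant secret shares.
    struct SecretShares {
      shares: HashMap<u16, [u8; 32]>,
    }

    impl Drop for SecretShares {
      fn drop(&mut self) {
        // Zeroize each value in place. This doesn't cover memory the map may
        // have left behind when it previously reallocated.
        for share in self.shares.values_mut() {
          share.zeroize();
        }
      }
    }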
* Remove Zeroize from most Monero multisig structs
These structs largely didn't have private data, just fields with private
data, yet those fields implemented ZeroizeOnDrop, making them already
covered. While there are still traces of the transaction left in RAM,
fully purging those was never the intent.
* Use Zeroize within dleq
bitvec doesn't offer Zeroize, so a manual zeroing has been implemented.
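A sketch of that manual zeroing (the wrapper is hypothetical): every bit
is overwritten before the BitVec drops. Note a plain overwrite lacks the
volatile-write guarantees zeroize provides, so this is best effort:

    use bitvec::vec::BitVec;

    // Hypothetical wrapper around the proof's bit decomposition.
    struct Bits(BitVec);

    impl Drop for Bits {
      fn drop(&mut self) {
        // bitvec doesn't implement Zeroize, so clear every bit manually.
        self.0.fill(false);
      }
    }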
* Use Zeroize for random_nonce
It isn't perfect, due to the inability to zeroize the digest, and due to
kp256 requiring a few transformations. It does the best it can though.
Does move the per-curve random_nonce to a provided one, which is allowed
as of https://github.com/cfrg/draft-irtf-cfrg-frost/pull/231.
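A rough sketch of the shape of such a provided random_nonce, using
curve25519-dalek (with its digest and zeroize features) and sha2 purely
for illustration rather than the crate's actual signature: the random
seed lives in a Zeroizing buffer and is wiped, while the digest's
internal state can't be:

    use rand_core::{RngCore, CryptoRng};
    use zeroize::Zeroizing;
    use sha2::{Digest, Sha512};
    use curve25519_dalek::scalar::Scalar;

    // Hypothetical nonce derivation: hash fresh randomness alongside the
    // secret share. The seed is zeroized on drop; the Sha512 state is not.
    fn random_nonce<R: RngCore + CryptoRng>(
      secret: &Scalar,
      rng: &mut R,
    ) -> Zeroizing<Scalar> {
      let mut seed = Zeroizing::new([0u8; 32]);
      rng.fill_bytes(&mut *seed);
      Zeroizing::new(Scalar::from_hash(
        Sha512::new().chain_update(&*seed).chain_update(secret.to_bytes()),
      ))
    }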
* Use Zeroize on FROST keygen/signing
* Zeroize constant time multiexp.
* Correct when FROST keygen zeroizes
* Move the FROST keys Arc into FrostKeys
Reduces the number of instances in memory.
* Manually implement Debug for FrostCore to not leak the secret share
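For reference, a minimal sketch of that kind of Debug impl (the fields
shown are hypothetical), which simply omits the secret share so it can't
end up in logs:

    use core::fmt;

    struct FrostCore {
      threshold: u16,
      participant: u16,
      secret_share: [u8; 32],
    }

    impl fmt::Debug for FrostCore {
      fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // secret_share is deliberately left out of the output.
        f.debug_struct("FrostCore")
          .field("threshold", &self.threshold)
          .field("participant", &self.participant)
          .finish_non_exhaustive()
      }
    }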
* Misc bug fixes
* clippy + multiexp test bug fixes
* Correct FROST key gen share summation
It leaked our own share for ourselves.
* Fix cross-group DLEq tests
Currently intended to be done with:
  cargo clippy --features "recommended merlin batch serialize experimental
    ed25519 ristretto p256 secp256k1 multisig" -- -A clippy::type_complexity
    -A dead_code
The two-generator limit was neither required nor beneficial. This does
theoretically optimize FROST, yet not for any current constructions. A
follow up proof which would optimize current constructions has been
noted in #38.
Adds explicit no_std support to the core DLEq proof.
Closes #34.
While all of Serai can be argued as experimental, the DLEq proof is
especially so, as it lacks any formal proofs of its theory.
Also adds doc(hidden) to the generic DLEqProof, now prefixed with __.
This enabled getting the proof sizes, which are:
- ConciseLinear had a proof size of 44607 bytes
- CompromiseLinear had a proof size of 48765 bytes
- ClassicLinear had a proof size of 56829 bytes
- EfficientLinear had a proof size of 65145 bytes
Formatted results from my laptop:
EfficientLinear had an average prove time of 188ms
EfficientLinear had an average verify time of 126ms
CompromiseLinear had an average prove time of 176ms
CompromiseLinear had an average verify time of 141ms
ConciseLinear had an average prove time of 191ms
ConciseLinear had an average verify time of 160ms
ClassicLinear had an average prove time of 214ms
ClassicLinear had an average verify time of 159ms
There is a decent error margin here. Concise is a drop-in replacement
for Classic in practice, *not* in theory. Efficient is optimal for
performance, yet the largest. Compromise is a middle ground.
The batch verified one offers ~23% faster verification. While this
massively refactors for modularity, I'm still not happy with the DLEq
proofs at the top level, nor am I happy with the AOS signatures. I'll
work on cleaning them up more later.
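For context on where that speedup comes from, here's a generic sketch of
the random-linear-combination batching idea, using Ristretto purely for
illustration (this is not this crate's API, and it assumes
curve25519-dalek 4 with its rand_core feature): each equation gets a
random weight and everything collapses into a single multiscalar
multiplication, so the fixed costs are shared across the batch:

    use rand_core::OsRng;
    use curve25519_dalek::{
      constants::RISTRETTO_BASEPOINT_POINT,
      ristretto::RistrettoPoint,
      scalar::Scalar,
      traits::{Identity, VartimeMultiscalarMul},
    };

    // One Schnorr-style equation to check: s * G == R + c * A.
    struct Equation {
      s: Scalar,
      r: RistrettoPoint,
      c: Scalar,
      a: RistrettoPoint,
    }

    // Weight every equation by a random scalar and check the sum at once. A
    // single invalid equation fails the batch except with negligible odds.
    fn batch_verify(eqs: &[Equation]) -> bool {
      let mut g_scalar = Scalar::ZERO;
      let mut scalars = Vec::new();
      let mut points = Vec::new();
      for eq in eqs {
        let z = Scalar::random(&mut OsRng);
        g_scalar += z * eq.s;
        scalars.push(-z);
        points.push(eq.r);
        scalars.push(-(z * eq.c));
        points.push(eq.a);
      }
      scalars.push(g_scalar);
      points.push(RISTRETTO_BASEPOINT_POINT);
      RistrettoPoint::vartime_multiscalar_mul(&scalars, &points) ==
        RistrettoPoint::identity()
    }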
Reduces proof size by 21.5% without notable computational complexity
changes. I wouldn't be surprised if it has minor ones, yet I can't
comment on which way they go without further review.
Bit now verifies it can successfully complete the ring under debug,
slightly increasing debug times.
A few percent faster. Enables accumulating the current bit's point
representation, whereas the blinding keys can't be accumulated. Also
theoretically enables pre-computation of the bit points, removing
hundreds of additions from the proof. When tested, this was less
performant, possibly due to cache/heap allocation.
Instead of having if statements for the bits, it now has constant time
ops. While there are still if statements guiding the proof itself, they
aren't dependent on the data within.
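The replacement follows the standard constant-time selection pattern;
here's a generic sketch with the subtle crate (not the proof's actual
code), where both candidates are always computed and the secret bit
picks one without branching:

    use subtle::{Choice, ConditionallySelectable};

    // Hypothetical constant-time step: select between two candidates based
    // on a secret bit, with no data-dependent branch.
    fn select<T: ConditionallySelectable>(if_zero: &T, if_one: &T, bit: Choice) -> T {
      T::conditional_select(if_zero, if_one, bit)
    }

    fn main() {
      // A secret bit as a Choice; 1 selects the second candidate.
      let bit = Choice::from(1u8);
      // u64 implements ConditionallySelectable, as do the dalek curve points.
      assert_eq!(select(&10u64, &20u64, bit), 20);
    }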
This does reduce the strength of the challenges to that of the weaker
field, yet that doesn't have any impact on whether or not this is ZK due
to the key being shared across fields.
Saves ~8kb.
Closes https://github.com/serai-dex/serai/issues/17 by using the
PrimeFieldBits API.
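A small sketch of how the PrimeFieldBits API gets used for this (a
generic helper, not the proof's actual decomposition routine): the
scalar exposes its little-endian bits directly, so no manual repr
handling is needed:

    use ff::PrimeFieldBits;

    // Hypothetical helper: little-endian bit decomposition of a scalar,
    // padded or truncated to `capacity` bits.
    fn decompose<F: PrimeFieldBits>(scalar: &F, capacity: usize) -> Vec<bool> {
      let mut bits: Vec<bool> = scalar.to_le_bits().iter().map(|bit| *bit).collect();
      bits.resize(capacity, false);
      bits
    }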
Should greatly speed up small batches, along with batches in the
hundreds. Saves almost a full second on the cross-group DLEq proof.
While Serai only needs the simple DLEq which was already present under
monero, this migrates the implementation of the cross-group DLEq I
maintain into Serai. This was to have full access to the ecosystem of
libraries built under Serai while also ensuring support for it.
The cross_group proof, which is extremely experimental, is feature
flagged off. So is the built-in serialization functionality, as this
crate should be possible to make no_std once const generics are fully
featured, yet the implemented serialization adds the additional barrier
of std::io.
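As a rough illustration of why that serialization sits behind a feature
flag (a hypothetical function, not the crate's actual interface), the
write path depends on std::io and therefore can't be no_std:

    // Only compiled when the serialization feature is enabled, since
    // std::io::Write isn't available under no_std.
    #[cfg(feature = "serialize")]
    pub fn write_proof<W: std::io::Write>(proof: &[u8], writer: &mut W) -> std::io::Result<()> {
      writer.write_all(proof)
    }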