* Clear the triggered `upon` conditions upon a new round, not a new block
* Cache the proposal for a round
* Rebase onto develop, which reverted this PR, and re-apply this PR
* Record participation when it occurs instead of constantly recalculating it
* Cache message instances
* Add missing txn commit
Identified by @akildemir.
* Correct clippy lint identified upon rebase
* Fix tendermint chain sync (#581)
* fix p2p request-response (ReqRes) protocol
* stabilize tributary chain sync
* fix pr comments
---------
Co-authored-by: akildemir <34187742+akildemir@users.noreply.github.com>
* Rewrite tendermint's message handling loop to much more clearly match the paper
It no longer checks only the relevant branches upon each message, instead
checking all branches upon any state change. This is slower, yet easier to
review and likely avoids one or two rare edge cases.
When reviewing, please see page 5 of https://arxiv.org/pdf/1807.04938.pdf.
Lines from the specified algorithm can be found in the code by searching for
"// L".
* Sane rebroadcasting of consensus messages
Instead of broadcasting the last n messages on the Tributary side of things, we
now have the machine rebroadcast the message tape for the current block.
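A rough sketch of that rebroadcast, with assumed names rather than the actual
Tributary/machine API:

```rust
// Illustrative only: replay every consensus message already handled for the
// block currently being built, rather than having the Tributary re-gossip its
// last n messages.
struct MessageTape<M> {
  current_block: Vec<M>,
}

impl<M> MessageTape<M> {
  fn rebroadcast(&self, mut broadcast: impl FnMut(&M)) {
    for msg in &self.current_block {
      broadcast(msg);
    }
  }
}
```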
* Only rebroadcast messages which didn't error in some way
* Only rebroadcast our own messages for tendermint
Instead of saving, for every message, whether or not we sent it, we track the
latest block/round participated in. These two keys cover all prior
blocks/rounds. We then use three keys for the latest round's
proposal/prevote/precommit, enabling tracking of the current state as necessary
to prevent equivocations with just 5 keys.
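A rough sketch of the scheme, with a struct standing in for the DB and
illustrative names rather than the actual schema:

```rust
// Two fields cover all prior blocks/rounds; three cover the latest round.
#[derive(Default)]
struct LatestMessages {
  latest_block: u64,
  latest_round: u32,
  proposal: Option<Vec<u8>>,
  prevote: Option<Vec<u8>>,
  precommit: Option<Vec<u8>>,
}

enum Slot {
  Proposal,
  Prevote,
  Precommit,
}

impl LatestMessages {
  // Returns the message to broadcast: the one already signed for this
  // (block, round, slot) if present, else the new one, which is recorded so
  // we never sign a conflicting message (no equivocation).
  fn sign_once(&mut self, block: u64, round: u32, slot: Slot, msg: Vec<u8>) -> Vec<u8> {
    // Advancing to a new block/round clears the three per-round slots
    if (block, round) != (self.latest_block, self.latest_round) {
      *self = LatestMessages { latest_block: block, latest_round: round, ..Default::default() };
    }
    let slot = match slot {
      Slot::Proposal => &mut self.proposal,
      Slot::Prevote => &mut self.prevote,
      Slot::Precommit => &mut self.precommit,
    };
    slot.get_or_insert(msg).clone()
  }
}
```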
The storage of the latest three messages also enables proper rebroadcasting of
the current round (not implemented in this commit).
Online validators should inherently have them. Offline validators will receive
them from the sync protocol.
This does somewhat eliminate the class of nodes who would follow the blockchain
(without validating it), yet that's fine for the performance benefit.
Part of https://github.com/serai-dex/serai/issues/345.
The lack of full DB persistence does mean enough nodes rebooting at the same
time may cause a halt. This will prevent slashes.
* complete various todos
* fix pr comments
* Document bounds on unique hashes in TransactionKind
---------
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
With a DKG removal comes a reduction in the number of participants, which was
ignored by re-attempts.
Now, we determine n/i based on the parties removed, and deterministically
obtain the context of who was removed.
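As a hedged sketch of that adjustment (a hypothetical helper, not the actual
coordinator code):

```rust
// Given the original n, our original 1-indexed i, and the (original) indices
// of removed parties, derive the post-removal n and i. Removed parties have
// no index in the reduced set.
fn adjusted_n_i(original_n: u16, our_i: u16, removed: &[u16]) -> Option<(u16, u16)> {
  if removed.contains(&our_i) {
    return None;
  }
  let n = original_n - u16::try_from(removed.len()).unwrap();
  // Our index shifts down once per removed party ordered before us
  let shift = u16::try_from(removed.iter().filter(|removed_i| **removed_i < our_i).count()).unwrap();
  Some((n, our_i - shift))
}
```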
This mirrors how Provided TXs handle topics.
Now, instead of managing a global nonce stream, we can use items such as plan
IDs as topics.
This massively benefits re-attempts, as otherwise we'd need a NOP TX to clear
unused nonces.
Closes https://github.com/serai-dex/serai/issues/342.
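A minimal sketch of topic-scoped nonces, with a HashMap standing in for the DB
and assumed types:

```rust
use std::collections::HashMap;

// Each topic (e.g. a plan ID) has its own nonce stream, so abandoning a
// re-attempt under one topic leaves no hole in a global stream which would
// otherwise require a NOP TX to fill.
#[derive(Default)]
struct TopicNonces(HashMap<[u8; 32], u32>);

impl TopicNonces {
  // Allocates the next nonce for the given topic
  fn next(&mut self, topic: [u8; 32]) -> u32 {
    let nonce = self.0.entry(topic).or_insert(0);
    let allocated = *nonce;
    *nonce += 1;
    allocated
  }
}
```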
Under ideal network conditions, this is fine. While I won't claim ideal network
conditions will occur IRL, b0fcdd3367 has the
Tributary rebroadcast messages and should brute-force its way into a
functioning system.
* add reasons to slash evidence
* fix failing CI
* Remove unnecessary clones
.encode() takes &self
* InvalidVr to InvalidValidRound
* Unrelated to this PR: Clarify reasoning/potentials behind dropping evidence
* Clarify prevotes in SlashEvidence test
* Replace use of read_to_end
* Restore decode_signed_message
---------
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
* Update the coordinator to give key shares based on weight, not based on existence
Participants are now identified by their starting index. While this compiles,
the following is unimplemented:
1) A conversion for DKG `i` values. It assumes the threshold `i` values used
will be identical for the MuSig signature used to confirm the DKG.
2) Expansion from compressed values to full values before forwarding to the
processor.
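An illustrative sketch of the weight-based indexing, with hypothetical helpers
(the actual conversion/expansion is what's noted as unimplemented above):

```rust
// `weights[v]` is assumed to be the number of key shares validator v holds.
// Each validator is identified by the starting index of its shares, and that
// compressed identity expands to one `i` value per share.
fn starting_indices(weights: &[u16]) -> Vec<u16> {
  let mut start = 1; // DKG `i` values are 1-indexed
  let mut starts = Vec::with_capacity(weights.len());
  for weight in weights {
    starts.push(start);
    start += *weight;
  }
  starts
}

// Expand a validator's compressed identity into its full range of `i` values
fn expand(start: u16, weight: u16) -> std::ops::Range<u16> {
  start .. (start + weight)
}
```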
* Add a fn to the DkgConfirmer to convert `i` values as needed
Also removes TODOs regarding Serai ensuring validator key uniqueness +
validity. The current infra achieves both.
* Have the Tributary DB track participation by shares, not by count
* Prevent a node from obtaining 34% of the maximum number of key shares
This is actually mainly intended to set a bound on message sizes in the
coordinator. Message sizes are amplified by the number of key shares held, so
setting an upper bound on that number lets it determine constants. While that
upper bound could be 150, that'd be unreasonable and would increase the
potential for DoS attacks.
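A sketch of how such a cap could be derived; the 150 figure comes from the text
above, while the exact fraction used here is an assumption:

```rust
// If a set may have at most 150 key shares total, capping any one validator
// below 34% of that (here, at a third) bounds per-validator message sizes at
// a third of the worst case.
const MAX_KEY_SHARES_PER_SET: u16 = 150;

fn capped_key_shares(allocated: u16) -> u16 {
  allocated.min(MAX_KEY_SHARES_PER_SET / 3)
}
```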
* Correct the mechanism to detect if sufficient accumulation has occurred
It used to check if the latest accumulation hit the required threshold. Now,
accumulations may jump past the required threshold. The required mechanism is
to check the threshold wasn't previously met and is now met.
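A minimal sketch of the corrected check, with illustrative names:

```rust
// Accumulation may jump past the threshold in a single step (a validator may
// contribute multiple key shares at once), so detect the crossing instead of
// checking for exact arrival at the threshold.
fn threshold_newly_met(prior: u64, added: u64, threshold: u64) -> bool {
  (prior < threshold) && ((prior + added) >= threshold)
}
```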
* Finish updating the coordinator to handle a multiple key share per validator environment
* Adjust strategy re: preventing nonce reuse in DKG Confirmer
* Add TODOs regarding dropped transactions, add possible TODO fix
* Update tests/coordinator
This doesn't add new multi-key-share tests, it solely updates the existing
single key-share tests to compile and run, with the necessary fixes to the
coordinator.
* Update processor key_gen to handle generating multiple key shares at once
* Update SubstrateSigner
* Update signer, clippy
* Update processor tests
* Update processor docker tests
If a crate has std set, it should enable std for all dependencies in order to
let them properly select which algorithms to use. Some crates fall back to
slower/worse algorithms on no-std.
Also more aggressively sets default-features = false, leading to a *10%*
reduction in the number of crates the coordinator builds.
* fix typos
* remove tributary sleeping
* handle not locally provided txs
* use topic number instead of waiting list
* Clean-up, fixes
1) Uses a single TXN in provided
2) Doesn't continue on non-local provided inside verify_block, skipping further
execution of checks
3) Upon local provision of already on-chain TX, compares
---------
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>