* initial staking pallet
* add staking pallet to runtime
* support session rotation for serai
* optimizations & cleaning
* fix deny
* add serai network to initial networks
* a few tweaks & comments
* fix some pr comments
* Rewrite validator-sets with logarithmic algorithms
Uses the fact that the underlying DB is sorted to achieve sorting of potential
validators by stake (a sketch of the key-layout idea follows below).
Removes release of deallocated stake for now.
---------
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
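As a rough illustration of the sorted-DB trick (not the pallet's actual storage
layout; the key shape, label sizes, and 32-byte validator keys are assumptions):
encoding the stake big-endian into the key lets the DB's own lexicographic
ordering do the sorting, so no full in-memory sort over candidates is needed.

```rust
// A minimal sketch, assuming a lexicographically sorted KV store.
fn sorted_key(network: [u8; 4], stake: u64, validator: [u8; 32]) -> Vec<u8> {
  let mut key = Vec::with_capacity(4 + 8 + 32);
  key.extend(network);
  // Big-endian, bitwise-inverted stake: ascending key iteration yields descending stake
  key.extend((!stake).to_be_bytes());
  // The validator key disambiguates entries with equal stake
  key.extend(validator);
  key
}
```

Iterating a network's key range then yields candidates already ordered by stake.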
Renames Update to SignedBatch.
Checks Batch equality via a hash of the InInstructions. That prevents needing
to keep the Batch in node state or introspect the TX.
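A sketch of the equality check, under the assumption the InInstructions are
SCALE-encodable; the hash choice here is a placeholder, not the actual on-chain
encoding.

```rust
use blake2::{Blake2b512, Digest};
use parity_scale_codec::Encode;

// Hash the encoding of the InInstructions; two Batches for the same block are
// then considered equal iff their hashes match, so only the short hash needs to
// be kept around.
fn in_instructions_hash<I: Encode>(in_instructions: &[I]) -> Vec<u8> {
  let mut hasher = Blake2b512::new();
  for in_instruction in in_instructions {
    hasher.update(in_instruction.encode());
  }
  hasher.finalize().to_vec()
}
```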
Prior, we only supported a single Tributary per network, and spawned a task to
handle Processor messages per Tributary. Now, we handle Processor messages per
network, yet that handling function still only supported a single Tributary.
Now, when we handle a message, we load the Tributary which is relevant. Once we
know it, we ensure we have it (preventing race conditions), and then proceed.
We still need work to check whether we should have a Tributary at all, or if
we're not participating. We also need to check if a Tributary has been retired,
meaning we shouldn't handle any transactions related to it, and to clean up
retired Tributaries.
This is a possibility under the new deterministic nonce scheme.
While there is a concern we may never create a transaction for a given nonce,
blocking everything after it, we should always create transactions. We'll
always publish preprocesses, and while we'll only publish shares if everyone
else does, we only allocate for shares once everyone else does.
It was a piece of duplicated data used to achieve context-less
(de)serialization. This new Vec code is a bit trickier to first read, yet is
overall cleaner and removes a potential fault.
Saves 2 bytes from DkgShares messages.
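As a hedged sketch of the idea (the 32-byte share size and function shape are
assumptions, not the actual DkgShares format): the number of shares is already
known from context, so it's taken from there rather than re-embedded in the
message.

```rust
use std::io::{self, Read};

// The share count comes from the session's participant count, so no length
// prefix is read from (or written to) the message itself.
fn read_shares<R: Read>(reader: &mut R, participants: usize) -> io::Result<Vec<[u8; 32]>> {
  let mut shares = Vec::with_capacity(participants);
  for _ in 0 .. participants {
    let mut share = [0; 32];
    reader.read_exact(&mut share)?;
    shares.push(share);
  }
  Ok(shares)
}
```

Carrying the count inside the message as well only created a second source of
truth which could disagree with the context.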
See prior commit message for more info.
With the plan for the batch sign ID to be just 5 bytes (potentially 4), this
does incur a +5-byte cost compared to the ExternalBlock system *even in the
standard case*. The simplicity remains preferred at this time.
The initial TODO was simply to use one ExternalBlock per all batches in the
block. This would require publishing ExternalBlock after the last batch,
requiring knowing the last batch. While we could add such a pipeline, it'd
require:
1) Initial preprocesses using a distinct message from BatchPreprocess
2) An additional message sent after all BatchPreprocess are sent
Unfortunately, both would require tweaks to the SubstrateSigner which aren't
worth the complexity compared to the solution here, at least, not at this time.
While this means that, if a Tributary is signing a block whose total batch data
exceeds 25 kB, multiple transactions will be used where 'better' local data
pipelining could have optimized them out, that's an extreme edge case. Given
the temporal nature of each Tributary, it's also an acceptable edge.
Accordingly, this no longer achieves synchrony over external blocks. While
signed batches have synchrony, as they embed their block hash, batches being
signed don't have cryptographic synchrony on their contents. This means
validators who are eclipsed may produce invalid shares, as they sign a
different batch. Such synchrony will be introduced in a follow-up commit.
We only expect processor messages when we have the relevant Tributary. We
queued Tributary creation, yet then kicked off processor messages. We need to
wait until the Tributary is actually created to kick off processor messages.
They used &mut self to prevent execution at the same time. This uses a lock
over the channel to achieve the same security, without requiring a lock over
the entire tributary.
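A minimal sketch of the shape of that change, with hypothetical types:

```rust
use tokio::sync::{mpsc, Mutex};

// Handlers take &self and serialize on the channel's Mutex, instead of taking
// &mut self and exclusively borrowing the entire Tributary while they run.
struct MessageQueue {
  receiver: Mutex<mpsc::UnboundedReceiver<Vec<u8>>>,
}

impl MessageQueue {
  async fn next(&self) -> Option<Vec<u8>> {
    // Only one task drains messages at a time, yet other &self methods
    // (reads, block syncing) remain callable concurrently
    self.receiver.lock().await.recv().await
  }
}
```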
This fixes post-provided Provided transactions. sync_block waited for the TX to
be provided, yet it never would be, as sync_block held a mutable reference over
the entire Tributary, preventing any other read/write operations of any scope.
A timeout which was increased (bc2f23f72b) because this bug hadn't been
identified has thankfully been decreased back.
Also shims in basic support for Completed, which was the WIP before this bug
was identified.
The Heartbeat was meant to serve for this, yet no Heartbeats are fired when we
don't have active tributaries.
libp2p does offer an explicit KeepAlive protocol, yet it's not recommended in
production. While this likely has the same pitfalls as libp2p's KeepAlive
protocol, it's at least tailored to our timing.
* add slash tx
* ignore unsigned tx replays
* verify that provided evidence is valid
* fix clippy + fmt
* move application tx handling to another module
* partially handle the tendermint txs
* fix pr comments
* support unsigned app txs
* add slash target to the votes
* enforce provided, unsigned, signed tx ordering within a block
* bug fixes
* add unit test for tendermint txs
* bug fixes
* update tests for tendermint txs
* add tx ordering test
* tidy up tx ordering test
* cargo +nightly fmt
* Misc fixes from rebasing
* Finish resolving clippy
* Remove sha3 from tendermint-machine
* Resolve a DoS in SlashEvidence's read
Also moves Evidence from Vec<Message> to (Message, Option<Message>). That
should meet all requirements while being a bit safer.
* Make lazy_static a dev-depend for tributary
* Various small tweaks
One use of sort was inefficient, sorting unsigned || signed when unsigned was
already properly sorted. Given how the unsigned TXs were given a nonce of 0, an
unstable sort may swap an unsigned TX with a signed TX which has a nonce of 0
(leading to a faulty block).
The extra protection added here sorts signed alone, then concats (a sketch
follows this commit list).
* Fix Tributary tests I broke, start review on tendermint/tx.rs
* Finish reviewing everything outside tests and empty_signature
* Remove empty_signature
empty_signature led to corrupted local state histories. Unfortunately, the API
is only sane with a signature.
We now use the actual signature, which risks creating a signature over a
malicious message if we ever have an invariant producing malicious messages.
Prior, we only signed the message after the local machine confirmed it was
okay per the local view of consensus.
This is tolerated/preferred over a corrupt state history since production of
such messages is already an invariant. TODOs are added to make handling of this
theoretical invariant more robust.
* Remove async_sequential for tokio::test
There was no competition for resources forcing them to be run sequentially.
* Modify block order test to be statistically significant without multiple runs
* Clean tests
---------
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
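Regarding the sort fix above, a minimal sketch with simplified types: only the
signed transactions are sorted (by nonce), and the already-ordered unsigned
transactions are concatenated in front, so a nonce-0 signed TX can never be
shuffled in among the unsigned TXs.

```rust
// Simplified types; the real transactions carry more than a nonce and raw bytes.
struct SignedTx {
  nonce: u32,
  serialized: Vec<u8>,
}

fn order_block_txs(unsigned: Vec<Vec<u8>>, mut signed: Vec<SignedTx>) -> Vec<Vec<u8>> {
  // Sort the signed transactions alone; the unsigned ones keep their existing order
  signed.sort_by_key(|tx| tx.nonce);
  let mut txs = unsigned;
  txs.extend(signed.into_iter().map(|tx| tx.serialized));
  txs
}
```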
* restrict batch size to ~25kb
* add batch size check to node
* rate limit batches to 1 per serai block
* add support for multiple batches per block
* fix review comments
* Misc fixes
Doesn't yet update tests/processor until data flow is inspected.
* Move the block from SignId to ProcessorMessage::BatchPreprocesses
* Misc clean up
---------
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
By default, tokio-spawned worker panics will only kill the task, not the
program. Due to our extensive use of panicking on invariants, we should ensure
the program exits.
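One way to do this (whether the coordinator uses exactly this mechanism is an
assumption) is a panic hook which aborts the process:

```rust
use std::{panic, process};

// Any panic, including one inside a tokio-spawned task which would otherwise
// only kill that task, now takes the whole program down.
fn exit_on_panic() {
  let default_hook = panic::take_hook();
  panic::set_hook(Box::new(move |panic_info| {
    // Keep the usual panic output, then abort
    default_hook(panic_info);
    process::abort();
  }));
}
```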
It's largely unoptimized, and not yet exclusive to validators, yet has basic
sanity (using message content for ID instead of sender + index).
Fixes bugs as found. Notably, we used a time in milliseconds where the
Tributary expected seconds.
Also has Tributary::new jump to the presumed round number. This reduces slashes
when starting new chains (whose times will be before the current time) and was
the only way I was able to observe successful confirmations given current
surrounding infrastructure.
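As a rough sketch of the round jump (the timing model here, where round r lasts
base + r * step, is an assumption and not necessarily the Tributary's actual
schedule):

```rust
// Walk forward from the chain's start time until reaching a round which hasn't
// already ended, instead of starting at round 0 and timing out through each one.
fn presumed_round(start_ms: u64, now_ms: u64, base_ms: u64, step_ms: u64) -> u32 {
  let mut round: u32 = 0;
  let mut end = start_ms;
  loop {
    // Each round is assumed to last a bit longer than the prior one
    end += base_ms + (u64::from(round) * step_ms);
    if end > now_ms {
      return round;
    }
    round += 1;
  }
}
```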
The Processor's coins folder referred to the networks it could process, as did
its Coin trait. This, and other similar cases throughout the codebase, have now
been corrected.
Also corrects dated documentation on how a key pair is confirmed under the
validator-sets pallet.
This probably should be done with n long-lived tasks, one per Tributary. While
this may not be suitably performant long-term (potential DoS vector), this at
least resolves the halting concerns.
Reduces lock contention.
Additionally changes block_key to include the genesis. While not technically
needed, the lack of genesis introduced a side effect where any Tributary in the
database could return the block of any other Tributary. While that wasn't a
security issue, returning it suggested it was on-chain when it wasn't. This may
have been usable to create issues.
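A minimal sketch of the keying change (the label and layout are hypothetical):

```rust
// Scoping block keys by the Tributary's genesis means a lookup can only return
// a block from that Tributary, not one from another Tributary sharing the DB.
fn block_key(genesis: [u8; 32], block_hash: [u8; 32]) -> Vec<u8> {
  let mut key = Vec::with_capacity(b"tributary_block".len() + 32 + 32);
  key.extend(b"tributary_block");
  key.extend(genesis);
  key.extend(block_hash);
  key
}
```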
This defines the start of a very complex series of locks I'm really unhappy
with. At the same time, there's not immediately a better solution. This also
should work without issue.
add_active_tributary writes the spec to disk before it returns, so even if the
VecDeque it pushes to isn't popped, the tributary will still be loaded on boot.
Removes last_block as an argument from Tendermint. It now loads from the DB as
needed. While slightly less performant, it's easiest and should be fine.
Impls a LocalP2p for testing.
Moves rebroadcasting into Tendermint, since it's what knows if a message is
fully valid + original.
Removes the TributarySpec::validators() HashMap, as its non-determinism caused
different instances to have different round robin schedules (illustrated
below). It had already been moved to a Vec for this issue, so I'm unsure why
this remnant existed.
Also renames the GH no-std workflow from the prior commit.
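To illustrate why the deterministic Vec matters for the round robin (simplified;
the exact proposer-selection formula here is illustrative, not the actual one):

```rust
// Every node must iterate validators in the same order for proposer selection
// to agree; a HashMap's iteration order differs per instance, a Vec's does not.
// Assumes a non-empty validator list.
fn proposer(validators: &[[u8; 32]], block_number: u64, round: u32) -> [u8; 32] {
  let index = (block_number + u64::from(round)) % (validators.len() as u64);
  validators[index as usize]
}
```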