- makes it possible to perform rescan_spent / key image sync with an untrusted daemon. Checking spent status involves RPC calls which require a trusted daemon, as they leak information. The new call performs a soft reset while preserving key images, so the sequence refresh, key image sync / import, rescan_bc keep_ki correctly performs spent checking without the need for a trusted daemon (see the example after this list).
- useful for detecting spent outputs with an untrusted daemon on watch_only / multisig / hw-cold wallets after an expensive key image sync.
- cli: rescan_bc keep_ki
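For example, with a view-only wallet against an untrusted daemon, the sequence could look like this (illustrative; the key image file is created beforehand with export_key_images on the cold/signing wallet, and the file name is arbitrary):
refresh
import_key_images /tmp/key_images
rescan_bc keep_ki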
It's better to just ignore them; the user does not really need
to know they're there. If the mask is wrong, they'll fail to be
used, and sweeping will fail as it tries to use them.
Reported by Josh Davis.
- return the right output data when offset is not zero
- do not consider import failed if result height is zero
(it can be 0 if unknown)
- select the right tx pubkey when using subaddresses (it's faster,
and we might select the wrong one if we got an output using one
of the additional tx keys)
- account for skipped outputs for spent/unspent balance info
"spent" is arguably wrong, since it will count spent change
multiple times as it goes through receive/spend cycles.
RPC connections now have optional transparent SSL.
An optional private key and certificate file can be passed,
using the --{rpc,daemon}-ssl-private-key and
--{rpc,daemon}-ssl-certificate options. These take as
argument a path to a PEM format private key and
certificate, respectively.
If not given, a temporary self signed certificate will be used.
SSL can be enabled or disabled using --{rpc,daemon}-ssl, which
accepts autodetect (default), disabled or enabled.
Access can be restricted to particular certificates using the
--rpc-ssl-allowed-certificates option, which takes a list of
paths to PEM encoded certificates. This allows a wallet to
connect only to the daemon it thinks it is connected to,
by forcing SSL and listing the paths to the known good
certificates.
To generate long term certificates:
openssl genrsa -out /tmp/KEY 4096
openssl req -new -key /tmp/KEY -out /tmp/REQ
openssl x509 -req -days 999999 -sha256 -in /tmp/REQ -signkey /tmp/KEY -out /tmp/CERT
/tmp/KEY is the private key, and /tmp/CERT is the certificate,
both in PEM format. /tmp/REQ can be removed. Adjust the last
command to set the expiration date, etc., as needed. Long term
certificates don't make a whole lot of sense for Monero anyway,
since most servers will run with one time temporary self signed
certificates.
SSL support is transparent, so all communication is done on the
existing ports, with SSL autodetection. This means you can start
using an SSL daemon now, but you should not enforce SSL yet or
nothing will talk to you.
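For example (illustrative command lines; the wallet-side --daemon-ssl-allowed-certificates name is an assumption following the --{rpc,daemon}-ssl-* pattern above):
monerod --rpc-ssl enabled --rpc-ssl-private-key /tmp/KEY --rpc-ssl-certificate /tmp/CERT
monero-wallet-cli --daemon-address localhost:18081 --daemon-ssl enabled --daemon-ssl-allowed-certificates /tmp/CERT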
1f5680c8 simplewallet: add help for ask-password options (moneromooo-monero)
c7c74caf simplewallet: mark confirm-missing-payment-id as obsolete (moneromooo-monero)
0de14396 tests: add a CNv4 JIT test (moneromooo-monero)
24d281c3 crypto: plug CNv4 JIT into cn_slow_hash (moneromooo-monero)
78ab59ea crypto: clear cache after generating random program (moneromooo-monero)
b9a61884 performance_tests: add tests for new Cryptonight variants (moneromooo-monero)
fff23bf7 CNv4 JIT compiler for x86-64 and tests (SChernykh)
3dde67d8 blockchain: add v10 fork heights (moneromooo-monero)
2dbc487e Add support for V10 protocol with BulletProofV2 and short amount. (cslashm)
63cc02c0 Fix dummy decryption in debug mode (cslashm)
f0e55ceb fix log namespace (cslashm)
460da140 New scheme key destination control (cslashm)
Minimalistic JIT code generator for the random math sequence in CryptonightR.
Usage (a sketch follows below):
- Allocate writable and executable memory
- Call v4_generate_JIT_code with "buf" pointing to the memory allocated in the previous step
- Call the generated code instead of "v4_random_math(code, r)", omitting the "code" parameter
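A minimal sketch of those steps (the exact v4_generate_JIT_code signature, the register type and the RWX allocation strategy are assumptions here, not copied from the tree):

#include <sys/mman.h>
#include <cstddef>
#include <cstdint>

struct V4_Instruction;  // the random math program, generated elsewhere

// Assumed: the generated function takes only the register file
typedef void (*v4_random_math_JIT_func)(uint32_t* r);

// Assumed signature for the generator described above
extern "C" int v4_generate_JIT_code(const struct V4_Instruction* code,
                                    v4_random_math_JIT_func buf, size_t buf_size);

int run_random_math_jitted(const struct V4_Instruction* code, uint32_t* r)
{
    const size_t buf_size = 4096;

    // 1) Allocate writable and executable memory (RWX; hardened systems may forbid this)
    void* buf = mmap(nullptr, buf_size, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return -1;

    // 2) Emit machine code for this random math program into buf
    if (v4_generate_JIT_code(code, reinterpret_cast<v4_random_math_JIT_func>(buf), buf_size) != 0) {
        munmap(buf, buf_size);
        return -1;
    }

    // 3) Call the generated code instead of v4_random_math(code, r); the program
    //    ("code") is baked into the emitted instructions, so only r is passed
    reinterpret_cast<v4_random_math_JIT_func>(buf)(r);

    munmap(buf, buf_size);
    return 0;
}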
The 10 minute window will never trigger for 0 blocks, as seeing
no blocks for 10 minutes is still fairly likely to happen even
without the actual hash rate changing much, so we add a 20 minute
window, where it will trigger (for 0 blocks), and a one hour window.
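For scale (assuming Poisson block arrivals at the 2 minute target; these numbers are illustrative, not from the original notes): 10 minutes is about 5 expected blocks, so seeing 0 blocks has probability e^-5 ≈ 0.7% per window, which a continuously running node will observe every day or two with no real hashrate change; 20 minutes (≈10 expected blocks) drops that to e^-10 ≈ 0.005%, and one hour (≈30 expected blocks) to e^-30 ≈ 1e-13, so triggering on those windows is meaningful.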
This curbs runaway growth while still allowing substantial
spikes in block weight
Original specification from ArticMine:
here is the scaling proposal
Define: LongTermBlockWeight
Before fork:
LongTermBlockWeight = BlockWeight
At or after fork:
LongTermBlockWeight = min(BlockWeight, 1.4*LongTermEffectiveMedianBlockWeight)
Note: To avoid possible consensus issues over rounding, the LongTermBlockWeight for a given block should be calculated to the nearest byte and stored as an integer in the block itself. The stored LongTermBlockWeight is then used for future calculations of the LongTermEffectiveMedianBlockWeight and not recalculated each time.
Define: LongTermEffectiveMedianBlockWeight
LongTermEffectiveMedianBlockWeight = max(300000, MedianOverPrevious100000Blocks(LongTermBlockWeight))
Change Definition of EffectiveMedianBlockWeight
From (current definition)
EffectiveMedianBlockWeight = max(300000, MedianOverPrevious100Blocks(BlockWeight))
To (proposed definition)
EffectiveMedianBlockWeight = min(max(300000, MedianOverPrevious100Blocks(BlockWeight)), 50*LongTermEffectiveMedianBlockWeight)
Notes:
1) There are no other changes to the existing penalty formula, median calculation, fees etc.
2) There is the requirement to store the LongTermBlockWeight of a block unencrypted in the block itself. This is to avoid possible consensus issues over rounding and also to prevent the calculations from becoming unwieldy as we move away from the fork.
3) When the EffectiveMedianBlockWeight cap is reached it is still possible to mine blocks up to 2x the EffectiveMedianBlockWeight by paying the corresponding penalty.
Note: the long term block weight is stored in the database, but not in the actual block itself,
since it requires recalculating anyway for verification.
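As a minimal sketch of the arithmetic above (illustrative only, not the consensus implementation; the median convention and the rounding of the 1.4 factor are assumptions a real implementation has to pin down):

#include <algorithm>
#include <cstdint>
#include <vector>

// Upper median of a window of weights (the exact median convention is assumed here)
static uint64_t median(std::vector<uint64_t> v)
{
    std::sort(v.begin(), v.end());
    return v[v.size() / 2];
}

// LongTermBlockWeight = min(BlockWeight, 1.4 * LongTermEffectiveMedianBlockWeight)
// (14 * x / 10 truncates; the proposal requires an agreed, byte-exact rounding)
uint64_t long_term_block_weight(uint64_t block_weight, uint64_t lt_effective_median)
{
    return std::min<uint64_t>(block_weight, 14 * lt_effective_median / 10);
}

// LongTermEffectiveMedianBlockWeight = max(300000, MedianOverPrevious100000Blocks(LongTermBlockWeight))
uint64_t long_term_effective_median(const std::vector<uint64_t>& prev_100000_long_term_weights)
{
    return std::max<uint64_t>(300000, median(prev_100000_long_term_weights));
}

// EffectiveMedianBlockWeight = min(max(300000, MedianOverPrevious100Blocks(BlockWeight)),
//                                  50 * LongTermEffectiveMedianBlockWeight)
uint64_t effective_median_block_weight(const std::vector<uint64_t>& prev_100_weights,
                                       uint64_t lt_effective_median)
{
    return std::min<uint64_t>(std::max<uint64_t>(300000, median(prev_100_weights)),
                              50 * lt_effective_median);
}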
When all our outgoing peer slots are filled, we cycle one peer at
a time looking for syncing peers until we have at least two such
peers. This brings two advantages:
- Peers without incoming connections will find more syncing peers
than before, thereby strengthening network decentralization
- Peers will have more resistance to isolation attacks, as they
are more likely to find a "good" peer than they were before
NetBSD emits:
warning: Warning: reference to the libc supplied alloca(3); this most likely will not work. Please use the compiler provided version of alloca(3), by supplying the appropriate compiler flags (e.g. not -std=c89).
and man 3 alloca says:
Normally, gcc(1) translates calls to alloca() with inlined code. This is not done when either the -ansi, -std=c89, -std=c99, or the
-std=c11 option is given and the header <alloca.h> is not included. Otherwise, (without an -ansi or -std=c* option) the glibc version of
<stdlib.h> includes <alloca.h> and that contains the lines:
#ifdef __GNUC__
#define alloca(size) __builtin_alloca (size)
#endif
It looks like alloca is a bad idea in modern C/C++, so we use
VLAs for C and std::vector for C++.
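For illustration, the shape of the change (a hypothetical call site, not one from the tree):

#include <algorithm>
#include <cstddef>
#include <vector>

void process(const unsigned char* data, std::size_t len)
{
    // Before: unsigned char* buf = (unsigned char*)alloca(len);  // triggers the NetBSD warning
    // After, in C++: a std::vector, heap backed and bounds safe
    std::vector<unsigned char> buf(len);
    std::copy(data, data + len, buf.data());  // use buf.data() wherever the alloca'd pointer was used
    // In C, the equivalent replacement is a variable-length array: unsigned char buf[len];
}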