7c4d6f1 simplewallet: Use default log file name when executable's file path is unknown (warptangent)
b5b0f08 epee: Don't set log file name when process path name isn't found (warptangent)
Default to "simplewallet.log" in the current directory when the file
path isn't obtained from epee.
Previously in this situation, it defaulted to the file name ".log"
("" + ".log") in the current directory.
(Thanks to @sammy007 for reporting the bug.)
An even earlier version used "" + "/" + ".log" = "/.log", which resulted
in silently not logging in most cases, due to lack of permission.
Test:
PATH=$PATH:</path/to/simplewallet/folder> && simplewallet --wallet-file /dev/null
This results in epee not finding the executable's file path, so
simplewallet will now use a default log filename.
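A minimal sketch of the fallback (illustrative names, not the actual
code; it assumes epee reports an unknown path as an empty string):

    #include <string>

    // Fall back to a fixed name in the current directory when epee
    // couldn't determine the executable's file path.
    std::string default_log_file(const std::string& exe_path)
    {
      if (exe_path.empty())
        return "simplewallet.log"; // previously "" + ".log" = ".log"
      return exe_path + ".log";    // e.g. /usr/bin/simplewallet.log
    }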
The height function apparently used to return the index of
the last block, rather than the height of the chain. Judging
by the code, this no longer seems to be the case, so we remove
the now-wrong comment, as well as a couple of +/- 1 adjustments
which now cause the median calculation to differ from the
original blockchain_storage version.
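To spell out the off-by-one (illustrative only): with N blocks in the
chain, a height() that returns N needs no adjustment to bound the
median window, while a height() that returns the top index N - 1 does.

    // With 100 blocks (indices 0..99):
    //   height-as-count: height() == 100, top index == height() - 1
    //   height-as-index: height() == 99, callers need +/- 1 fixups
    uint64_t blocks = 100;
    uint64_t top_index = blocks - 1;  // 99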
This obsoletes the need for a lengthy blockchain rescan when
a transaction doesn't end up in the chain after being accepted
by the daemon, or when, for any other reason, the wallet's idea
of spent and unspent outputs gets out of sync with the blockchain's.
The original code removed a tx's key images from the blockchain
when that tx contained an input that was neither to-key nor gen.
Additionally, the remainder of the tx data was added to the
blockchain only after the double spend check passed.
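A minimal sketch of the safer ordering (hypothetical types and method
names): reject on any non to-key/gen input or double spend before
mutating the DB, so key images and tx data are added or skipped as a
unit.

    #include <vector>

    struct TxIn { bool to_key; bool gen; };
    struct Tx { std::vector<TxIn> inputs; };

    template <typename DB>
    bool add_transaction(DB& db, const Tx& tx)
    {
      for (const auto& in : tx.inputs)
        if (!in.to_key && !in.gen)
          return false;               // reject without touching the DB
      if (db.has_spent_key_image(tx)) // double spend check first
        return false;
      db.add_key_images(tx);          // only now mutate
      db.add_tx_data(tx);
      return true;
    }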
2634307 daemon: omit extra set of <> in error message (moneromooo-monero)
0822933 daemon: print a decoded tx in print_tx (moneromooo-monero)
1d678b1 daemon: fix print_tx not find transactions (moneromooo-monero)
It was only used by the older blockchain_storage.
We also move the code to the calling blockchain level, to avoid
replicating the code in every DB implementation. This also makes
the get_random_out method obsolete, and we delete it.
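A minimal sketch of where the shared logic can now live (generic
accessor name assumed; the real code would use the project's RNG
rather than <random>):

    #include <cstdint>
    #include <random>

    // Pick a random global output index for a given amount using only
    // the generic DB interface, so no per-backend get_random_out is
    // needed. Assumes n > 0.
    template <typename DB>
    uint64_t pick_random_output(DB& db, uint64_t amount)
    {
      static std::mt19937_64 rng{std::random_device{}()};
      const uint64_t n = db.get_num_outputs(amount);
      return std::uniform_int_distribution<uint64_t>(0, n - 1)(rng);
    }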
Pros:
- smaller on the blockchain
- shorter integrated addresses
Cons:
- less sparseness
- less ability to embed actual information
The boolean argument to encrypt payment ids is now gone from the
RPC calls, since the decision is made based on the length of the
payment id passed.
A payment ID may be encrypted using the tx secret key and the
receiver's public view key. The receiver can decrypt it with
the tx public key and the receiver's secret view key.
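A minimal sketch of the symmetry (the exact key derivation and hash
domains in the code differ): Diffie-Hellman gives both sides the same
shared secret, r*A on the sender side and a*R on the receiver side,
and the 8-byte payment ID is XORed with a pad derived from it, so
applying the same operation twice decrypts.

    #include <cstddef>
    #include <cstdint>

    // pad is derived by hashing the shared secret in the real scheme;
    // XOR is its own inverse, so the sender encrypts and the receiver
    // decrypts with the same call.
    void xor_payment_id(uint8_t pid[8], const uint8_t pad[8])
    {
      for (size_t i = 0; i < 8; ++i)
        pid[i] ^= pad[i];
    }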
Using integrated addresses now causes the payment IDs to be
encrypted. Payment IDs used manually are not encrypted by default,
but can be encrypted using the new 'encrypt_payment_id' field
in the transfer and transfer_split RPC calls. It is not possible
to use an encrypted payment ID via a manual simplewallet
transfer/transfer_new command, though this is just a limitation
due to input parsing.
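With the boolean field gone (per the change above), the ID's length
decides: a short 8-byte (16 hex digit) payment ID is the encrypted
kind, while a full 32-byte one is used unencrypted. For example
(hypothetical address and amount), a transfer_split request whose
payment ID will be encrypted:

    {
      "destinations": [{"amount": 1000000000000, "address": "..."}],
      "mixin": 3,
      "payment_id": "0123456789abcdef"
    }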
If there are no blocks in the database (m_height == 0):
Don't assign an incorrect block range to check.
Skip the average block size check.
Test:
Run blockchain_converter with an existing source blockchain.bin and
a non-existent LMDB destination database.
The converter creates a BlockchainLMDB instance with zero height, due to
not being initialized with a genesis block, as is normally done by
Blockchain::init(). While different from the behavior of bitmonerod,
blockchain_import, and blockchain_export, the initialization hasn't been
strictly necessary.
The db batch size estimation normally uses an average block size, or a
default minimum block size, whichever is greater. In this case, as
there are no existing blocks to check for an average block size, the
default should be used.
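A minimal sketch of that fallback (illustrative names; the 4k default
minimum matches the figure quoted later in these notes):

    #include <algorithm>
    #include <cstdint>

    // Estimate the per-block size for batch sizing: the greater of
    // the recent average and the default minimum; with an empty chain
    // (m_height == 0) there is no average, so use the minimum alone.
    uint64_t batch_block_size(uint64_t m_height, uint64_t avg_block_size)
    {
      const uint64_t min_block_size = 4096;
      if (m_height == 0)
        return min_block_size;
      return std::max(avg_block_size, min_block_size);
    }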
It should avoid a lot of the issues when sending more than half the
wallet's contents due to change.
Actual output selection is still random. Changing this would
improve the matching of transaction amounts to output sizes,
but may have non-obvious effects on blockchain analysis.
Mapped to the new transfer_new command in simplewallet, and
transfer uses the existing algorithm.
To use in RPC, add "new_algorithm: true" in the transfer_split
JSON command. It is not used in the transfer command.
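To illustrate (hypothetical address and amount), the transfer_split
params would include:

    {
      "destinations": [{"amount": 1000000000000, "address": "..."}],
      "mixin": 3,
      "new_algorithm": true
    }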
boost doesn't support %zu for size_t, and the previous change
to %u could technically lose bits (though it would require splitting
a transfer into 4 billion transactions, which seems unlikely).
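A minimal sketch of the safe pattern (not necessarily the exact fix):
Boost.Format is type-aware, so explicitly widening the argument to a
fixed 64-bit type cannot truncate, whatever conversion character the
format string uses.

    #include <boost/format.hpp>
    #include <cstdint>
    #include <iostream>

    int main()
    {
      size_t n = 123456;  // size_t width varies by platform
      std::cout << boost::format("%u transactions")
                       % static_cast<uint64_t>(n);
    }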
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks (see the sketch after this list).
2. Optim: Cache verified txs and return result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk querying the database, and multi-thread when possible.
4. Optim: Disable double spend check on block verification, double spend is already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on functions called from check_tx_inputs, which causes slowdowns (only seems to happen on Ubuntu/VMs; reason TBD).
7. Optim: Removed looped full-tx hash computation when retrieving transactions from pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads per new block are needed when a new block arrives (instead of 1470 reads).
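A minimal sketch of the group long-hash threading from item 1 (the
stand-in hash below is a placeholder for the real CryptoNight
long-hash):

    #include <cstdint>
    #include <thread>
    #include <vector>

    // Placeholder for the real long-hash of a serialized block.
    static uint64_t long_hash(const std::vector<uint8_t>& blob)
    {
      uint64_t h = 1469598103934665603ull;  // FNV-1a stand-in
      for (uint8_t b : blob) { h ^= b; h *= 1099511628211ull; }
      return h;
    }

    // Split a group of block blobs across threads; each thread takes
    // a strided slice, so no two threads write the same slot.
    void hash_blocks(const std::vector<std::vector<uint8_t>>& blobs,
                     std::vector<uint64_t>& hashes, unsigned nthreads)
    {
      hashes.resize(blobs.size());
      std::vector<std::thread> pool;
      for (unsigned t = 0; t < nthreads; ++t)
        pool.emplace_back([&, t] {
          for (size_t i = t; i < blobs.size(); i += nthreads)
            hashes[i] = long_hash(blobs[i]);
        });
      for (auto& th : pool)
        th.join();
    }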
Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified the output_keys table to store public_key+unlock_time+height, allowing a single lookup per transaction (vs 3).
8. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details)
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.
LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up; TBD: measure actual effect).
2. Optim: Modified the output_keys table to store public_key+unlock_time+height, allowing a single lookup per transaction (vs 3); see the sketch after this list.
3. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details)
5. Mod: Auto-resize by +1GB instead of a x1.5 multiplier.
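A sketch of the consolidated output_keys record and the flat +1GB
resize (the struct layout follows the description in item 2; the LMDB
calls are the real API, and the resize must happen with no active
transactions):

    #include <cstdint>
    #include <lmdb.h>

    struct output_data_t
    {
      uint8_t  pubkey[32];   // output public key
      uint64_t unlock_time;  // from the owning tx
      uint64_t height;       // block height of the owning tx
    };                       // one lookup instead of three

    void grow_by_1gb(MDB_env* env)
    {
      MDB_envinfo info;
      mdb_env_info(env, &info);
      mdb_env_set_mapsize(env, info.me_mapsize + (1ULL << 30));
    }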
ETC:
1. Minor optimizations for slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks.
[PENDING ISSUES]
1. Berkeley db has a very slow "pop-block" operation. This is very noticeable on the RPI2, as it sometimes takes > 10 MINUTES to pop a block during reorganization.
This does not happen very often, however; most reorgs seem to take a few seconds, but it possibly depends on the number of outputs present. TBD.
2. Berkeley db, possible bug "unable to allocate memory". TBD.
[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
a. 0 = Compute long hash per block (may take a while depending on CPU)
b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but safest option to protect against power-out/crash conditions.
b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery-operated devices, or STABLE systems with a UPS and/or battery-backed write cache/solid state cache.
Fast - Write meta-data but defer data flush.
Fastest - Defer meta-data and data flush.
Sync - Flush data after nblocks_per_sync and wait.
Async - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
Show benchmark related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
For berkeley-db only. Auto remove logs if enabled.
**Note: lmdb and berkeley-db have changes to the tables and are not compatible with the official git head version.
At the moment, you need a full resync to use this optimized version.
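For example, an invocation combining the options above (binary name
assumed to be bitmonerod, as elsewhere in these notes):
bitmonerod --fast-block-sync=1 --db-sync-mode=fastest:async:1000 --prep-blocks-threads=4 --db-auto-remove-logs=1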
[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K + SSD (with full pow computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with the in-memory db can process blocks up to 585K in ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours + download time and usually takes 36 hours to sync the full chain.
Average processing times (with full pow computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437):
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block
**Note: The following data denotes processing times only (does not include p2p download time)
lmdb-optimized processing times (with full pow computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop, Dual-core / 4-threads U4200 (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).
lmdb-optimized processing times (with per-block-checkpoint)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with full pow computation)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2: improved from an estimated 3 months (???) to 2.5 days (*needs a 2A supply + 1GHz clock + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with per-block-checkpoint)
1. RPI2: 12-15 hours (*needs a 2A supply + 1GHz clock + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
fd73d9c Check and resize if needed at batch transaction start (warptangent)
f9e4afd blockchain_utilities: Increase debug statement's log level (warptangent)
699e4b3 blockchain_utilities: Pass expected number of blocks when starting batch (warptangent)
6e170c8 Optionally allow DB to know expected number of blocks at batch transaction start (warptangent)
This currently only affects blockchain_import and blockchain_converter.
When the number of blocks expected for the batch transaction is
provided, make an estimate of the DB space needed. If not enough free
space remains, resize the DB.
The estimate is made based on:
- the average size of the last 500 blocks, or, if larger, a minimum
block size of 4k
- a factor for the expanded size a block occupies in the DB across the
sub-dbs/tables
- a safety factor (1.7) to allow for a "reasonable" average block size
increase over the batch
Increase the DB size by whichever is greater: the estimated size needed
or a minimum increase size, currently 128 MB.
The conservative factors in the estimate help in testing that the resize
occurs when needed, and without gratuitous size increases. For common
use, the safety factor and minimum increase size could reasonably be
increased.
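A minimal sketch of the estimate with the factors above (the per-block
DB expansion factor is illustrative and named db_expand here; the 4k
floor, 1.7 safety factor, and 128 MB minimum increase are as
described):

    #include <algorithm>
    #include <cstdint>

    uint64_t batch_resize_increment(uint64_t nblocks,
                                    uint64_t avg_block_size)
    {
      const uint64_t min_block_size = 4 * 1024;      // 4k floor
      const double   db_expand      = 4.0;           // illustrative
      const double   safety         = 1.7;           // from the estimate
      const uint64_t min_increase   = 128ULL << 20;  // 128 MB

      const uint64_t block_size =
          std::max(avg_block_size, min_block_size);
      const uint64_t needed = static_cast<uint64_t>(
          nblocks * block_size * db_expand * safety);
      return std::max(needed, min_increase);  // grow by the larger
    }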
For testing, setting DEFAULT_MAPSIZE (blockchain_db/lmdb/db_lmdb.h) to 1
<< 27 (128 MB) and recompiling will ensure DB resizes take place sooner
and more frequently.
dc4dbc1 simplewallet: allow creating a wallet from a public address and view secret key (moneromooo-monero)
6a0f61d account: allow creating an account from a public address and view secret key (moneromooo-monero)
e05a58a wallet2: fix write_watch_only_wallet comment description (moneromooo-monero)
4bf6f0d simplewallet: forbid seed commands for watch only wallets (moneromooo-monero)