oxen-core/src/cryptonote_core/cryptonote_core.cpp

// Copyright (c) 2014-2019, The Monero Project
// Copyright (c) 2018, The Loki Project
//
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are
// permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of
// conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list
// of conditions and the following disclaimer in the documentation and/or other
// materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be
// used to endorse or promote products derived from this software without specific
// prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Parts of this file are originally copyright (c) 2012-2013 The Cryptonote developers
#include <boost/algorithm/string.hpp>
#include <boost/endian/conversion.hpp>
#include "string_tools.h"
using namespace epee;
#include <unordered_set>
#include <iomanip>
#include "cryptonote_core.h"
#include "common/util.h"
#include "common/updates.h"
#include "common/download.h"
#include "common/threadpool.h"
#include "common/command_line.h"
#include "warnings.h"
#include "crypto/crypto.h"
#include "cryptonote_config.h"
#include "misc_language.h"
#include "file_io_utils.h"
#include <csignal>
#include "checkpoints/checkpoints.h"
#include "ringct/rctTypes.h"
#include "blockchain_db/blockchain_db.h"
#include "ringct/rctSigs.h"
#include "common/notify.h"
#include "version.h"
#include "wipeable_string.h"
#include "common/i18n.h"
#include "net/local_ip.h"
#include "common/loki_integration_test_hooks.h"
#undef LOKI_DEFAULT_LOG_CATEGORY
#define LOKI_DEFAULT_LOG_CATEGORY "cn"
DISABLE_VS_WARNINGS(4355)
#define MERROR_VER(x) MCERROR("verify", x)
#define BAD_SEMANTICS_TXES_MAX_SIZE 100
// basically at least how many bytes the block itself serializes to without the miner tx
#define BLOCK_SIZE_SANITY_LEEWAY 100
namespace cryptonote
{
const command_line::arg_descriptor<bool, false> arg_testnet_on = {
"testnet"
, "Run on testnet. The wallet must be launched with --testnet flag."
, false
};
const command_line::arg_descriptor<bool, false> arg_stagenet_on = {
"stagenet"
, "Run on stagenet. The wallet must be launched with --stagenet flag."
, false
};
const command_line::arg_descriptor<bool> arg_regtest_on = {
"regtest"
, "Run in a regression testing mode."
, false
};
const command_line::arg_descriptor<difficulty_type> arg_fixed_difficulty = {
"fixed-difficulty"
, "Fixed difficulty used for testing."
, 0
};
const command_line::arg_descriptor<std::string, false, true, 2> arg_data_dir = {
"data-dir"
, "Specify data directory"
, tools::get_default_data_dir()
, {{ &arg_testnet_on, &arg_stagenet_on }}
, [](std::array<bool, 2> testnet_stagenet, bool defaulted, std::string val)->std::string {
if (testnet_stagenet[0])
return (boost::filesystem::path(val) / "testnet").string();
else if (testnet_stagenet[1])
return (boost::filesystem::path(val) / "stagenet").string();
return val;
}
};
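// Note: as the filter lambda above shows, running with --testnet or --stagenet changes the
// effective data directory to <data-dir>/testnet or <data-dir>/stagenet respectively.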
const command_line::arg_descriptor<bool> arg_offline = {
"offline"
, "Do not listen for peers, nor connect to any"
};
const command_line::arg_descriptor<size_t> arg_block_download_max_size = {
"block-download-max-size"
, "Set maximum size of block download queue in bytes (0 for default)"
, 0
};
static const command_line::arg_descriptor<bool> arg_test_drop_download = {
"test-drop-download"
, "For net tests: in download, discard ALL blocks instead checking/saving them (very fast)"
};
static const command_line::arg_descriptor<uint64_t> arg_test_drop_download_height = {
"test-drop-download-height"
, "Like test-drop-download but discards only after around certain height"
, 0
};
static const command_line::arg_descriptor<int> arg_test_dbg_lock_sleep = {
"test-dbg-lock-sleep"
, "Sleep time in ms, defaults to 0 (off), used to debug before/after locking mutex. Values 100 to 1000 are good for tests."
, 0
};
static const command_line::arg_descriptor<uint64_t> arg_fast_block_sync = {
"fast-block-sync"
, "Sync up most of the way by using embedded, known block hashes."
, 1
};
static const command_line::arg_descriptor<uint64_t> arg_prep_blocks_threads = {
"prep-blocks-threads"
, "Max number of threads to use when preparing block hashes in groups."
, 4
};
static const command_line::arg_descriptor<uint64_t> arg_show_time_stats = {
"show-time-stats"
, "Show time-stats when processing blocks/txs and disk synchronization."
, 0
};
static const command_line::arg_descriptor<size_t> arg_block_sync_size = {
"block-sync-size"
, "How many blocks to sync at once during chain synchronization (0 = adaptive)."
, 0
};
static const command_line::arg_descriptor<std::string> arg_check_updates = {
"check-updates"
, "Check for new versions of loki: [disabled|notify|download|update]"
, "notify"
};
static const command_line::arg_descriptor<bool> arg_pad_transactions = {
"pad-transactions"
, "Pad relayed transactions to help defend against traffic volume analysis"
, false
};
static const command_line::arg_descriptor<size_t> arg_max_txpool_weight = {
"max-txpool-weight"
, "Set maximum txpool weight in bytes."
, DEFAULT_TXPOOL_MAX_WEIGHT
};
static const command_line::arg_descriptor<bool> arg_service_node = {
"service-node"
, "Run as a service node, options 'service-node-public-ip' and 'storage-server-port' must be set"
};
static const command_line::arg_descriptor<std::string> arg_public_ip = {
"service-node-public-ip"
, "Public IP address on which this service node's services (such as the Loki "
"storage server) are accessible. This IP address will be advertised to the "
"network via the service node uptime proofs. Required if operating as a "
"service node."
};
static const command_line::arg_descriptor<uint16_t> arg_sn_bind_port = {
"storage-server-port"
, "The port on which this service node's storage server is accessible. A listening "
"storage server is required for service nodes. (This option is specified "
"automatically when using Loki Launcher.)"
, 0};
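// Taken together, running as a service node requires something like (placeholders illustrative):
//   --service-node --service-node-public-ip=<public IPv4> --storage-server-port=<port>
// handle_command_line() below rejects --service-node unless both of the other options are set.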
static const command_line::arg_descriptor<std::string> arg_block_notify = {
"block-notify"
, "Run a program for each new block, '%s' will be replaced by the block hash"
, ""
};
static const command_line::arg_descriptor<bool> arg_prune_blockchain = {
"prune-blockchain"
, "Prune blockchain"
, false
};
static const command_line::arg_descriptor<std::string> arg_reorg_notify = {
"reorg-notify"
, "Run a program for each reorg, '%s' will be replaced by the split height, "
"'%h' will be replaced by the new blockchain height, and '%n' will be "
"replaced by the number of new blocks in the new chain"
, ""
};
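// Illustrative use (hook path hypothetical): --reorg-notify="/path/to/reorg-hook.sh %s %h %n"
// runs the hook with the split height, the new chain height and the new-block count filled in.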
static const command_line::arg_descriptor<std::string> arg_block_rate_notify = {
"block-rate-notify"
, "Run a program when the block rate undergoes large fluctuations. This might "
"be a sign of large amounts of hash rate going on and off the Monero network, "
"and thus be of potential interest in predicting attacks. %t will be replaced "
"by the number of minutes for the observation window, %b by the number of "
"blocks observed within that window, and %e by the number of blocks that was "
"expected in that window. It is suggested that this notification is used to "
"automatically increase the number of confirmations required before a payment "
"is acted upon."
, ""
};
static const command_line::arg_descriptor<bool> arg_keep_alt_blocks = {
"keep-alt-blocks"
, "Keep alternative blocks on restart"
, false
};
const command_line::arg_descriptor<uint64_t> arg_recalculate_difficulty = {
"recalculate-difficulty",
"Recalculate per-block difficulty starting from the height specified",
0};
static const command_line::arg_descriptor<uint64_t> arg_store_quorum_history = {
"store-quorum-history",
"Store the service node quorum history for the last N blocks to allow historic quorum lookups "
"(e.g. by a block explorer). Specify the number of blocks of history to store, or 1 to store "
"the entire history. Requires considerably more memory and block chain storage.",
0};
//-----------------------------------------------------------------------------------------------
core::core(i_cryptonote_protocol* pprotocol):
m_mempool(m_blockchain_storage),
m_service_node_list(m_blockchain_storage),
m_blockchain_storage(m_mempool, m_service_node_list),
m_quorum_cop(*this),
m_miner(this, &m_blockchain_storage),
m_miner_address(boost::value_initialized<account_public_address>()),
m_starter_message_showed(false),
m_target_blockchain_height(0),
m_checkpoints_path(""),
m_last_json_checkpoints_update(0),
m_update_download(0),
m_nettype(UNDEFINED),
m_update_available(false),
m_last_storage_server_ping(time(nullptr)), // Reset the storage server last ping to make sure the very first uptime proof works
m_pad_transactions(false)
{
m_checkpoints_updating.clear();
set_cryptonote_protocol(pprotocol);
}
void core::set_cryptonote_protocol(i_cryptonote_protocol* pprotocol)
{
if(pprotocol)
m_pprotocol = pprotocol;
else
m_pprotocol = &m_protocol_stub;
}
//-----------------------------------------------------------------------------------------------
bool core::update_checkpoints_from_json_file()
{
if (m_nettype != MAINNET) return true;
if (m_checkpoints_updating.test_and_set()) return true;
// load json checkpoints every 10min and verify them with respect to what blocks we already have
bool res = true;
if (time(NULL) - m_last_json_checkpoints_update >= 600)
{
res = m_blockchain_storage.update_checkpoints_from_json_file(m_checkpoints_path);
m_last_json_checkpoints_update = time(NULL);
}
m_checkpoints_updating.clear();
// if anything fishy happened getting new checkpoints, bring down the house
if (!res)
{
graceful_exit();
}
return res;
}
//-----------------------------------------------------------------------------------
void core::stop()
{
m_miner.stop();
m_blockchain_storage.cancel();
tools::download_async_handle handle;
{
boost::lock_guard<boost::mutex> lock(m_update_mutex);
handle = m_update_download;
m_update_download = 0;
}
if (handle)
tools::download_cancel(handle);
}
//-----------------------------------------------------------------------------------
void core::init_options(boost::program_options::options_description& desc)
{
command_line::add_arg(desc, arg_data_dir);
command_line::add_arg(desc, arg_test_drop_download);
command_line::add_arg(desc, arg_test_drop_download_height);
command_line::add_arg(desc, arg_testnet_on);
command_line::add_arg(desc, arg_stagenet_on);
command_line::add_arg(desc, arg_regtest_on);
command_line::add_arg(desc, arg_fixed_difficulty);
command_line::add_arg(desc, arg_prep_blocks_threads);
command_line::add_arg(desc, arg_fast_block_sync);
command_line::add_arg(desc, arg_show_time_stats);
command_line::add_arg(desc, arg_block_sync_size);
command_line::add_arg(desc, arg_check_updates);
command_line::add_arg(desc, arg_test_dbg_lock_sleep);
command_line::add_arg(desc, arg_offline);
command_line::add_arg(desc, arg_block_download_max_size);
command_line::add_arg(desc, arg_max_txpool_weight);
command_line::add_arg(desc, arg_service_node);
command_line::add_arg(desc, arg_public_ip);
command_line::add_arg(desc, arg_sn_bind_port);
command_line::add_arg(desc, arg_pad_transactions);
command_line::add_arg(desc, arg_block_notify);
command_line::add_arg(desc, arg_prune_blockchain);
command_line::add_arg(desc, arg_reorg_notify);
command_line::add_arg(desc, arg_block_rate_notify);
command_line::add_arg(desc, arg_keep_alt_blocks);
command_line::add_arg(desc, arg_recalculate_difficulty);
command_line::add_arg(desc, arg_store_quorum_history);
#if defined(LOKI_ENABLE_INTEGRATION_TEST_HOOKS)
command_line::add_arg(desc, loki::arg_integration_test_hardforks_override);
command_line::add_arg(desc, loki::arg_integration_test_shared_mem_name);
#endif
miner::init_options(desc);
BlockchainDB::init_options(desc);
}
//-----------------------------------------------------------------------------------------------
bool core::handle_command_line(const boost::program_options::variables_map& vm)
{
if (m_nettype != FAKECHAIN)
{
const bool testnet = command_line::get_arg(vm, arg_testnet_on);
const bool stagenet = command_line::get_arg(vm, arg_stagenet_on);
m_nettype = testnet ? TESTNET : stagenet ? STAGENET : MAINNET;
}
m_config_folder = command_line::get_arg(vm, arg_data_dir);
test_drop_download_height(command_line::get_arg(vm, arg_test_drop_download_height));
m_pad_transactions = get_arg(vm, arg_pad_transactions);
m_offline = get_arg(vm, arg_offline);
if (command_line::get_arg(vm, arg_test_drop_download) == true)
test_drop_download();
m_service_node = command_line::get_arg(vm, arg_service_node);
if (m_service_node) {
/// TODO: parse these options early, before we start p2p server etc?
m_storage_port = command_line::get_arg(vm, arg_sn_bind_port);
bool storage_ok = true;
if (m_storage_port == 0) {
MERROR("Please specify the port on which the storage server is listening with: '--" << arg_sn_bind_port.name << " <port>'");
storage_ok = false;
}
const std::string pub_ip = command_line::get_arg(vm, arg_public_ip);
if (pub_ip.size())
{
if (!epee::string_tools::get_ip_int32_from_string(m_sn_public_ip, pub_ip)) {
MERROR("Unable to parse IPv4 public address from: " << pub_ip);
storage_ok = false;
}
if (!epee::net_utils::is_ip_public(m_sn_public_ip)) {
MERROR("Address given for public-ip is not public: " << epee::string_tools::get_ip_string_from_int32(m_sn_public_ip));
storage_ok = false;
}
}
else
{
MERROR("Please specify an IPv4 public address which the service node & storage server is accessible from with: '--" << arg_public_ip.name << " <ip address>'");
storage_ok = false;
}
if (!storage_ok) {
MERROR("IMPORTANT: All service node operators are now required to run the loki storage "
<< "server and provide the public ip and port on which it can be accessed on the internet.");
return false;
}
MGINFO("Storage server endpoint is set to: "
<< (epee::net_utils::ipv4_network_address{ m_sn_public_ip, m_storage_port }).str());
}
epee::debug::g_test_dbg_lock_sleep() = command_line::get_arg(vm, arg_test_dbg_lock_sleep);
return true;
}
//-----------------------------------------------------------------------------------------------
uint64_t core::get_current_blockchain_height() const
{
return m_blockchain_storage.get_current_blockchain_height();
}
//-----------------------------------------------------------------------------------------------
void core::get_blockchain_top(uint64_t& height, crypto::hash& top_id) const
{
top_id = m_blockchain_storage.get_tail_id(height);
}
//-----------------------------------------------------------------------------------------------
bool core::get_blocks(uint64_t start_offset, size_t count, std::vector<std::pair<cryptonote::blobdata,block>>& blocks, std::vector<cryptonote::blobdata>& txs) const
{
return m_blockchain_storage.get_blocks(start_offset, count, blocks, txs);
}
//-----------------------------------------------------------------------------------------------
bool core::get_blocks(uint64_t start_offset, size_t count, std::vector<std::pair<cryptonote::blobdata,block>>& blocks) const
{
return m_blockchain_storage.get_blocks(start_offset, count, blocks);
}
//-----------------------------------------------------------------------------------------------
bool core::get_blocks(uint64_t start_offset, size_t count, std::vector<block>& blocks) const
{
std::vector<std::pair<cryptonote::blobdata, cryptonote::block>> bs;
if (!m_blockchain_storage.get_blocks(start_offset, count, bs))
return false;
for (const auto &b: bs)
blocks.push_back(b.second);
return true;
}
//-----------------------------------------------------------------------------------------------
bool core::get_transactions(const std::vector<crypto::hash>& txs_ids, std::vector<cryptonote::blobdata>& txs, std::vector<crypto::hash>& missed_txs) const
{
return m_blockchain_storage.get_transactions_blobs(txs_ids, txs, missed_txs);
}
//-----------------------------------------------------------------------------------------------
bool core::get_split_transactions_blobs(const std::vector<crypto::hash>& txs_ids, std::vector<std::tuple<crypto::hash, cryptonote::blobdata, crypto::hash, cryptonote::blobdata>>& txs, std::vector<crypto::hash>& missed_txs) const
{
return m_blockchain_storage.get_split_transactions_blobs(txs_ids, txs, missed_txs);
}
//-----------------------------------------------------------------------------------------------
bool core::get_txpool_backlog(std::vector<tx_backlog_entry>& backlog) const
{
m_mempool.get_transaction_backlog(backlog);
return true;
}
//-----------------------------------------------------------------------------------------------
bool core::get_transactions(const std::vector<crypto::hash>& txs_ids, std::vector<transaction>& txs, std::vector<crypto::hash>& missed_txs) const
{
return m_blockchain_storage.get_transactions(txs_ids, txs, missed_txs);
}
//-----------------------------------------------------------------------------------------------
bool core::get_alternative_blocks(std::vector<block>& blocks) const
{
return m_blockchain_storage.get_alternative_blocks(blocks);
}
//-----------------------------------------------------------------------------------------------
size_t core::get_alternative_blocks_count() const
{
return m_blockchain_storage.get_alternative_blocks_count();
}
//-----------------------------------------------------------------------------------------------
bool core::init(const boost::program_options::variables_map& vm, const cryptonote::test_options *test_options, const GetCheckpointsCallback& get_checkpoints/* = nullptr */)
{
start_time = std::time(nullptr);
#if defined(LOKI_ENABLE_INTEGRATION_TEST_HOOKS)
const std::string arg_integration_test_override_hardforks = command_line::get_arg(vm, loki::arg_integration_test_hardforks_override);
std::vector<std::pair<uint8_t, uint64_t>> integration_test_hardforks;
if (!arg_integration_test_override_hardforks.empty())
{
// Expected format: <fork_version>:<fork_height>, ...
// Example: 7:0, 8:10, 9:20, 10:100
char const *ptr = arg_integration_test_override_hardforks.c_str();
while (ptr[0])
{
int hf_version = atoi(ptr);
while(ptr[0] != ':') ptr++;
++ptr;
int hf_height = atoi(ptr);
while(ptr[0] && ptr[0] != ',') ptr++;
integration_test_hardforks.push_back(std::make_pair(static_cast<uint8_t>(hf_version), static_cast<uint64_t>(hf_height)));
if (!ptr[0]) break;
ptr++;
}
}
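// For the example input above this yields {{7,0}, {8,10}, {9,20}, {10,100}}, i.e. hard fork
// version 7 at height 0, version 8 at height 10, and so on; it is handed to the core through
// test_options just below.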
cryptonote::test_options integration_hardfork_override = {integration_test_hardforks};
if (!arg_integration_test_override_hardforks.empty())
test_options = &integration_hardfork_override;
{
const std::string arg_shared_mem_name = command_line::get_arg(vm, loki::arg_integration_test_shared_mem_name);
loki::init_integration_test_context(arg_shared_mem_name);
}
#endif
const bool regtest = command_line::get_arg(vm, arg_regtest_on);
if (test_options != NULL || regtest)
{
m_nettype = FAKECHAIN;
}
bool r = handle_command_line(vm);
/// Currently terminating before blockchain is initialized results in a crash
/// during deinitialization... TODO: fix that
CHECK_AND_ASSERT_MES(r, false, "Failed to apply command line options.");
std::string db_type = command_line::get_arg(vm, cryptonote::arg_db_type);
std::string db_sync_mode = command_line::get_arg(vm, cryptonote::arg_db_sync_mode);
bool db_salvage = command_line::get_arg(vm, cryptonote::arg_db_salvage) != 0;
bool fast_sync = command_line::get_arg(vm, arg_fast_block_sync) != 0;
uint64_t blocks_threads = command_line::get_arg(vm, arg_prep_blocks_threads);
std::string check_updates_string = command_line::get_arg(vm, arg_check_updates);
size_t max_txpool_weight = command_line::get_arg(vm, arg_max_txpool_weight);
bool prune_blockchain = command_line::get_arg(vm, arg_prune_blockchain);
bool keep_alt_blocks = command_line::get_arg(vm, arg_keep_alt_blocks);
if (m_service_node)
{
r = init_service_node_key();
CHECK_AND_ASSERT_MES(r, false, "Failed to create or load service node key");
m_service_node_list.set_my_service_node_keys(&m_service_node_pubkey);
}
boost::filesystem::path folder(m_config_folder);
if (m_nettype == FAKECHAIN)
folder /= "fake";
// make sure the data directory exists, and try to lock it
CHECK_AND_ASSERT_MES (boost::filesystem::exists(folder) || boost::filesystem::create_directories(folder), false,
std::string("Failed to create directory ").append(folder.string()).c_str());
// check for blockchain.bin
try
{
const boost::filesystem::path old_files = folder;
if (boost::filesystem::exists(old_files / "blockchain.bin"))
{
MWARNING("Found old-style blockchain.bin in " << old_files.string());
MWARNING("Loki now uses a new format. You can either remove blockchain.bin to start syncing");
MWARNING("the blockchain anew, or use loki-blockchain-export and loki-blockchain-import to");
MWARNING("convert your existing blockchain.bin to the new format. See README.md for instructions.");
return false;
}
}
// folder might not be a directory, etc, etc
catch (...) { }
std::unique_ptr<BlockchainDB> db(new_db(db_type));
if (db == NULL)
{
LOG_ERROR("Attempted to use non-existent database type");
return false;
}
folder /= db->get_db_name();
MGINFO("Loading blockchain from folder " << folder.string() << " ...");
const std::string filename = folder.string();
// default to fast:async:1 if overridden
blockchain_db_sync_mode sync_mode = db_defaultsync;
bool sync_on_blocks = true;
uint64_t sync_threshold = 1;
if (m_nettype == FAKECHAIN)
{
#if !defined(LOKI_ENABLE_INTEGRATION_TEST_HOOKS) // In integration mode, don't delete the DB. This should be explicitly done in the tests. Otherwise the more likely behaviour is persisting the DB across multiple daemons in the same test.
// reset the db by removing the database file before opening it
if (!db->remove_data_file(filename))
{
MERROR("Failed to remove data file in " << filename);
return false;
}
#endif
}
try
{
uint64_t db_flags = 0;
std::vector<std::string> options;
boost::trim(db_sync_mode);
boost::split(options, db_sync_mode, boost::is_any_of(" :"));
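// e.g. a --db-sync-mode of "fast:async:1000" splits into {"fast", "async", "1000"}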
const bool db_sync_mode_is_default = command_line::is_arg_defaulted(vm, cryptonote::arg_db_sync_mode);
for(const auto &option : options)
MDEBUG("option: " << option);
// default to fast:async:1
uint64_t DEFAULT_FLAGS = DBF_FAST;
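// The option string parsed below is the --db-sync-mode argument, of the form
//   [safe|fast|fastest]:[sync|async]:[nblocks_per_sync]
// e.g. "fastest:async:1000". The third field may also carry a "blocks" or "bytes"
// suffix, which is handled further down.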
if(options.size() == 0)
{
// default to fast:async:1
db_flags = DEFAULT_FLAGS;
}
bool safemode = false;
if(options.size() >= 1)
{
if(options[0] == "safe")
{
safemode = true;
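// With DBF_SAFE the database itself flushes (fdatasync/fsync or equivalent) on every
// stored block, so the explicit blockchain-level sync is presumably redundant, which
// would explain falling back to db_nosync here when the user asked for "safe".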
db_flags = DBF_SAFE;
sync_mode = db_sync_mode_is_default ? db_defaultsync : db_nosync;
}
else if(options[0] == "fast")
{
db_flags = DBF_FAST;
sync_mode = db_sync_mode_is_default ? db_defaultsync : db_async;
}
else if(options[0] == "fastest")
{
db_flags = DBF_FASTEST;
sync_threshold = 1000; // default to fastest:async:1000
sync_mode = db_sync_mode_is_default ? db_defaultsync : db_async;
}
else
db_flags = DEFAULT_FLAGS;
}
if(options.size() >= 2 && !safemode)
{
if(options[1] == "sync")
sync_mode = db_sync_mode_is_default ? db_defaultsync : db_sync;
else if(options[1] == "async")
sync_mode = db_sync_mode_is_default ? db_defaultsync : db_async;
}
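    // Third db-sync-mode token: the per-sync threshold, given either as a block
    // count (bare number or "<n>blocks") or as a byte count ("<n>bytes").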
if(options.size() >= 3 && !safemode)
{
char *endptr;
uint64_t threshold = strtoull(options[2].c_str(), &endptr, 0);
if (*endptr == '\0' || !strcmp(endptr, "blocks"))
{
sync_on_blocks = true;
sync_threshold = threshold;
}
else if (!strcmp(endptr, "bytes"))
{
sync_on_blocks = false;
sync_threshold = threshold;
}
else
{
LOG_ERROR("Invalid db sync mode: " << options[2]);
return false;
}
}
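    // --db-salvage: ask the database back-end to attempt recovery of a possibly
    // corrupted database when it is opened.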
if (db_salvage)
db_flags |= DBF_SALVAGE;
db->open(filename, db_flags);
if(!db->m_open)
return false;
}
catch (const DB_ERROR& e)
{
LOG_ERROR("Error opening database: " << e.what());
return false;
}
m_blockchain_storage.set_user_options(blocks_threads,
sync_on_blocks, sync_threshold, sync_mode, fast_sync);
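    // Optional notification hooks: each spec names an external command to spawn
    // when a block is added, when a reorg occurs, or when the observed block rate
    // deviates from expectations.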
try
{
if (!command_line::is_arg_defaulted(vm, arg_block_notify))
m_blockchain_storage.set_block_notify(std::shared_ptr<tools::Notify>(new tools::Notify(command_line::get_arg(vm, arg_block_notify).c_str())));
}
catch (const std::exception &e)
{
MERROR("Failed to parse block notify spec");
}
try
{
if (!command_line::is_arg_defaulted(vm, arg_reorg_notify))
m_blockchain_storage.set_reorg_notify(std::shared_ptr<tools::Notify>(new tools::Notify(command_line::get_arg(vm, arg_reorg_notify).c_str())));
}
catch (const std::exception &e)
{
MERROR("Failed to parse reorg notify spec");
}
try
{
if (!command_line::is_arg_defaulted(vm, arg_block_rate_notify))
m_block_rate_notify.reset(new tools::Notify(command_line::get_arg(vm, arg_block_rate_notify).c_str()));
}
catch (const std::exception &e)
{
MERROR("Failed to parse block rate notify spec");
}
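    // Regtest hard fork schedule: v1 at the genesis height, then jump straight to
    // the newest mainnet hard fork version at height 1.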
const std::vector<std::pair<uint8_t, uint64_t>> regtest_hard_forks = {std::make_pair(1, 0), std::make_pair(Blockchain::get_hard_fork_heights(MAINNET).back().version, 1), std::make_pair(0, 0)};
const cryptonote::test_options regtest_test_options = {
regtest_hard_forks,
0
};
const difficulty_type fixed_difficulty = command_line::get_arg(vm, arg_fixed_difficulty);
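    // Release the owning pointer here: the blockchain storage is initialized with the
    // raw handle below, while the service node list keeps a non-owning pointer to the
    // same database instance.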
BlockchainDB *initialized_db = db.release();
// Service Nodes
{
m_service_node_list.set_db_pointer(initialized_db);
m_service_node_list.set_quorum_history_storage(command_line::get_arg(vm, arg_store_quorum_history));
m_blockchain_storage.hook_block_added(m_service_node_list);
m_blockchain_storage.hook_blockchain_detached(m_service_node_list);
m_blockchain_storage.hook_init(m_service_node_list);
m_blockchain_storage.hook_validate_miner_tx(m_service_node_list);
// NOTE: There is an implicit dependency on service node lists being hooked first!
m_blockchain_storage.hook_init(m_quorum_cop);
m_blockchain_storage.hook_block_added(m_quorum_cop);
m_blockchain_storage.hook_blockchain_detached(m_quorum_cop);
}
// Checkpoints
{
auto data_dir = boost::filesystem::path(m_config_folder);
boost::filesystem::path json(JSON_HASH_FILE_NAME);
boost::filesystem::path checkpoint_json_hashfile_fullpath = data_dir / json;
m_checkpoints_path = checkpoint_json_hashfile_fullpath.string();
}
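    // Initialize the blockchain itself; the regtest options and fixed difficulty
    // override consensus parameters for testing configurations only.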
r = m_blockchain_storage.init(initialized_db, m_nettype, m_offline, regtest ? &regtest_test_options : test_options, fixed_difficulty, get_checkpoints);
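    // --recalculate-difficulty: have the database recompute per-block difficulty
    // values starting from the requested height.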
if (!command_line::is_arg_defaulted(vm, arg_recalculate_difficulty))
{
uint64_t recalc_diff_from_block = command_line::get_arg(vm, arg_recalculate_difficulty);
cryptonote::BlockchainDB::fixup_context context = {};
context.type = cryptonote::BlockchainDB::fixup_type::calculate_difficulty;
context.calculate_difficulty_params.start_height = recalc_diff_from_block;
initialized_db->fixup(context);
}
r = m_mempool.init(max_txpool_weight);
CHECK_AND_ASSERT_MES(r, false, "Failed to initialize memory pool");
// now that we have a valid m_blockchain_storage, we can clean out any
// transactions in the pool that do not conform to the current fork
m_mempool.validate(m_blockchain_storage.get_current_hard_fork_version());
bool show_time_stats = command_line::get_arg(vm, arg_show_time_stats) != 0;
m_blockchain_storage.set_show_time_stats(show_time_stats);
CHECK_AND_ASSERT_MES(r, false, "Failed to initialize blockchain storage");
block_sync_size = command_line::get_arg(vm, arg_block_sync_size);
if (block_sync_size > BLOCKS_SYNCHRONIZING_MAX_COUNT)
MERROR("Error --dblock-sync-size cannot be greater than " << BLOCKS_SYNCHRONIZING_MAX_COUNT);
MGINFO("Loading checkpoints");
CHECK_AND_ASSERT_MES(update_checkpoints_from_json_file(), false, "One or more checkpoints loaded from json conflicted with existing checkpoints.");
// DNS versions checking
if (check_updates_string == "disabled")
check_updates_level = UPDATES_DISABLED;
else if (check_updates_string == "notify")
check_updates_level = UPDATES_NOTIFY;
else if (check_updates_string == "download")
check_updates_level = UPDATES_DOWNLOAD;
else if (check_updates_string == "update")
check_updates_level = UPDATES_UPDATE;
else {
MERROR("Invalid argument to --dns-versions-check: " << check_updates_string);
return false;
}
r = m_miner.init(vm, m_nettype);
CHECK_AND_ASSERT_MES(r, false, "Failed to initialize miner instance");
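    // Unless configured to keep them, discard alternative blocks left over from
    // previous runs (skipped when the database is read-only).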
if (!keep_alt_blocks && !m_blockchain_storage.get_db().is_read_only())
m_blockchain_storage.get_db().drop_alt_blocks();
if (prune_blockchain)
{
      // prune now if the blockchain has not been pruned yet, otherwise just update the existing pruning metadata
if (m_blockchain_storage.get_current_blockchain_height() > 1 && !m_blockchain_storage.get_blockchain_pruning_seed())
{
MGINFO("Pruning blockchain...");
CHECK_AND_ASSERT_MES(m_blockchain_storage.prune_blockchain(), false, "Failed to prune blockchain");
}
else
{
CHECK_AND_ASSERT_MES(m_blockchain_storage.update_blockchain_pruning(), false, "Failed to update blockchain pruning");
}
}
return load_state_data();
}
//-----------------------------------------------------------------------------------------------
bool core::init_service_node_key()
{
std::string keypath = m_config_folder + "/key";
if (epee::file_io_utils::is_file_exist(keypath))
{
std::string keystr;
bool r = epee::file_io_utils::load_file_to_string(keypath, keystr);
memcpy(&unwrap(unwrap(m_service_node_key)), keystr.data(), sizeof(m_service_node_key));
wipeable_string wipe(keystr);
CHECK_AND_ASSERT_MES(r, false, "failed to load service node key from file");
r = crypto::secret_key_to_public_key(m_service_node_key, m_service_node_pubkey);
CHECK_AND_ASSERT_MES(r, false, "failed to generate pubkey from secret key");
}
else
{
cryptonote::keypair keypair = keypair::generate(hw::get_device("default"));
m_service_node_pubkey = keypair.pub;
m_service_node_key = keypair.sec;
std::string keystr(reinterpret_cast<const char *>(&m_service_node_key), sizeof(m_service_node_key));
bool r = epee::file_io_utils::save_string_to_file(keypath, keystr);
wipeable_string wipe(keystr);
CHECK_AND_ASSERT_MES(r, false, "failed to save service node key to file");
using namespace boost::filesystem;
permissions(keypath, owner_read);
}
MGINFO_YELLOW("Service node pubkey is " << epee::string_tools::pod_to_hex(m_service_node_pubkey));
return true;
}
//-----------------------------------------------------------------------------------------------
bool core::set_genesis_block(const block& b)
{
return m_blockchain_storage.reset_and_set_genesis_block(b);
}
//-----------------------------------------------------------------------------------------------
bool core::load_state_data()
{
// may be some code later
return true;
}
//-----------------------------------------------------------------------------------------------
bool core::deinit()
{
m_service_node_list.store();
m_service_node_list.set_db_pointer(nullptr);
m_miner.stop();
m_mempool.deinit();
m_blockchain_storage.deinit();
return true;
}
//-----------------------------------------------------------------------------------------------
void core::test_drop_download()
{
m_test_drop_download = false;
}
//-----------------------------------------------------------------------------------------------
void core::test_drop_download_height(uint64_t height)
{
m_test_drop_download_height = height;
}
//-----------------------------------------------------------------------------------------------
bool core::get_test_drop_download() const
{
return m_test_drop_download;
}
//-----------------------------------------------------------------------------------------------
bool core::get_test_drop_download_height() const
{
if (m_test_drop_download_height == 0)
return true;
if (get_blockchain_storage().get_current_blockchain_height() <= m_test_drop_download_height)
return true;
return false;
}
//-----------------------------------------------------------------------------------------------
bool core::handle_incoming_tx_pre(const blobdata& tx_blob, tx_verification_context& tvc, cryptonote::transaction &tx, crypto::hash &tx_hash, bool keeped_by_block, bool relayed, bool do_not_relay)
{
tvc = boost::value_initialized<tx_verification_context>();
if(tx_blob.size() > get_max_tx_size())
{
LOG_PRINT_L1("WRONG TRANSACTION BLOB, too big size " << tx_blob.size() << ", rejected");
tvc.m_verifivation_failed = true;
tvc.m_too_big = true;
return false;
}
tx_hash = crypto::null_hash;
if(!parse_tx_from_blob(tx, tx_hash, tx_blob))
{
LOG_PRINT_L1("WRONG TRANSACTION BLOB, Failed to parse, rejected");
tvc.m_verifivation_failed = true;
return false;
}
//std::cout << "!"<< tx.vin.size() << std::endl;
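    // Fast-path rejection: check the two rotating sets of recently seen transactions
    // with bad semantics (maintained by set_semantics_failed()).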
bad_semantics_txes_lock.lock();
for (int idx = 0; idx < 2; ++idx)
{
if (bad_semantics_txes[idx].find(tx_hash) != bad_semantics_txes[idx].end())
{
bad_semantics_txes_lock.unlock();
LOG_PRINT_L1("Transaction already seen with bad semantics, rejected");
tvc.m_verifivation_failed = true;
return false;
}
}
bad_semantics_txes_lock.unlock();
return true;
}
//-----------------------------------------------------------------------------------------------
bool core::handle_incoming_tx_post(const blobdata& tx_blob, tx_verification_context& tvc, cryptonote::transaction &tx, crypto::hash &tx_hash, bool keeped_by_block, bool relayed, bool do_not_relay)
{
if(!check_tx_syntax(tx))
{
LOG_PRINT_L1("WRONG TRANSACTION BLOB, Failed to check tx " << tx_hash << " syntax, rejected");
tvc.m_verifivation_failed = true;
return false;
}
return true;
}
//-----------------------------------------------------------------------------------------------
void core::set_semantics_failed(const crypto::hash &tx_hash)
{
LOG_PRINT_L1("WRONG TRANSACTION BLOB, Failed to check tx " << tx_hash << " semantic, rejected");
bad_semantics_txes_lock.lock();
bad_semantics_txes[0].insert(tx_hash);
if (bad_semantics_txes[0].size() >= BAD_SEMANTICS_TXES_MAX_SIZE)
{
std::swap(bad_semantics_txes[0], bad_semantics_txes[1]);
bad_semantics_txes[0].clear();
}
bad_semantics_txes_lock.unlock();
}
//-----------------------------------------------------------------------------------------------
static bool is_canonical_bulletproof_layout(const std::vector<rct::Bulletproof> &proofs)
{
if (proofs.size() != 1)
return false;
const size_t sz = proofs[0].V.size();
if (sz == 0 || sz > BULLETPROOF_MAX_OUTPUTS)
return false;
return true;
}
//-----------------------------------------------------------------------------------------------
bool core::handle_incoming_tx_accumulated_batch(std::vector<tx_verification_batch_info> &tx_info, bool keeped_by_block)
{
bool ret = true;
if (keeped_by_block && get_blockchain_storage().is_within_compiled_block_hash_area())
{
MTRACE("Skipping semantics check for tx kept by block in embedded hash area");
return true;
}
std::vector<const rct::rctSig*> rvv;
for (size_t n = 0; n < tx_info.size(); ++n)
{
if (!check_tx_semantic(*tx_info[n].tx, keeped_by_block))
{
set_semantics_failed(tx_info[n].tx_hash);
tx_info[n].tvc.m_verifivation_failed = true;
tx_info[n].result = false;
continue;
}
if (tx_info[n].tx->type != txtype::standard)
continue;
const rct::rctSig &rv = tx_info[n].tx->rct_signatures;
switch (rv.type) {
case rct::RCTTypeNull:
// coinbase should not come here, so we reject for all other types
MERROR_VER("Unexpected Null rctSig type");
set_semantics_failed(tx_info[n].tx_hash);
tx_info[n].tvc.m_verifivation_failed = true;
tx_info[n].result = false;
break;
case rct::RCTTypeSimple:
if (!rct::verRctSemanticsSimple(rv))
{
MERROR_VER("rct signature semantics check failed");
set_semantics_failed(tx_info[n].tx_hash);
tx_info[n].tvc.m_verifivation_failed = true;
tx_info[n].result = false;
break;
}
break;
case rct::RCTTypeFull:
if (!rct::verRct(rv, true))
{
MERROR_VER("rct signature semantics check failed");
set_semantics_failed(tx_info[n].tx_hash);
tx_info[n].tvc.m_verifivation_failed = true;
tx_info[n].result = false;
break;
}
break;
case rct::RCTTypeBulletproof:
case rct::RCTTypeBulletproof2:
if (!is_canonical_bulletproof_layout(rv.p.bulletproofs))
{
MERROR_VER("Bulletproof does not have canonical form");
set_semantics_failed(tx_info[n].tx_hash);
tx_info[n].tvc.m_verifivation_failed = true;
tx_info[n].result = false;
break;
}
rvv.push_back(&rv); // delayed batch verification
break;
default:
MERROR_VER("Unknown rct type: " << rv.type);
set_semantics_failed(tx_info[n].tx_hash);
tx_info[n].tvc.m_verifivation_failed = true;
tx_info[n].result = false;
break;
}
}
if (!rvv.empty() && !rct::verRctSemanticsSimple(rvv))
{
LOG_PRINT_L1("One transaction among this group has bad semantics, verifying one at a time");
ret = false;
const bool assumed_bad = rvv.size() == 1; // if there's only one tx, it must be the bad one
for (size_t n = 0; n < tx_info.size(); ++n)
{
if (!tx_info[n].result)
continue;
if (tx_info[n].tx->rct_signatures.type != rct::RCTTypeBulletproof && tx_info[n].tx->rct_signatures.type != rct::RCTTypeBulletproof2)
continue;
if (assumed_bad || !rct::verRctSemanticsSimple(tx_info[n].tx->rct_signatures))
{
set_semantics_failed(tx_info[n].tx_hash);
tx_info[n].tvc.m_verifivation_failed = true;
tx_info[n].result = false;
}
}
}
return ret;
}
//-----------------------------------------------------------------------------------------------
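  // Entry point for a batch of raw tx blobs: parse and pre-check each blob in parallel on the
  // threadpool, skip anything already known to the mempool or the blockchain, run the post-parse
  // checks (again in parallel), batch-verify semantics via handle_incoming_tx_accumulated_batch(),
  // and finally hand the surviving transactions to the mempool through add_new_tx().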
bool core::handle_incoming_txs(const std::vector<blobdata>& tx_blobs, std::vector<tx_verification_context>& tvc, bool keeped_by_block, bool relayed, bool do_not_relay)
{
TRY_ENTRY();
CRITICAL_REGION_LOCAL(m_incoming_tx_lock);
struct result { bool res; cryptonote::transaction tx; crypto::hash hash; };
std::vector<result> results(tx_blobs.size());
tvc.resize(tx_blobs.size());
tools::threadpool& tpool = tools::threadpool::getInstance();
tools::threadpool::waiter waiter;
std::vector<blobdata>::const_iterator it = tx_blobs.begin();
for (size_t i = 0; i < tx_blobs.size(); i++, ++it) {
tpool.submit(&waiter, [&, i, it] {
try
{
results[i].res = handle_incoming_tx_pre(*it, tvc[i], results[i].tx, results[i].hash, keeped_by_block, relayed, do_not_relay);
}
catch (const std::exception &e)
{
MERROR_VER("Exception in handle_incoming_tx_pre: " << e.what());
tvc[i].m_verifivation_failed = true;
results[i].res = false;
}
});
}
waiter.wait(&tpool);
it = tx_blobs.begin();
std::vector<bool> already_have(tx_blobs.size(), false);
for (size_t i = 0; i < tx_blobs.size(); i++, ++it) {
if (!results[i].res)
continue;
if(m_mempool.have_tx(results[i].hash))
{
LOG_PRINT_L2("tx " << results[i].hash << "already have transaction in tx_pool");
already_have[i] = true;
}
else if(m_blockchain_storage.have_tx(results[i].hash))
{
LOG_PRINT_L2("tx " << results[i].hash << " already have transaction in blockchain");
already_have[i] = true;
}
else
{
tpool.submit(&waiter, [&, i, it] {
try
{
results[i].res = handle_incoming_tx_post(*it, tvc[i], results[i].tx, results[i].hash, keeped_by_block, relayed, do_not_relay);
}
catch (const std::exception &e)
{
MERROR_VER("Exception in handle_incoming_tx_post: " << e.what());
tvc[i].m_verifivation_failed = true;
results[i].res = false;
}
});
}
}
waiter.wait(&tpool);
std::vector<tx_verification_batch_info> tx_info;
tx_info.reserve(tx_blobs.size());
for (size_t i = 0; i < tx_blobs.size(); i++) {
if (!results[i].res || already_have[i])
continue;
tx_info.push_back({&results[i].tx, results[i].hash, tvc[i], results[i].res});
}
if (!tx_info.empty())
handle_incoming_tx_accumulated_batch(tx_info, keeped_by_block);
bool ok = true;
it = tx_blobs.begin();
for (size_t i = 0; i < tx_blobs.size(); i++, ++it) {
if (!results[i].res)
{
ok = false;
continue;
}
if (keeped_by_block)
get_blockchain_storage().on_new_tx_from_block(results[i].tx);
if (already_have[i])
continue;
const size_t weight = get_transaction_weight(results[i].tx, it->size());
ok &= add_new_tx(results[i].tx, results[i].hash, tx_blobs[i], weight, tvc[i], keeped_by_block, relayed, do_not_relay);
if(tvc[i].m_verifivation_failed)
{
MERROR_VER("Transaction verification failed: " << results[i].hash);
}
else if(tvc[i].m_verifivation_impossible)
{
MERROR_VER("Transaction verification impossible: " << results[i].hash);
}
if(tvc[i].m_added_to_pool)
MDEBUG("tx added: " << results[i].hash);
}
return ok;
CATCH_ENTRY_L0("core::handle_incoming_txs()", false);
}
//-----------------------------------------------------------------------------------------------
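  // Single-transaction convenience wrapper around the batched handle_incoming_txs() path.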
bool core::handle_incoming_tx(const blobdata& tx_blob, tx_verification_context& tvc, bool keeped_by_block, bool relayed, bool do_not_relay)
{
std::vector<cryptonote::blobdata> tx_blobs;
tx_blobs.push_back(tx_blob);
std::vector<tx_verification_context> tvcv(1);
bool r = handle_incoming_txs(tx_blobs, tvcv, keeped_by_block, relayed, do_not_relay);
tvc = tvcv[0];
return r;
}
//-----------------------------------------------------------------------------------------------
bool core::get_stat_info(core_stat_info& st_inf) const
{
st_inf.mining_speed = m_miner.get_speed();
st_inf.alternative_blocks = m_blockchain_storage.get_alternative_blocks_count();
st_inf.blockchain_height = m_blockchain_storage.get_current_blockchain_height();
st_inf.tx_pool_size = m_mempool.get_transactions_count();
st_inf.top_block_id_str = epee::string_tools::pod_to_hex(m_blockchain_storage.get_tail_id());
return true;
}
//-----------------------------------------------------------------------------------------------
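  // Context-free sanity checks on a transaction: non-standard tx types must carry no inputs while
  // standard txs must have at least one; input and output types must be supported; amounts must
  // not overflow; ringct txs must have one outPk per output; v1 txs must spend more than they
  // output (the difference being the fee); the tx must fit under the current block weight limit
  // (unless it arrived as part of a block); and key images must be unique, reference distinct
  // ring members, and lie in the valid domain.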
bool core::check_tx_semantic(const transaction& tx, bool keeped_by_block) const
{
if (tx.type != txtype::standard)
{
if (tx.vin.size() != 0)
{
MERROR_VER("tx type: " << tx.type << " must have 0 inputs, received: " << tx.vin.size() << ", rejected for tx id = " << get_transaction_hash(tx));
return false;
}
}
else if (!tx.vin.size())
{
MERROR_VER("tx with empty inputs, rejected for tx id= " << get_transaction_hash(tx));
return false;
}
if(!check_inputs_types_supported(tx))
{
MERROR_VER("unsupported input types for tx id= " << get_transaction_hash(tx));
return false;
}
if(!check_outs_valid(tx))
{
MERROR_VER("tx with invalid outputs, rejected for tx id= " << get_transaction_hash(tx));
return false;
}
if (tx.version >= txversion::v2_ringct)
{
if (tx.rct_signatures.outPk.size() != tx.vout.size())
{
MERROR_VER("tx with mismatched vout/outPk count, rejected for tx id= " << get_transaction_hash(tx));
return false;
}
}
if(!check_money_overflow(tx))
{
MERROR_VER("tx has money overflow, rejected for tx id= " << get_transaction_hash(tx));
return false;
}
if (tx.version == txversion::v1)
{
uint64_t amount_in = 0;
get_inputs_money_amount(tx, amount_in);
uint64_t amount_out = get_outs_money_amount(tx);
if(amount_in <= amount_out)
{
MERROR_VER("tx with wrong amounts: ins " << amount_in << ", outs " << amount_out << ", rejected for tx id= " << get_transaction_hash(tx));
return false;
}
}
if(!keeped_by_block && get_transaction_weight(tx) >= m_blockchain_storage.get_current_cumulative_block_weight_limit() - CRYPTONOTE_COINBASE_BLOB_RESERVED_SIZE)
{
MERROR_VER("tx is too large " << get_transaction_weight(tx) << ", expected not bigger than " << m_blockchain_storage.get_current_cumulative_block_weight_limit() - CRYPTONOTE_COINBASE_BLOB_RESERVED_SIZE);
return false;
}
if(!check_tx_inputs_keyimages_diff(tx))
{
MERROR_VER("tx uses a single key image more than once");
return false;
}
if (!check_tx_inputs_ring_members_diff(tx))
{
MERROR_VER("tx uses duplicate ring members");
return false;
}
if (!check_tx_inputs_keyimages_domain(tx))
{
MERROR_VER("tx uses key image not in the valid domain");
return false;
}
return true;
}
//-----------------------------------------------------------------------------------------------
bool core::is_key_image_spent(const crypto::key_image &key_image) const
{
return m_blockchain_storage.have_tx_keyimg_as_spent(key_image);
}
//-----------------------------------------------------------------------------------------------
bool core::are_key_images_spent(const std::vector<crypto::key_image>& key_im, std::vector<bool> &spent) const
{
spent.clear();
for(auto& ki: key_im)
{
spent.push_back(m_blockchain_storage.have_tx_keyimg_as_spent(ki));
}
return true;
}
//-----------------------------------------------------------------------------------------------
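  // Number of blocks to request per sync chunk: an explicitly configured (non-zero)
  // block_sync_size takes precedence, otherwise the pre-v4 default is used below quick_height and
  // the standard default at or above it.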
size_t core::get_block_sync_size(uint64_t height) const
{
static const uint64_t quick_height = m_nettype == TESTNET ? 801219 : m_nettype == MAINNET ? 1220516 : 0;
if (block_sync_size > 0)
return block_sync_size;
if (height >= quick_height)
return BLOCKS_SYNCHRONIZING_DEFAULT_COUNT;
return BLOCKS_SYNCHRONIZING_DEFAULT_COUNT_PRE_V4;
}
//-----------------------------------------------------------------------------------------------
bool core::are_key_images_spent_in_pool(const std::vector<crypto::key_image>& key_im, std::vector<bool> &spent) const
{
spent.clear();
return m_mempool.check_for_key_images(key_im, spent);
}
//-----------------------------------------------------------------------------------------------
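  // Walks blocks [start_offset, start_offset + count - 1] and returns {total emission, total fees},
  // where a block's emission is its coinbase output amount minus the fees it collected.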
std::pair<uint64_t, uint64_t> core::get_coinbase_tx_sum(const uint64_t start_offset, const size_t count)
{
uint64_t emission_amount = 0;
uint64_t total_fee_amount = 0;
if (count)
{
const uint64_t end = start_offset + count - 1;
m_blockchain_storage.for_blocks_range(start_offset, end,
[this, &emission_amount, &total_fee_amount](uint64_t, const crypto::hash& hash, const block& b){
std::vector<transaction> txs;
std::vector<crypto::hash> missed_txs;
uint64_t coinbase_amount = get_outs_money_amount(b.miner_tx);
this->get_transactions(b.tx_hashes, txs, missed_txs);
uint64_t tx_fee_amount = 0;
for(const auto& tx: txs)
{
tx_fee_amount += get_tx_fee(tx);
}
emission_amount += coinbase_amount - tx_fee_amount;
total_fee_amount += tx_fee_amount;
return true;
});
}
return std::pair<uint64_t, uint64_t>(emission_amount, total_fee_amount);
}
//-----------------------------------------------------------------------------------------------
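  // Rejects a transaction that spends the same key image more than once among its own inputs.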
bool core::check_tx_inputs_keyimages_diff(const transaction& tx) const
{
std::unordered_set<crypto::key_image> ki;
for(const auto& in: tx.vin)
{
CHECKED_GET_SPECIFIC_VARIANT(in, const txin_to_key, tokey_in, false);
if(!ki.insert(tokey_in.k_image).second)
return false;
}
return true;
}
//-----------------------------------------------------------------------------------------------
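  // From hard fork v6 onward, rejects rings that reference the same output twice: key offsets
  // after the first are relative, so a zero offset indicates a duplicated ring member.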
bool core::check_tx_inputs_ring_members_diff(const transaction& tx) const
{
const uint8_t version = m_blockchain_storage.get_current_hard_fork_version();
if (version >= 6)
{
for(const auto& in: tx.vin)
{
CHECKED_GET_SPECIFIC_VARIANT(in, const txin_to_key, tokey_in, false);
for (size_t n = 1; n < tokey_in.key_offsets.size(); ++n)
if (tokey_in.key_offsets[n] == 0)
return false;
}
}
return true;
}
//-----------------------------------------------------------------------------------------------
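  // Requires every key image to lie in the prime-order subgroup (l * ki must equal the identity);
  // key images carrying a torsion component are rejected.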
bool core::check_tx_inputs_keyimages_domain(const transaction& tx) const
{
std::unordered_set<crypto::key_image> ki;
for(const auto& in: tx.vin)
{
CHECKED_GET_SPECIFIC_VARIANT(in, const txin_to_key, tokey_in, false);
if (!(rct::scalarmultKey(rct::ki2rct(tokey_in.k_image), rct::curveOrder()) == rct::identity()))
return false;
}
return true;
}
//-----------------------------------------------------------------------------------------------
bool core::add_new_tx(transaction& tx, tx_verification_context& tvc, bool keeped_by_block, bool relayed, bool do_not_relay)
{
crypto::hash tx_hash = get_transaction_hash(tx);
blobdata bl;
t_serializable_object_to_blob(tx, bl);
size_t tx_weight = get_transaction_weight(tx, bl.size());
return add_new_tx(tx, tx_hash, bl, tx_weight, tvc, keeped_by_block, relayed, do_not_relay);
}
//-----------------------------------------------------------------------------------------------
size_t core::get_blockchain_total_transactions() const
{
return m_blockchain_storage.get_total_transactions();
}
//-----------------------------------------------------------------------------------------------
bool core::add_new_tx(transaction& tx, const crypto::hash& tx_hash, const cryptonote::blobdata &blob, size_t tx_weight, tx_verification_context& tvc, bool keeped_by_block, bool relayed, bool do_not_relay)
{
if(m_mempool.have_tx(tx_hash))
{
LOG_PRINT_L2("tx " << tx_hash << "already have transaction in tx_pool");
return true;
}
if(m_blockchain_storage.have_tx(tx_hash))
{
LOG_PRINT_L2("tx " << tx_hash << " already have transaction in blockchain");
return true;
}
uint8_t version = m_blockchain_storage.get_current_hard_fork_version();
return m_mempool.add_tx(tx, tx_hash, blob, tx_weight, tvc, keeped_by_block, relayed, do_not_relay, version, m_service_node_list);
}
//-----------------------------------------------------------------------------------------------
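  // Re-broadcasts any mempool transactions that are flagged as relayable but have not yet been
  // relayed, then marks them as relayed.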
bool core::relay_txpool_transactions()
{
// we attempt to relay txes that should be relayed, but were not
std::vector<std::pair<crypto::hash, cryptonote::blobdata>> txs;
if (m_mempool.get_relayable_transactions(txs) && !txs.empty())
{
cryptonote_connection_context fake_context = AUTO_VAL_INIT(fake_context);
tx_verification_context tvc = AUTO_VAL_INIT(tvc);
NOTIFY_NEW_TRANSACTIONS::request r;
for (auto it = txs.begin(); it != txs.end(); ++it)
{
r.txs.push_back(it->second);
}
get_protocol()->relay_transactions(r, fake_context);
m_mempool.set_relayed(txs);
}
return true;
}
//-----------------------------------------------------------------------------------------------
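  // No-op unless this daemon is running as a service node; otherwise builds an uptime proof from
  // the node's key, public IP and storage port and relays it to the network.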
bool core::submit_uptime_proof()
{
if (!m_service_node)
return true;
NOTIFY_UPTIME_PROOF::request req = m_service_node_list.generate_uptime_proof(m_service_node_pubkey, m_service_node_key, m_sn_public_ip, m_storage_port);
cryptonote_connection_context fake_context = AUTO_VAL_INIT(fake_context);
bool relayed = get_protocol()->relay_uptime_proof(req, fake_context);
if (relayed)
MGINFO("Submitted uptime-proof for service node (yours): " << m_service_node_pubkey);
return true;
}
//-----------------------------------------------------------------------------------------------
bool core::handle_uptime_proof(const NOTIFY_UPTIME_PROOF::request &proof)
{
return m_service_node_list.handle_uptime_proof(proof);
}
//-----------------------------------------------------------------------------------------------
void core::on_transaction_relayed(const cryptonote::blobdata& tx_blob)
{
std::vector<std::pair<crypto::hash, cryptonote::blobdata>> txs;
cryptonote::transaction tx;
crypto::hash tx_hash;
if (!parse_and_validate_tx_from_blob(tx_blob, tx, tx_hash))
{
LOG_ERROR("Failed to parse relayed transaction");
return;
}
txs.push_back(std::make_pair(tx_hash, std::move(tx_blob)));
m_mempool.set_relayed(txs);
}
//-----------------------------------------------------------------------------------------------
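  // Collects the quorum votes that are due for relaying at the current height, broadcasts them,
  // and marks them as relayed on success.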
bool core::relay_service_node_votes()
{
NOTIFY_NEW_SERVICE_NODE_VOTE::request req = {};
req.votes = m_quorum_cop.get_relayable_votes(get_current_blockchain_height());
if (req.votes.size())
{
cryptonote_connection_context fake_context = AUTO_VAL_INIT(fake_context);
if (get_protocol()->relay_service_node_votes(req, fake_context))
{
m_quorum_cop.set_votes_relayed(req.votes);
}
}
return true;
}
//-----------------------------------------------------------------------------------------------
bool core::get_block_template(block& b, const account_public_address& adr, difficulty_type& diffic, uint64_t& height, uint64_t& expected_reward, const blobdata& ex_nonce)
{
return m_blockchain_storage.create_block_template(b, adr, diffic, height, expected_reward, ex_nonce);
}
//-----------------------------------------------------------------------------------------------
bool core::get_block_template(block& b, const crypto::hash *prev_block, const account_public_address& adr, difficulty_type& diffic, uint64_t& height, uint64_t& expected_reward, const blobdata& ex_nonce)
{
return m_blockchain_storage.create_block_template(b, prev_block, adr, diffic, height, expected_reward, ex_nonce);
}
//-----------------------------------------------------------------------------------------------
bool core::find_blockchain_supplement(const std::list<crypto::hash>& qblock_ids, NOTIFY_RESPONSE_CHAIN_ENTRY::request& resp) const
{
return m_blockchain_storage.find_blockchain_supplement(qblock_ids, resp);
}
//-----------------------------------------------------------------------------------------------
bool core::find_blockchain_supplement(const uint64_t req_start_block, const std::list<crypto::hash>& qblock_ids, std::vector<std::pair<std::pair<cryptonote::blobdata, crypto::hash>, std::vector<std::pair<crypto::hash, cryptonote::blobdata> > > >& blocks, uint64_t& total_height, uint64_t& start_height, bool pruned, bool get_miner_tx_hash, size_t max_count) const
{
return m_blockchain_storage.find_blockchain_supplement(req_start_block, qblock_ids, blocks, total_height, start_height, pruned, get_miner_tx_hash, max_count);
}
//-----------------------------------------------------------------------------------------------
bool core::get_outs(const COMMAND_RPC_GET_OUTPUTS_BIN::request& req, COMMAND_RPC_GET_OUTPUTS_BIN::response& res) const
{
return m_blockchain_storage.get_outs(req, res);
}
//-----------------------------------------------------------------------------------------------
bool core::get_output_distribution(uint64_t amount, uint64_t from_height, uint64_t to_height, uint64_t &start_height, std::vector<uint64_t> &distribution, uint64_t &base) const
{
return m_blockchain_storage.get_output_distribution(amount, from_height, to_height, start_height, distribution, base);
}
//-----------------------------------------------------------------------------------------------
bool core::get_output_blacklist(std::vector<uint64_t> &blacklist) const
{
return m_blockchain_storage.get_output_blacklist(blacklist);
}
//-----------------------------------------------------------------------------------------------
bool core::get_tx_outputs_gindexs(const crypto::hash& tx_id, std::vector<uint64_t>& indexs) const
{
return m_blockchain_storage.get_tx_outputs_gindexs(tx_id, indexs);
}
//-----------------------------------------------------------------------------------------------
bool core::get_tx_outputs_gindexs(const crypto::hash& tx_id, size_t n_txes, std::vector<std::vector<uint64_t>>& indexs) const
{
return m_blockchain_storage.get_tx_outputs_gindexs(tx_id, n_txes, indexs);
}
//-----------------------------------------------------------------------------------------------
void core::pause_mine()
{
m_miner.pause();
}
//-----------------------------------------------------------------------------------------------
void core::resume_mine()
{
m_miner.resume();
}
//-----------------------------------------------------------------------------------------------
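  // Builds a block_complete_entry (the block blob plus the blobs of all of its transactions),
  // pulling the tx blobs from the mempool; throws if any referenced tx is missing from the pool.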
block_complete_entry get_block_complete_entry(block& b, tx_memory_pool &pool)
{
block_complete_entry bce;
bce.block = cryptonote::block_to_blob(b);
for (const auto &tx_hash: b.tx_hashes)
{
cryptonote::blobdata txblob;
CHECK_AND_ASSERT_THROW_MES(pool.get_transaction(tx_hash, txblob), "Transaction not found in pool");
bce.txs.push_back(txblob);
}
return bce;
}
//-----------------------------------------------------------------------------------------------
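  // Handles a block mined locally: with the miner paused, the block is packaged together with its
  // pool transactions, run through prepare/add/cleanup, and, if it was accepted onto the main
  // chain (and no reorg displaced it in the meantime), relayed to peers as a fluffy block.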
bool core::handle_block_found(block& b, block_verification_context &bvc)
{
bvc = boost::value_initialized<block_verification_context>();
std::vector<block_complete_entry> blocks;
m_miner.pause();
{
LOKI_DEFER { m_miner.resume(); };
try
{
blocks.push_back(get_block_complete_entry(b, m_mempool));
}
catch (const std::exception &e)
{
return false;
}
std::vector<block> pblocks;
if (!prepare_handle_incoming_blocks(blocks, pblocks))
{
MERROR("Block found, but failed to prepare to add");
return false;
}
add_new_block(b, bvc, nullptr /*checkpoint*/);
cleanup_handle_incoming_blocks(true);
m_miner.on_block_chain_update();
}
CHECK_AND_ASSERT_MES(!bvc.m_verifivation_failed, false, "mined block failed verification");
if(bvc.m_added_to_main_chain)
{
std::vector<crypto::hash> missed_txs;
std::vector<cryptonote::blobdata> txs;
m_blockchain_storage.get_transactions_blobs(b.tx_hashes, txs, missed_txs);
if(missed_txs.size() && m_blockchain_storage.get_block_id_by_height(get_block_height(b)) != get_block_hash(b))
{
LOG_PRINT_L1("Block found but, seems that reorganize just happened after that, do not relay this block");
return true;
}
      CHECK_AND_ASSERT_MES(txs.size() == b.tx_hashes.size() && !missed_txs.size(), false, "can't find some transactions in found block: " << get_block_hash(b) << " txs.size()=" << txs.size()
          << ", b.tx_hashes.size()=" << b.tx_hashes.size() << ", missed_txs.size()=" << missed_txs.size());
cryptonote_connection_context exclude_context = boost::value_initialized<cryptonote_connection_context>();
NOTIFY_NEW_FLUFFY_BLOCK::request arg = AUTO_VAL_INIT(arg);
arg.current_blockchain_height = m_blockchain_storage.get_current_blockchain_height();
arg.b = blocks[0];
m_pprotocol->relay_block(arg, exclude_context);
}
return true;
}
//-----------------------------------------------------------------------------------------------
void core::on_synchronized()
{
m_miner.on_synchronized();
}
//-----------------------------------------------------------------------------------------------
void core::safesyncmode(const bool onoff)
{
m_blockchain_storage.safesyncmode(onoff);
}
//-----------------------------------------------------------------------------------------------
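  // Adds a new block (optionally with a checkpoint) to the blockchain; on success, pending service
  // node votes are relayed. A temporary soft-fork rule below drops checkpoints from before
  // HF_VERSION_12_CHECKPOINTING_SOFT_FORK_HEIGHT on mainnet.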
bool core::add_new_block(const block& b, block_verification_context& bvc, checkpoint_t const *checkpoint)
{
// TODO(loki): Temporary soft-fork code can be removed
uint64_t latest_height = std::max(get_current_blockchain_height(), get_target_blockchain_height());
if (get_nettype() == cryptonote::MAINNET &&
latest_height >= HF_VERSION_12_CHECKPOINTING_SOFT_FORK_HEIGHT &&
get_block_height(b) < HF_VERSION_12_CHECKPOINTING_SOFT_FORK_HEIGHT)
{
// NOTE: After the soft fork, ignore all checkpoints from before the fork so we
// only create and enforce checkpoints from after the soft-fork.
checkpoint = nullptr;
}
bool result = m_blockchain_storage.add_new_block(b, bvc, checkpoint);
if (result)
{
// TODO(loki): PERF(loki): This causes performance problems in integration mode; in normal operation it may not be
// noticeable, but it could still bubble up and cause slowness if the timing of these calls lines up badly.
relay_service_node_votes(); // NOTE: nop if synchronising due to not accepting votes whilst syncing
}
return result;
}
//-----------------------------------------------------------------------------------------------
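// Acquire the incoming-tx lock and let the blockchain layer pre-process a batch of
// incoming blocks before they are added. Every successful call must be paired with
// cleanup_handle_incoming_blocks(), which releases the lock; on failure the lock is
// released here before returning false.
//
// Illustrative call pattern (a sketch mirroring handle_block_found() above, not a
// verbatim caller):
//
//   std::vector<block> pblocks;
//   if (prepare_handle_incoming_blocks(blocks, pblocks))
//   {
//     add_new_block(b, bvc, nullptr /*checkpoint*/);
//     cleanup_handle_incoming_blocks(true);
//   }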
bool core::prepare_handle_incoming_blocks(const std::vector<block_complete_entry> &blocks_entry, std::vector<block> &blocks)
{
m_incoming_tx_lock.lock();
if (!m_blockchain_storage.prepare_handle_incoming_blocks(blocks_entry, blocks))
{
cleanup_handle_incoming_blocks(false);
return false;
}
return true;
}
//-----------------------------------------------------------------------------------------------
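// Counterpart to prepare_handle_incoming_blocks(): flush the batched block state (the
// force_sync flag is forwarded to the blockchain layer), swallow any exception it throws,
// and release the incoming-tx lock.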
bool core::cleanup_handle_incoming_blocks(bool force_sync)
{
bool success = false;
try {
success = m_blockchain_storage.cleanup_handle_incoming_blocks(force_sync);
}
catch (...) {}
m_incoming_tx_lock.unlock();
return success;
}
//-----------------------------------------------------------------------------------------------
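// Handle a block blob received from the network: size-check it, parse it if a pre-parsed
// block was not supplied, add it to the chain, and optionally refresh the miner's block
// template when the block extends the main chain.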
bool core::handle_incoming_block(const blobdata& block_blob, const block *b, block_verification_context& bvc, checkpoint_t const *checkpoint, bool update_miner_blocktemplate)
{
TRY_ENTRY();
bvc = boost::value_initialized<block_verification_context>();
if (!check_incoming_block_size(block_blob))
{
bvc.m_verifivation_failed = true;
return false;
}
if (((size_t)-1) <= 0xffffffff && block_blob.size() >= 0x3fffffff)
MWARNING("This block's size is " << block_blob.size() << ", closing on the 32 bit limit");
CHECK_AND_ASSERT_MES(update_checkpoints_from_json_file(), false, "One or more checkpoints loaded from json conflicted with existing checkpoints.");
block lb;
if (!b)
{
crypto::hash block_hash;
if(!parse_and_validate_block_from_blob(block_blob, lb, block_hash))
{
LOG_PRINT_L1("Failed to parse and validate new block");
bvc.m_verifivation_failed = true;
return false;
}
b = &lb;
}
add_new_block(*b, bvc, checkpoint);
if(update_miner_blocktemplate && bvc.m_added_to_main_chain)
m_miner.on_block_chain_update();
return true;
CATCH_ENTRY_L0("core::handle_incoming_block()", false);
}
//-----------------------------------------------------------------------------------------------
// Used by the RPC server to check the size of an incoming
// block_blob
bool core::check_incoming_block_size(const blobdata& block_blob) const
{
// note: we assume block weight is always >= block blob size, so we check incoming
// blob size against the block weight limit, which acts as a sanity check without
// having to parse/weigh first; in fact, since the block blob is the block header
// plus the tx hashes, the weight will typically be much larger than the blob size
if(block_blob.size() > m_blockchain_storage.get_current_cumulative_block_weight_limit() + BLOCK_SIZE_SANITY_LEEWAY)
{
LOG_PRINT_L1("WRONG BLOCK BLOB, sanity check failed on size " << block_blob.size() << ", rejected");
return false;
}
return true;
}
//-----------------------------------------------------------------------------------------------
crypto::hash core::get_tail_id() const
{
return m_blockchain_storage.get_tail_id();
}
//-----------------------------------------------------------------------------------------------
difficulty_type core::get_block_cumulative_difficulty(uint64_t height) const
{
return m_blockchain_storage.get_db().get_block_cumulative_difficulty(height);
}
//-----------------------------------------------------------------------------------------------
size_t core::get_pool_transactions_count() const
{
return m_mempool.get_transactions_count();
}
//-----------------------------------------------------------------------------------------------
bool core::have_block(const crypto::hash& id) const
{
return m_blockchain_storage.have_block(id);
}
//-----------------------------------------------------------------------------------------------
bool core::parse_tx_from_blob(transaction& tx, crypto::hash& tx_hash, const blobdata& blob) const
{
return parse_and_validate_tx_from_blob(blob, tx, tx_hash);
}
//-----------------------------------------------------------------------------------------------
bool core::check_tx_syntax(const transaction& tx) const
{
return true;
}
//-----------------------------------------------------------------------------------------------
bool core::get_pool_transactions(std::vector<transaction>& txs, bool include_sensitive_data) const
{
m_mempool.get_transactions(txs, include_sensitive_data);
return true;
}
//-----------------------------------------------------------------------------------------------
bool core::get_pool_transaction_hashes(std::vector<crypto::hash>& txs, bool include_sensitive_data) const
{
m_mempool.get_transaction_hashes(txs, include_sensitive_data);
return true;
}
//-----------------------------------------------------------------------------------------------
bool core::get_pool_transaction_stats(struct txpool_stats& stats, bool include_sensitive_data) const
{
m_mempool.get_transaction_stats(stats, include_sensitive_data);
return true;
}
//-----------------------------------------------------------------------------------------------
bool core::get_pool_transaction(const crypto::hash &id, cryptonote::blobdata& tx) const
{
return m_mempool.get_transaction(id, tx);
}
//-----------------------------------------------------------------------------------------------
bool core::pool_has_tx(const crypto::hash &id) const
{
return m_mempool.have_tx(id);
}
//-----------------------------------------------------------------------------------------------
bool core::get_pool_transactions_and_spent_keys_info(std::vector<tx_info>& tx_infos, std::vector<spent_key_image_info>& key_image_infos, bool include_sensitive_data) const
{
return m_mempool.get_transactions_and_spent_keys_info(tx_infos, key_image_infos, include_sensitive_data);
}
//-----------------------------------------------------------------------------------------------
bool core::get_pool_for_rpc(std::vector<cryptonote::rpc::tx_in_pool>& tx_infos, cryptonote::rpc::key_images_with_tx_hashes& key_image_infos) const
{
return m_mempool.get_pool_for_rpc(tx_infos, key_image_infos);
}
//-----------------------------------------------------------------------------------------------
bool core::get_short_chain_history(std::list<crypto::hash>& ids) const
{
return m_blockchain_storage.get_short_chain_history(ids);
}
//-----------------------------------------------------------------------------------------------
bool core::handle_get_objects(NOTIFY_REQUEST_GET_OBJECTS::request& arg, NOTIFY_RESPONSE_GET_OBJECTS::request& rsp, cryptonote_connection_context& context)
{
return m_blockchain_storage.handle_get_objects(arg, rsp);
}
//-----------------------------------------------------------------------------------------------
crypto::hash core::get_block_id_by_height(uint64_t height) const
{
return m_blockchain_storage.get_block_id_by_height(height);
}
//-----------------------------------------------------------------------------------------------
bool core::get_block_by_hash(const crypto::hash &h, block &blk, bool *orphan) const
{
return m_blockchain_storage.get_block_by_hash(h, blk, orphan);
}
//-----------------------------------------------------------------------------------------------
std::string core::print_pool(bool short_format) const
{
return m_mempool.print_pool(short_format);
}
//-----------------------------------------------------------------------------------------------
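// Returns false (and logs a warning) when the last storage server ping is older than
// STORAGE_SERVER_PING_LIFETIME seconds; true otherwise.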
static bool check_storage_server_ping(time_t last_time_storage_server_pinged)
{
const auto elapsed = std::time(nullptr) - last_time_storage_server_pinged;
if (elapsed > STORAGE_SERVER_PING_LIFETIME)
{
MWARNING("Have not heard from the storage server since at least: "
<< tools::get_human_readable_timespan(std::chrono::seconds(last_time_storage_server_pinged)));
return false;
}
return true;
}
//-----------------------------------------------------------------------------------------------
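// Submit an uptime proof for this service node once it has been registered for at least
// one block and the last proof is older than UPTIME_PROOF_FREQUENCY_IN_SECONDS. From hard
// fork 12 onwards a recent storage server ping is required before a proof is submitted;
// before that, a missing ping only produces a warning.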
void core::do_uptime_proof_call()
{
// wait one block before starting uptime proofs.
std::vector<service_nodes::service_node_pubkey_info> const states = get_service_node_list_state({ m_service_node_pubkey });
if (!states.empty() && (states[0].info.registration_height + 1) < get_current_blockchain_height())
{
// Code snippet from Github @Jagerman
service_nodes::service_node_info const &info = states[0].info;
m_check_uptime_proof_interval.do_call([&info, this]() {
if (info.proof.timestamp <= static_cast<uint64_t>(time(nullptr) - UPTIME_PROOF_FREQUENCY_IN_SECONDS))
{
uint8_t hf_version = get_blockchain_storage().get_current_hard_fork_version();
if (!check_storage_server_ping(m_last_storage_server_ping))
{
if (hf_version >= cryptonote::network_version_12_checkpointing)
{
MGINFO_RED(
"Failed to submit uptime proof: have not heard from the storage server recently. Make sure that it "
"is running! It is required to run alongside the Loki daemon after hard-fork 12");
return true;
}
else
{
MGINFO_RED(
"We have not heard from the storage server recently. Make sure that it is running! After hard fork "
"12, this Service Node will stop submitting uptime proofs if it does not hear from the Loki Storage "
"Server.");
}
}
this->submit_uptime_proof();
}
return true;
});
}
else
{
// Reset the interval so that we're ready when we register; or, if we get deregistered, this primes us for re-registration in the same session
m_check_uptime_proof_interval = epee::math_helper::once_a_time_seconds<UPTIME_PROOF_BUFFER_IN_SECONDS, true /*start_immediately*/>();
}
}
//-----------------------------------------------------------------------------------------------
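// Main idle loop: shows the one-time startup banner, then drives the periodic jobs
// (fork-time check, txpool and service node vote relaying, disk space and block rate
// checks, uptime proofs when running as a service node, blockchain pruning) and forwards
// on_idle() to the miner and mempool.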
bool core::on_idle()
{
if(!m_starter_message_showed)
{
std::string main_message;
if (m_offline)
main_message = "The daemon is running offline and will not attempt to sync to the Loki network.";
else
main_message = "The daemon will start synchronizing with the network. This may take a long time to complete.";
MGINFO_YELLOW(ENDL << "**********************************************************************" << ENDL
<< main_message << ENDL
<< ENDL
<< "You can set the level of process detailization through \"set_log <level|categories>\" command," << ENDL
<< "where <level> is between 0 (no details) and 4 (very verbose), or custom category based levels (eg, *:WARNING)." << ENDL
<< ENDL
<< "Use the \"help\" command to see the list of available commands." << ENDL
<< "Use \"help <command>\" to see a command's documentation." << ENDL
<< "**********************************************************************" << ENDL);
m_starter_message_showed = true;
}
m_fork_moaner.do_call(boost::bind(&core::check_fork_time, this));
m_txpool_auto_relayer.do_call(boost::bind(&core::relay_txpool_transactions, this));
m_service_node_vote_relayer.do_call(boost::bind(&core::relay_service_node_votes, this));
// m_check_updates_interval.do_call(boost::bind(&core::check_updates, this));
m_check_disk_space_interval.do_call(boost::bind(&core::check_disk_space, this));
m_block_rate_interval.do_call(boost::bind(&core::check_block_rate, this));
time_t const lifetime = time(nullptr) - get_start_time();
if (m_service_node && lifetime > DIFFICULTY_TARGET_V2) // Give us some time to connect to peers before sending uptimes
{
do_uptime_proof_call();
}
m_blockchain_pruning_interval.do_call(boost::bind(&core::update_blockchain_pruning, this));
m_miner.on_idle();
m_mempool.on_idle();
#if defined(LOKI_ENABLE_INTEGRATION_TEST_HOOKS)
loki::integration_test.core_is_idle = true;
#endif
return true;
}
//-----------------------------------------------------------------------------------------------
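// Warn the operator when the hard fork state reported by the blockchain indicates that
// this daemon is, or will soon be, out of date; does nothing on FAKECHAIN.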
bool core::check_fork_time()
{
if (m_nettype == FAKECHAIN)
return true;
HardFork::State state = m_blockchain_storage.get_hard_fork_state();
const el::Level level = el::Level::Warning;
switch (state) {
case HardFork::LikelyForked:
MCLOG_RED(level, "global", "**********************************************************************");
MCLOG_RED(level, "global", "Last scheduled hard fork is too far in the past.");
MCLOG_RED(level, "global", "We are most likely forked from the network. Daemon update needed now.");
MCLOG_RED(level, "global", "**********************************************************************");
break;
case HardFork::UpdateNeeded:
MCLOG_RED(level, "global", "**********************************************************************");
MCLOG_RED(level, "global", "Last scheduled hard fork time shows a daemon update is needed soon.");
MCLOG_RED(level, "global", "**********************************************************************");
break;
default:
break;
}
return true;
}
//-----------------------------------------------------------------------------------------------
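// Hard fork version accessors below are thin forwarders: the actual fork
// schedule and voting state live in the Blockchain/HardFork objects owned by
// m_blockchain_storage.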
uint8_t core::get_ideal_hard_fork_version() const
{
return get_blockchain_storage().get_ideal_hard_fork_version();
}
//-----------------------------------------------------------------------------------------------
uint8_t core::get_ideal_hard_fork_version(uint64_t height) const
{
return get_blockchain_storage().get_ideal_hard_fork_version(height);
}
//-----------------------------------------------------------------------------------------------
uint8_t core::get_hard_fork_version(uint64_t height) const
{
return get_blockchain_storage().get_hard_fork_version(height);
}
//-----------------------------------------------------------------------------------------------
uint64_t core::get_earliest_ideal_height_for_version(uint8_t version) const
{
return get_blockchain_storage().get_earliest_ideal_height_for_version(version);
}
//-----------------------------------------------------------------------------------------------
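// check_updates: queries for a newer release of this software/build tag
// (skipped when offline or when updates are disabled). Depending on
// check_updates_level it either only logs that a new version is available
// (notify), or additionally downloads it next to the running binary into a
// ".tmp" file, verifies its SHA-256 hash against the advertised one and
// renames it into place (download). Automatic installation is not implemented.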
bool core::check_updates()
{
static const char software[] = "loki";
#ifdef BUILD_TAG
static const char buildtag[] = BOOST_PP_STRINGIZE(BUILD_TAG);
static const char subdir[] = "cli"; // because it can never be simple
#else
static const char buildtag[] = "source";
static const char subdir[] = "source"; // because it can never be simple
#endif
if (m_offline)
return true;
if (check_updates_level == UPDATES_DISABLED)
return true;
std::string version, hash;
MCDEBUG("updates", "Checking for a new " << software << " version for " << buildtag);
if (!tools::check_updates(software, buildtag, version, hash))
return false;
if (tools::vercmp(version.c_str(), LOKI_VERSION) <= 0)
{
m_update_available = false;
return true;
}
std::string url = tools::get_update_url(software, subdir, buildtag, version, true);
MCLOG_CYAN(el::Level::Info, "global", "Version " << version << " of " << software << " for " << buildtag << " is available: " << url << ", SHA256 hash " << hash);
m_update_available = true;
if (check_updates_level == UPDATES_NOTIFY)
return true;
url = tools::get_update_url(software, subdir, buildtag, version, false);
std::string filename;
const char *slash = strrchr(url.c_str(), '/');
if (slash)
filename = slash + 1;
else
filename = std::string(software) + "-update-" + version;
boost::filesystem::path path(epee::string_tools::get_current_module_folder());
path /= filename;
boost::unique_lock<boost::mutex> lock(m_update_mutex);
if (m_update_download != 0)
{
MCDEBUG("updates", "Already downloading update");
return true;
}
crypto::hash file_hash;
if (!tools::sha256sum(path.string(), file_hash) || (hash != epee::string_tools::pod_to_hex(file_hash)))
{
MCDEBUG("updates", "We don't have that file already, downloading");
const std::string tmppath = path.string() + ".tmp";
if (epee::file_io_utils::is_file_exist(tmppath))
{
MCDEBUG("updates", "We have part of the file already, resuming download");
}
m_last_update_length = 0;
m_update_download = tools::download_async(tmppath, url, [this, hash, path](const std::string &tmppath, const std::string &uri, bool success) {
bool remove = false, good = true;
if (success)
{
crypto::hash file_hash;
if (!tools::sha256sum(tmppath, file_hash))
{
MCERROR("updates", "Failed to hash " << tmppath);
remove = true;
good = false;
}
else if (hash != epee::string_tools::pod_to_hex(file_hash))
{
MCERROR("updates", "Download from " << uri << " does not match the expected hash");
remove = true;
good = false;
}
}
else
{
MCERROR("updates", "Failed to download " << uri);
good = false;
}
boost::unique_lock<boost::mutex> lock(m_update_mutex);
m_update_download = 0;
if (success && !remove)
{
std::error_code e = tools::replace_file(tmppath, path.string());
if (e)
{
MCERROR("updates", "Failed to rename downloaded file");
good = false;
}
}
else if (remove)
{
if (!boost::filesystem::remove(tmppath))
{
MCERROR("updates", "Failed to remove invalid downloaded file");
good = false;
}
}
if (good)
MCLOG_CYAN(el::Level::Info, "updates", "New version downloaded to " << path.string());
}, [this](const std::string &path, const std::string &uri, size_t length, ssize_t content_length) {
if (length >= m_last_update_length + 1024 * 1024 * 10)
{
m_last_update_length = length;
MCDEBUG("updates", "Downloaded " << length << "/" << (content_length ? std::to_string(content_length) : "unknown"));
}
return true;
});
}
else
{
MCDEBUG("updates", "We already have " << path << " with expected hash");
}
lock.unlock();
if (check_updates_level == UPDATES_DOWNLOAD)
return true;
MCERROR("updates", "Download/update not implemented yet");
return true;
}
//-----------------------------------------------------------------------------------------------
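// check_disk_space: warns (in red, on the user-facing "global" category) when
// the data directory's free space drops below 1 GB. The check is purely
// advisory; it always returns true so callers keep going.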
bool core::check_disk_space()
{
uint64_t free_space = get_free_space();
if (free_space < 1ull * 1024 * 1024 * 1024) // 1 GB
{
const el::Level level = el::Level::Warning;
MCLOG_RED(level, "global", "Free space is below 1 GB on " << m_config_folder);
}
return true;
}
//-----------------------------------------------------------------------------------------------
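// Floating-point factorial used by the Poisson helpers below. It is exact
// only for small n (up to about 18!) and approximate beyond that, which is
// fine for the rough probabilities computed here.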
double factorial(unsigned int n)
{
if (n <= 1)
return 1.0;
double f = n;
while (n-- > 1)
f *= n;
return f;
}
//-----------------------------------------------------------------------------------------------
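// Poisson probability of seeing exactly `blocks` blocks in a window where
// `expected` blocks are expected on average:
//   P(X = k) = expected^k * e^(-expected) / k!
// For example, observing only 10 blocks in a window where 30 are expected
// gives P(X = 10) of roughly 1.5e-5.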
static double probability1(unsigned int blocks, unsigned int expected)
{
// https://www.umass.edu/wsp/resources/poisson/#computing
return pow(expected, blocks) / (factorial(blocks) * exp(expected));
}
//-----------------------------------------------------------------------------------------------
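// One-sided Poisson tail probability: when blocks <= expected this is
// P(X <= blocks); otherwise it approximates P(X >= blocks), truncating the
// sum at 3 * expected. A very small result means the observed block count is
// highly unlikely if blocks really arrive at the target rate.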
static double probability(unsigned int blocks, unsigned int expected)
{
double p = 0.0;
if (blocks <= expected)
{
for (unsigned int b = 0; b <= blocks; ++b)
p += probability1(b, expected);
}
else if (blocks > expected)
{
for (unsigned int b = blocks; b <= expected * 3 /* close enough */; ++b)
p += probability1(b, expected);
}
return p;
}
//-----------------------------------------------------------------------------------------------
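// check_block_rate: sanity check on how many blocks arrived recently. For
// windows of 90, 60, 30, 20 and 10 minutes, count how many of the last 60
// block timestamps fall inside the window, compute the Poisson tail
// probability against the number of blocks expected at the difficulty target,
// and warn once the probability drops below a threshold chosen to give
// roughly one false positive per 10 days (864000 seconds) of target-spaced
// blocks. Only the first window that trips the threshold is reported. If a
// block rate notifier is configured, it is invoked with the window length in
// minutes (%t), the observed block count (%b) and the expected count (%e).
// The check is skipped while offline, syncing, or on a fake chain.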
bool core::check_block_rate()
{
if (m_offline || m_nettype == FAKECHAIN || m_target_blockchain_height > get_current_blockchain_height())
{
MDEBUG("Not checking block rate, offline or syncing");
return true;
}
#if defined(LOKI_ENABLE_INTEGRATION_TEST_HOOKS)
MDEBUG("Not checking block rate, integration test mode");
return true;
#endif
static constexpr double threshold = 1. / (864000 / DIFFICULTY_TARGET_V2); // one false positive every 10 days
const time_t now = time(NULL);
const std::vector<time_t> timestamps = m_blockchain_storage.get_last_block_timestamps(60);
static const unsigned int seconds[] = { 5400, 3600, 1800, 1200, 600 };
for (size_t n = 0; n < sizeof(seconds)/sizeof(seconds[0]); ++n)
{
unsigned int b = 0;
const time_t time_boundary = now - static_cast<time_t>(seconds[n]);
for (time_t ts: timestamps) b += ts >= time_boundary;
const double p = probability(b, seconds[n] / DIFFICULTY_TARGET_V2);
MDEBUG("blocks in the last " << seconds[n] / 60 << " minutes: " << b << " (probability " << p << ")");
if (p < threshold)
{
MWARNING("There were " << b << " blocks in the last " << seconds[n] / 60 << " minutes, there might be large hash rate changes, or we might be partitioned, cut off from the Loki network or under attack. Or it could be just sheer bad luck.");
std::shared_ptr<tools::Notify> block_rate_notify = m_block_rate_notify;
if (block_rate_notify)
{
auto expected = seconds[n] / DIFFICULTY_TARGET_V2;
block_rate_notify->notify("%t", std::to_string(seconds[n] / 60).c_str(), "%b", std::to_string(b).c_str(), "%e", std::to_string(expected).c_str(), NULL);
}
break; // no need to look further
}
}
return true;
}
//-----------------------------------------------------------------------------------------------
bool core::update_blockchain_pruning()
{
return m_blockchain_storage.update_blockchain_pruning();
}
//-----------------------------------------------------------------------------------------------
bool core::check_blockchain_pruning()
{
return m_blockchain_storage.check_blockchain_pruning();
}
//-----------------------------------------------------------------------------------------------
void core::set_target_blockchain_height(uint64_t target_blockchain_height)
{
m_target_blockchain_height = target_blockchain_height;
}
//-----------------------------------------------------------------------------------------------
uint64_t core::get_target_blockchain_height() const
{
return m_target_blockchain_height;
}
//-----------------------------------------------------------------------------------------------
uint64_t core::prevalidate_block_hashes(uint64_t height, const std::vector<crypto::hash> &hashes)
{
return get_blockchain_storage().prevalidate_block_hashes(height, hashes);
}
//-----------------------------------------------------------------------------------------------
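// Free space remaining on the volume holding the data directory
// (m_config_folder), as reported by boost::filesystem::space().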
uint64_t core::get_free_space() const
{
boost::filesystem::path path(m_config_folder);
boost::filesystem::space_info si = boost::filesystem::space(path);
return si.available;
}
//-----------------------------------------------------------------------------------------------
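// Quorum lookup: forwarded to the service node list, which keeps the quorum
// state for recent heights; the include_old flag is passed through unchanged.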
std::shared_ptr<const service_nodes::testing_quorum> core::get_testing_quorum(service_nodes::quorum_type type, uint64_t height, bool include_old) const
{
return m_service_node_list.get_testing_quorum(type, height, include_old);
}
//-----------------------------------------------------------------------------------------------
bool core::is_service_node(const crypto::public_key& pubkey, bool require_active) const
{
return m_service_node_list.is_service_node(pubkey, require_active);
}
//-----------------------------------------------------------------------------------------------
const std::vector<service_nodes::key_image_blacklist_entry> &core::get_service_node_blacklisted_key_images() const
{
const auto &result = m_service_node_list.get_blacklisted_key_images();
return result;
}
//-----------------------------------------------------------------------------------------------
std::vector<service_nodes::service_node_pubkey_info> core::get_service_node_list_state(const std::vector<crypto::public_key> &service_node_pubkeys) const
{
std::vector<service_nodes::service_node_pubkey_info> result = m_service_node_list.get_service_node_list_state(service_node_pubkeys);
return result;
}
//-----------------------------------------------------------------------------------------------
bool core::add_service_node_vote(const service_nodes::quorum_vote_t& vote, vote_verification_context &vvc)
{
bool result = m_quorum_cop.handle_vote(vote, vvc);
return result;
}
//-----------------------------------------------------------------------------------------------
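// Returns true and fills in the keypair only when this daemon is running as
// a service node; otherwise the output arguments are left untouched.
// Hypothetical caller sketch:
//   crypto::public_key pub; crypto::secret_key sec;
//   if (core.get_service_node_keys(pub, sec))
//     /* sign an uptime proof or vote with sec */;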
bool core::get_service_node_keys(crypto::public_key &pub_key, crypto::secret_key &sec_key) const
{
if (m_service_node)
{
pub_key = m_service_node_pubkey;
sec_key = m_service_node_key;
}
return m_service_node;
}
//-----------------------------------------------------------------------------------------------
uint32_t core::get_blockchain_pruning_seed() const
{
return get_blockchain_storage().get_blockchain_pruning_seed();
}
//-----------------------------------------------------------------------------------------------
bool core::prune_blockchain(uint32_t pruning_seed)
{
return get_blockchain_storage().prune_blockchain(pruning_seed);
}
//-----------------------------------------------------------------------------------------------
void core::get_all_service_nodes_public_keys(std::vector<crypto::public_key>& keys, bool active_nodes_only) const
{
m_service_node_list.get_all_service_nodes_public_keys(keys, active_nodes_only);
}
//-----------------------------------------------------------------------------------------------
std::time_t core::get_start_time() const
{
return start_time;
}
//-----------------------------------------------------------------------------------------------
void core::graceful_exit()
{
raise(SIGTERM);
}
}