// Copyright (c) 2014-2019, The Monero Project
// Copyright (c) 2018-2019, The Loki Project
//
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are
// permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of
// conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list
// of conditions and the following disclaimer in the documentation and/or other
// materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be
// used to endorse or promote products derived from this software without specific
// prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Parts of this file are originally copyright (c) 2012-2013 The Cryptonote developers
// IP blocking adapted from Boolberry
#include <algorithm>
#include <optional>
#include <boost/uuid/uuid_io.hpp>
#include <atomic>
#include <functional>
#include <limits>
#include <memory>
#include <tuple>
#include <vector>
#include "cryptonote_config.h"
#include "version.h"
#include "epee/string_tools.h"
#include "common/file.h"
#include "common/dns_utils.h"
#include "common/pruning.h"
#include "net/error.h"
#include "common/periodic_task.h"
#include "epee/misc_log_ex.h"
#include "p2p_protocol_defs.h"
#include "epee/net/local_ip.h"
#include "crypto/crypto.h"
#include "epee/storages/levin_abstract_invoke2.h"
#include "cryptonote_core/cryptonote_core.h"
#include "net/parse.h"
#ifndef WITHOUT_MINIUPNPC
#include <miniupnp/miniupnpc/miniupnpc.h>
#include <miniupnp/miniupnpc/upnpcommands.h>
#include <miniupnp/miniupnpc/upnperrors.h>
#endif
#undef OXEN_DEFAULT_LOG_CATEGORY
#define OXEN_DEFAULT_LOG_CATEGORY "net.p2p"
#define NET_MAKE_IP(b1,b2,b3,b4) ((LPARAM)(((DWORD)(b1)<<24)+((DWORD)(b2)<<16)+((DWORD)(b3)<<8)+((DWORD)(b4))))
#define MIN_WANTED_SEED_NODES 12
namespace nodetool
{
template<class t_payload_net_handler>
node_server<t_payload_net_handler>::~node_server()
{
// The tcp server uses its io_service in the destructor, and every zone
// shares the io_service of the public zone, so tear down the non-public
// zones first.
for (auto current = m_network_zones.begin(); current != m_network_zones.end(); /* below */)
{
if (current->first != epee::net_utils::zone::public_)
current = m_network_zones.erase(current);
else
++current;
}
}
//-----------------------------------------------------------------------------------
inline bool append_net_address(std::vector<epee::net_utils::network_address> & seed_nodes, std::string const & addr, uint16_t default_port);
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::init_options(boost::program_options::options_description& desc)
{
command_line::add_arg(desc, arg_p2p_bind_ip);
command_line::add_arg(desc, arg_p2p_bind_ipv6_address);
command_line::add_arg(desc, arg_p2p_bind_port, false);
command_line::add_arg(desc, arg_p2p_bind_port_ipv6, false);
command_line::add_arg(desc, arg_p2p_use_ipv6);
command_line::add_arg(desc, arg_p2p_ignore_ipv4);
command_line::add_arg(desc, arg_p2p_external_port);
command_line::add_arg(desc, arg_p2p_allow_local_ip);
command_line::add_arg(desc, arg_p2p_add_peer);
command_line::add_arg(desc, arg_p2p_add_priority_node);
command_line::add_arg(desc, arg_p2p_add_exclusive_node);
command_line::add_arg(desc, arg_p2p_seed_node);
command_line::add_arg(desc, arg_tx_proxy);
command_line::add_arg(desc, arg_anonymous_inbound);
command_line::add_arg(desc, arg_p2p_hide_my_port);
command_line::add_arg(desc, arg_no_sync);
command_line::add_arg(desc, arg_no_igd);
command_line::add_arg(desc, arg_igd);
command_line::add_arg(desc, arg_out_peers);
command_line::add_arg(desc, arg_in_peers);
command_line::add_arg(desc, arg_tos_flag);
command_line::add_arg(desc, arg_limit_rate_up);
command_line::add_arg(desc, arg_limit_rate_down);
command_line::add_arg(desc, arg_limit_rate);
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::init_config()
{
TRY_ENTRY();
auto storage = peerlist_storage::open(m_config_folder / P2P_NET_DATA_FILENAME);
if (storage)
m_peerlist_storage = std::move(*storage);
m_network_zones[epee::net_utils::zone::public_].m_config.m_support_flags = P2P_SUPPORT_FLAGS;
m_first_connection_maker_call = true;
CATCH_ENTRY_L0("node_server::init_config", false);
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::for_each_connection(std::function<bool(typename t_payload_net_handler::connection_context&, peerid_type, uint32_t)> f)
{
for(auto& zone : m_network_zones)
{
zone.second.m_net_server.get_config_object().foreach_connection([&](p2p_connection_context& cntx){
return f(cntx, cntx.peer_id, cntx.support_flags);
});
}
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::for_connection(const boost::uuids::uuid &connection_id, std::function<bool(typename t_payload_net_handler::connection_context&, peerid_type, uint32_t)> f)
{
for(auto& zone : m_network_zones)
{
const bool result = zone.second.m_net_server.get_config_object().for_connection(connection_id, [&](p2p_connection_context& cntx){
return f(cntx, cntx.peer_id, cntx.support_flags);
});
if (result)
return true;
}
return false;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::is_remote_host_allowed(const epee::net_utils::network_address &address, time_t *t)
{
std::unique_lock lock{m_blocked_hosts_lock};
const time_t now = time(nullptr);
// look in the hosts list
auto it = m_blocked_hosts.find(address.host_str());
if (it != m_blocked_hosts.end())
{
if (now >= it->second)
{
m_blocked_hosts.erase(it);
MCLOG_CYAN(el::Level::Info, "global", "Host " << address.host_str() << " unblocked.");
it = m_blocked_hosts.end();
}
else
{
if (t)
*t = it->second - now;
return false;
}
}
// manually loop in subnets
if (address.get_type_id() == epee::net_utils::address_type::ipv4)
{
auto ipv4_address = address.template as<epee::net_utils::ipv4_network_address>();
for (auto it = m_blocked_subnets.begin(); it != m_blocked_subnets.end(); )
{
if (now >= it->second)
{
// log before erasing: erase() invalidates the entry being removed
MCLOG_CYAN(el::Level::Info, "global", "Subnet " << it->first.host_str() << " unblocked.");
it = m_blocked_subnets.erase(it);
continue;
}
if (it->first.matches(ipv4_address))
{
if (t)
*t = it->second - now;
return false;
}
++it;
}
}
// not found in hosts or subnets, allowed
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::block_host(const epee::net_utils::network_address &addr, time_t seconds)
{
if(!addr.is_blockable())
return false;
const time_t now = time(nullptr);
std::unique_lock lock{m_blocked_hosts_lock};
time_t limit;
if (now > std::numeric_limits<time_t>::max() - seconds)
limit = std::numeric_limits<time_t>::max();
else
limit = now + seconds;
m_blocked_hosts[addr.host_str()] = limit;
// Drop any connections to that address. Only the zone the connection
// belongs to should need checking, but sweep every zone to be safe.
std::vector<boost::uuids::uuid> conns;
for(auto& zone : m_network_zones)
{
zone.second.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if (cntxt.m_remote_address.is_same_host(addr))
{
conns.push_back(cntxt.m_connection_id);
}
return true;
});
for (const auto &c: conns)
zone.second.m_net_server.get_config_object().close(c);
conns.clear();
}
MCLOG_CYAN(el::Level::Info, "global", "Host " << addr.host_str() << " blocked.");
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::unblock_host(const epee::net_utils::network_address &address)
{
std::unique_lock lock{m_blocked_hosts_lock};
auto i = m_blocked_hosts.find(address.host_str());
if (i == m_blocked_hosts.end())
return false;
m_blocked_hosts.erase(i);
MCLOG_CYAN(el::Level::Info, "global", "Host " << address.host_str() << " unblocked.");
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::block_subnet(const epee::net_utils::ipv4_network_subnet &subnet, time_t seconds)
{
const time_t now = time(nullptr);
std::unique_lock lock{m_blocked_hosts_lock};
time_t limit;
if (now > std::numeric_limits<time_t>::max() - seconds)
limit = std::numeric_limits<time_t>::max();
else
limit = now + seconds;
m_blocked_subnets[subnet] = limit;
// Drop any connections to that subnet. Only the zone the connection
// belongs to should need checking, but sweep every zone to be safe.
std::vector<boost::uuids::uuid> conns;
for(auto& zone : m_network_zones)
{
zone.second.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if (cntxt.m_remote_address.get_type_id() != epee::net_utils::ipv4_network_address::get_type_id())
return true;
auto ipv4_address = cntxt.m_remote_address.template as<epee::net_utils::ipv4_network_address>();
if (subnet.matches(ipv4_address))
{
conns.push_back(cntxt.m_connection_id);
}
return true;
});
for (const auto &c: conns)
zone.second.m_net_server.get_config_object().close(c);
conns.clear();
}
MCLOG_CYAN(el::Level::Info, "global", "Subnet " << subnet.host_str() << " blocked.");
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::unblock_subnet(const epee::net_utils::ipv4_network_subnet &subnet)
{
std::unique_lock lock{m_blocked_hosts_lock};
auto i = m_blocked_subnets.find(subnet);
if (i == m_blocked_subnets.end())
return false;
m_blocked_subnets.erase(i);
MCLOG_CYAN(el::Level::Info, "global", "Subnet " << subnet.host_str() << " unblocked.");
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::add_host_fail(const epee::net_utils::network_address &address)
{
if(!address.is_blockable())
return false;
std::lock_guard lock{m_host_fails_score_lock};
uint64_t fails = ++m_host_fails_score[address.host_str()];
MDEBUG("Host " << address.host_str() << " fail score=" << fails);
if(fails > P2P_IP_FAILS_BEFORE_BLOCK)
{
auto it = m_host_fails_score.find(address.host_str());
CHECK_AND_ASSERT_MES(it != m_host_fails_score.end(), false, "internal error");
it->second = P2P_IP_FAILS_BEFORE_BLOCK/2;
block_host(address);
}
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::handle_command_line(
const boost::program_options::variables_map& vm
)
{
bool testnet = command_line::get_arg(vm, cryptonote::arg_testnet_on);
bool devnet = command_line::get_arg(vm, cryptonote::arg_devnet_on);
bool fakenet = command_line::get_arg(vm, cryptonote::arg_regtest_on);
m_nettype =
testnet ? cryptonote::TESTNET :
devnet ? cryptonote::DEVNET :
fakenet ? cryptonote::FAKECHAIN :
cryptonote::MAINNET;
network_zone& public_zone = m_network_zones[epee::net_utils::zone::public_];
public_zone.m_connect = &public_connect;
public_zone.m_bind_ip = command_line::get_arg(vm, arg_p2p_bind_ip);
public_zone.m_bind_ipv6_address = command_line::get_arg(vm, arg_p2p_bind_ipv6_address);
public_zone.m_port = command_line::get_arg(vm, arg_p2p_bind_port);
public_zone.m_port_ipv6 = command_line::get_arg(vm, arg_p2p_bind_port_ipv6);
public_zone.m_can_pingback = true;
m_external_port = command_line::get_arg(vm, arg_p2p_external_port);
m_allow_local_ip = command_line::get_arg(vm, arg_p2p_allow_local_ip);
const bool has_no_igd = command_line::get_arg(vm, arg_no_igd);
const std::string sigd = command_line::get_arg(vm, arg_igd);
if (sigd == "enabled")
{
if (has_no_igd)
{
MFATAL("Cannot have both --" << arg_no_igd.name << " and --" << arg_igd.name << " enabled");
return false;
}
m_igd = igd;
}
else if (sigd == "disabled")
{
m_igd = no_igd;
}
else if (sigd == "delayed")
{
if (has_no_igd && !command_line::is_arg_defaulted(vm, arg_igd))
{
MFATAL("Cannot have both --" << arg_no_igd.name << " and --" << arg_igd.name << " delayed");
return false;
}
m_igd = has_no_igd ? no_igd : delayed_igd;
}
else
{
MFATAL("Invalid value for --" << arg_igd.name << ", expected enabled, disabled or delayed");
return false;
}
m_offline = command_line::get_arg(vm, cryptonote::arg_offline);
m_use_ipv6 = command_line::get_arg(vm, arg_p2p_use_ipv6);
m_require_ipv4 = !command_line::get_arg(vm, arg_p2p_ignore_ipv4);
public_zone.m_notifier = cryptonote::levin::notify{
public_zone.m_net_server.get_io_service(), public_zone.m_net_server.get_config_shared(), {}, true
};
if (command_line::has_arg(vm, arg_p2p_add_peer))
{
std::vector<std::string> perrs = command_line::get_arg(vm, arg_p2p_add_peer);
for(const std::string& pr_str: perrs)
{
nodetool::peerlist_entry pe{};
pe.id = crypto::rand<uint64_t>();
const uint16_t default_port = cryptonote::get_config(m_nettype).P2P_DEFAULT_PORT;
expect<epee::net_utils::network_address> adr = net::get_network_address(pr_str, default_port);
if (adr)
{
add_zone(adr->get_zone());
pe.adr = std::move(*adr);
m_command_line_peers.push_back(std::move(pe));
continue;
}
CHECK_AND_ASSERT_MES(
adr == net::error::unsupported_address, false, "Bad address (\"" << pr_str << "\"): " << adr.error().message()
);
std::vector<epee::net_utils::network_address> resolved_addrs;
bool r = append_net_address(resolved_addrs, pr_str, default_port);
CHECK_AND_ASSERT_MES(r, false, "Failed to parse or resolve address from string: " << pr_str);
for (const epee::net_utils::network_address& addr : resolved_addrs)
{
pe.id = crypto::rand<uint64_t>();
pe.adr = addr;
m_command_line_peers.push_back(pe);
}
}
}
if (command_line::has_arg(vm,arg_p2p_add_exclusive_node))
{
if (!parse_peers_and_add_to_container(vm, arg_p2p_add_exclusive_node, m_exclusive_peers))
return false;
}
if (command_line::has_arg(vm, arg_p2p_add_priority_node))
{
if (!parse_peers_and_add_to_container(vm, arg_p2p_add_priority_node, m_priority_peers))
return false;
}
if (command_line::has_arg(vm, arg_p2p_seed_node))
{
std::unique_lock lock{m_seed_nodes_mutex};
if (!parse_peers_and_add_to_container(vm, arg_p2p_seed_node, m_seed_nodes))
return false;
}
if(command_line::has_arg(vm, arg_p2p_hide_my_port))
m_hide_my_port = true;
if (command_line::has_arg(vm, arg_no_sync))
m_payload_handler.set_no_sync(true);
if ( !set_max_out_peers(public_zone, command_line::get_arg(vm, arg_out_peers) ) )
return false;
else
m_payload_handler.set_max_out_peers(public_zone.m_config.m_net_config.max_out_connection_count);
if ( !set_max_in_peers(public_zone, command_line::get_arg(vm, arg_in_peers) ) )
return false;
if ( !set_tos_flag(vm, command_line::get_arg(vm, arg_tos_flag) ) )
return false;
if ( !set_rate_up_limit(vm, command_line::get_arg(vm, arg_limit_rate_up) ) )
return false;
if ( !set_rate_down_limit(vm, command_line::get_arg(vm, arg_limit_rate_down) ) )
return false;
if ( !set_rate_limit(vm, command_line::get_arg(vm, arg_limit_rate) ) )
return false;
epee::shared_sv noise;
auto proxies = get_proxies(vm);
if (!proxies)
return false;
for (auto& proxy : *proxies)
{
network_zone& zone = add_zone(proxy.zone);
if (zone.m_connect != nullptr)
{
MERROR("Listed --" << arg_tx_proxy.name << " twice with " << epee::net_utils::zone_to_string(proxy.zone));
return false;
}
zone.m_connect = &socks_connect;
zone.m_proxy_address = std::move(proxy.address);
if (!set_max_out_peers(zone, proxy.max_connections))
return false;
epee::shared_sv this_noise;
if (proxy.noise)
{
static_assert(sizeof(epee::levin::bucket_head2) < CRYPTONOTE_NOISE_BYTES, "noise bytes too small");
if (noise.view.empty())
noise = epee::shared_sv{epee::levin::make_noise_notify(CRYPTONOTE_NOISE_BYTES)};
this_noise = noise;
}
zone.m_notifier = cryptonote::levin::notify{
zone.m_net_server.get_io_service(), zone.m_net_server.get_config_shared(), std::move(this_noise), false
};
}
for (const auto& zone : m_network_zones)
{
if (zone.second.m_connect == nullptr)
{
MERROR("Set outgoing peer for " << epee::net_utils::zone_to_string(zone.first) << " but did not set --" << arg_tx_proxy.name);
return false;
}
}
auto inbounds = get_anonymous_inbounds(vm);
if (!inbounds)
return false;
const std::size_t tx_relay_zones = m_network_zones.size();
for (auto& inbound : *inbounds)
{
network_zone& zone = add_zone(inbound.our_address.get_zone());
if (!zone.m_bind_ip.empty())
{
MERROR("Listed --" << arg_anonymous_inbound.name << " twice with " << epee::net_utils::zone_to_string(inbound.our_address.get_zone()) << " network");
return false;
}
if (zone.m_connect == nullptr && tx_relay_zones <= 1)
{
MERROR("Listed --" << arg_anonymous_inbound.name << " without listing any --" << arg_tx_proxy.name << ". The latter is necessary for sending local txes over anonymity networks");
return false;
}
zone.m_bind_ip = std::move(inbound.local_ip);
zone.m_port = std::move(inbound.local_port);
zone.m_net_server.set_default_remote(std::move(inbound.default_remote));
zone.m_our_address = std::move(inbound.our_address);
if (!set_max_in_peers(zone, inbound.max_connections))
return false;
}
return true;
}
//-----------------------------------------------------------------------------------
inline bool append_net_address(
std::vector<epee::net_utils::network_address> & seed_nodes
, std::string const & addr
, uint16_t default_port
)
{
using namespace boost::asio;
bool has_colon = addr.find_last_of(':') != std::string::npos;
bool has_dot = addr.find_last_of('.') != std::string::npos;
bool has_square_bracket = addr.find('[') != std::string::npos;
std::string host, port;
// An IPv6 address always contains colons. An address:port pair has a colon plus either a '.' (IPv4)
// or a '[' (IPv6), because an IPv6 address with a port must be written as "[addr:addr:...:addr]:port".
// An IPv6 address may also be given as just "[addr:addr:...:addr]" without a port; in that case the
// square brackets are stripped here.
if ((has_colon && has_dot) || has_square_bracket)
{
std::tie(host, port) = net::get_network_address_host_and_port(addr);
}
else
{
host = addr;
port = std::to_string(default_port);
}
MINFO("Resolving node address: host=" << host << ", port=" << port);
io_service io_srv;
ip::tcp::resolver resolver(io_srv);
ip::tcp::resolver::query query(host, port, boost::asio::ip::tcp::resolver::query::canonical_name);
boost::system::error_code ec;
ip::tcp::resolver::iterator i = resolver.resolve(query, ec);
CHECK_AND_ASSERT_MES(!ec, false, "Failed to resolve host name '" << host << "': " << ec.message() << ':' << ec.value());
ip::tcp::resolver::iterator iend;
for (; i != iend; ++i)
{
ip::tcp::endpoint endpoint = *i;
if (endpoint.address().is_v4())
{
epee::net_utils::network_address na{epee::net_utils::ipv4_network_address{boost::asio::detail::socket_ops::host_to_network_long(endpoint.address().to_v4().to_ulong()), endpoint.port()}};
seed_nodes.push_back(na);
MINFO("Added node: " << na.str());
}
else
{
epee::net_utils::network_address na{epee::net_utils::ipv6_network_address{endpoint.address().to_v6(), endpoint.port()}};
seed_nodes.push_back(na);
MINFO("Added node: " << na.str());
}
}
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
std::set<std::string> node_server<t_payload_net_handler>::get_seed_nodes(cryptonote::network_type nettype) const
{
std::set<std::string> full_addrs;
if (nettype == cryptonote::TESTNET)
{
full_addrs.insert("159.69.109.145:38156");
}
else if (nettype == cryptonote::DEVNET)
{
full_addrs.insert("144.76.164.202:38856");
}
else if (nettype == cryptonote::FAKECHAIN)
{
}
else
{
full_addrs.insert("116.203.196.12:22022"); // Hetzner seed node
full_addrs.insert("149.56.165.115:22022"); // Jason's seed node
full_addrs.insert("192.250.236.196:22022"); // Rangeproof Test VPSC Box
full_addrs.insert("144.217.243.15:22022"); // OVH(1)
full_addrs.insert("51.38.133.145:22022"); // OVH(2)
}
return full_addrs;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
std::set<std::string> node_server<t_payload_net_handler>::get_seed_nodes()
{
if (!m_exclusive_peers.empty() || m_offline)
{
return {};
}
if (m_nettype == cryptonote::TESTNET)
{
return get_seed_nodes(cryptonote::TESTNET);
}
if (m_nettype == cryptonote::DEVNET)
{
return get_seed_nodes(cryptonote::DEVNET);
}
}
std::set<std::string> full_addrs;
// for each hostname in the seed nodes list, attempt to DNS resolve and
// add the result addresses as seed nodes
// TODO: at some point add IPv6 support, but that won't be relevant
// for some time yet.
auto dns_results = tools::DNSResolver::instance().get_many(tools::DNS_TYPE_A, m_seed_nodes_list, ::config::DNS_TIMEOUT);
for (size_t i = 0; i < dns_results.size(); i++)
{
const auto& result = dns_results[i];
MDEBUG("DNS lookup for " << m_seed_nodes_list[i] << ": " << result.size() << " results");
// if no results for seed node then lookup failed or timed out
for (const auto& addr_string : result)
full_addrs.insert(addr_string + ":" + std::to_string(cryptonote::get_config(m_nettype).P2P_DEFAULT_PORT));
}
// append the fallback nodes if we have too few seed nodes to start with
if (full_addrs.size() < MIN_WANTED_SEED_NODES)
{
if (full_addrs.empty())
MINFO("DNS seed node lookup either timed out or failed, falling back to defaults");
else
MINFO("Not enough DNS seed nodes found, using fallback defaults too");
for (const auto &peer: get_seed_nodes(cryptonote::MAINNET))
full_addrs.insert(peer);
m_fallback_seed_nodes_added.test_and_set();
}
return full_addrs;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
typename node_server<t_payload_net_handler>::network_zone& node_server<t_payload_net_handler>::add_zone(const epee::net_utils::zone zone)
{
const auto zone_ = m_network_zones.lower_bound(zone);
if (zone_ != m_network_zones.end() && zone_->first == zone)
return zone_->second;
network_zone& public_zone = m_network_zones[epee::net_utils::zone::public_];
return m_network_zones.emplace_hint(zone_, std::piecewise_construct, std::make_tuple(zone), std::tie(public_zone.m_net_server.get_io_service()))->second;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::init(const boost::program_options::variables_map& vm)
{
bool res = handle_command_line(vm);
CHECK_AND_ASSERT_MES(res, false, "Failed to handle command line");
if (m_nettype == cryptonote::TESTNET)
{
memcpy(&m_network_id, &::config::testnet::NETWORK_ID, 16);
}
else if (m_nettype == cryptonote::DEVNET)
{
memcpy(&m_network_id, &::config::devnet::NETWORK_ID, 16);
}
else
{
memcpy(&m_network_id, &::config::NETWORK_ID, 16);
}
m_config_folder = fs::u8path(command_line::get_arg(vm, cryptonote::arg_data_dir));
network_zone& public_zone = m_network_zones.at(epee::net_utils::zone::public_);
if (public_zone.m_port != std::to_string(cryptonote::get_config(m_nettype).P2P_DEFAULT_PORT))
m_config_folder /= public_zone.m_port;
res = init_config();
CHECK_AND_ASSERT_MES(res, false, "Failed to init config.");
for (auto& zone : m_network_zones)
{
res = zone.second.m_peerlist.init(m_peerlist_storage.take_zone(zone.first), m_allow_local_ip);
CHECK_AND_ASSERT_MES(res, false, "Failed to init peerlist.");
}
for(const auto& p: m_command_line_peers)
m_network_zones.at(p.adr.get_zone()).m_peerlist.append_with_peer_white(p);
// all peers are now setup
#ifdef CRYPTONOTE_PRUNING_DEBUG_SPOOF_SEED
for (auto& zone : m_network_zones)
{
std::list<peerlist_entry> plw;
while (zone.second.m_peerlist.get_white_peers_count())
{
plw.push_back(peerlist_entry());
zone.second.m_peerlist.get_white_peer_by_index(plw.back(), 0);
zone.second.m_peerlist.remove_from_peer_white(plw.back());
}
for (auto &e:plw)
zone.second.m_peerlist.append_with_peer_white(e);
std::list<peerlist_entry> plg;
while (zone.second.m_peerlist.get_gray_peers_count())
{
plg.push_back(peerlist_entry());
zone.second.m_peerlist.get_gray_peer_by_index(plg.back(), 0);
zone.second.m_peerlist.remove_from_peer_gray(plg.back());
}
for (auto &e:plg)
zone.second.m_peerlist.append_with_peer_gray(e);
}
#endif
// only set this if we are really sure that we have an externally visible IP
m_have_address = true;
//configure self
public_zone.m_net_server.set_threads_prefix("P2P"); // all zones use these threads/asio::io_service
// from here onwards, it's online stuff
if (m_offline)
return res;
//try to bind
for (auto& zone : m_network_zones)
{
zone.second.m_net_server.get_config_object().set_handler(this);
zone.second.m_net_server.get_config_object().m_invoke_timeout = P2P_DEFAULT_INVOKE_TIMEOUT;
if (!zone.second.m_bind_ip.empty())
{
std::string ipv6_addr = "";
std::string ipv6_port = "";
zone.second.m_net_server.set_connection_filter(this);
MINFO("Binding (IPv4) on " << zone.second.m_bind_ip << ":" << zone.second.m_port);
if (!zone.second.m_bind_ipv6_address.empty() && m_use_ipv6)
{
ipv6_addr = zone.second.m_bind_ipv6_address;
ipv6_port = zone.second.m_port_ipv6;
MINFO("Binding (IPv6) on " << zone.second.m_bind_ipv6_address << ":" << zone.second.m_port_ipv6);
}
res = zone.second.m_net_server.init_server(zone.second.m_port, zone.second.m_bind_ip, ipv6_port, ipv6_addr, m_use_ipv6, m_require_ipv4);
CHECK_AND_ASSERT_MES(res, false, "Failed to bind server");
}
}
m_listening_port = public_zone.m_net_server.get_binded_port();
MLOG_GREEN(el::Level::Info, "Net service bound (IPv4) to " << public_zone.m_bind_ip << ":" << m_listening_port);
if (m_use_ipv6)
{
m_listening_port_ipv6 = public_zone.m_net_server.get_binded_port_ipv6();
MLOG_GREEN(el::Level::Info, "Net service bound (IPv6) to " << public_zone.m_bind_ipv6_address << ":" << m_listening_port_ipv6);
}
if(m_external_port)
MDEBUG("External port defined as " << m_external_port);
// add UPnP port mapping
if(m_igd == igd)
{
add_upnp_port_mapping_v4(m_listening_port);
if (m_use_ipv6)
{
add_upnp_port_mapping_v6(m_listening_port_ipv6);
}
}
return res;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
typename node_server<t_payload_net_handler>::payload_net_handler& node_server<t_payload_net_handler>::get_payload_object()
{
return m_payload_handler;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::run()
{
// creating thread to log number of connections
mPeersLoggerThread.emplace([&]()
{
MDEBUG("Thread monitor number of peers - start");
const network_zone& public_zone = m_network_zones.at(epee::net_utils::zone::public_);
while (!is_closing && !public_zone.m_net_server.is_stop_signal_sent())
{ // main loop of thread
//number_of_peers = m_net_server.get_config_object().get_connections_count();
for (auto& zone : m_network_zones)
{
unsigned int number_of_in_peers = 0;
unsigned int number_of_out_peers = 0;
zone.second.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if (cntxt.m_is_income)
{
++number_of_in_peers;
}
else
{
// If this is a new (<10s) connection and we're still in before handshake mode then
// don't count it yet: it is probably a back ping connection that will be closed soon.
if (!(cntxt.m_state == p2p_connection_context::state_before_handshake && std::chrono::steady_clock::now() < cntxt.m_started + 10s))
++number_of_out_peers;
}
return true;
}); // lambda
zone.second.m_current_number_of_in_peers = number_of_in_peers;
zone.second.m_current_number_of_out_peers = number_of_out_peers;
}
std::this_thread::sleep_for(1s);
} // main loop of thread
MDEBUG("Thread monitor number of peers - done");
}); // lambda
network_zone& public_zone = m_network_zones.at(epee::net_utils::zone::public_);
public_zone.m_net_server.add_idle_handler([this] { return idle_worker(); }, 1s);
public_zone.m_net_server.add_idle_handler([this] { return m_payload_handler.on_idle(); }, 1s);
//here you can set worker threads count
int thrds_count = 10;
//go to loop
MINFO("Run net_service loop( " << thrds_count << " threads)...");
if(!public_zone.m_net_server.run_server(thrds_count, true))
2014-03-03 23:07:58 +01:00
{
LOG_ERROR("Failed to run net tcp server!");
}
MINFO("net_service loop stopped.");
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
uint64_t node_server<t_payload_net_handler>::get_public_connections_count()
{
auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone == m_network_zones.end())
return 0;
return public_zone->second.m_net_server.get_config_object().get_connections_count();
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::deinit()
{
kill();
if (!m_offline)
{
2019-04-09 10:07:13 +02:00
for(auto& zone : m_network_zones)
zone.second.m_net_server.deinit_server();
// remove UPnP port mapping
if(m_igd == igd)
delete_upnp_port_mapping(m_listening_port);
}
return store_config();
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::store_config()
{
TRY_ENTRY();
if (!tools::create_directories_if_necessary(m_config_folder))
{
MWARNING("Failed to create data directory " << m_config_folder);
return false;
}
peerlist_types active{};
for (auto& zone : m_network_zones)
zone.second.m_peerlist.get_peerlist(active);
const auto state_file_path = m_config_folder / P2P_NET_DATA_FILENAME;
if (!m_peerlist_storage.store(state_file_path, active))
{
MWARNING("Failed to save config to file " << state_file_path);
return false;
}
CATCH_ENTRY_L0("node_server::store", false);
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::send_stop_signal()
{
MDEBUG("[node] sending stop signal");
for (auto& zone : m_network_zones)
zone.second.m_net_server.send_stop_signal();
MDEBUG("[node] Stop signal sent");
for (auto& zone : m_network_zones)
{
std::list<boost::uuids::uuid> connection_ids;
zone.second.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt) {
connection_ids.push_back(cntxt.m_connection_id);
return true;
});
for (const auto &connection_id: connection_ids)
zone.second.m_net_server.get_config_object().close(connection_id);
}
m_payload_handler.stop();
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::do_handshake_with_peer(peerid_type& pi, p2p_connection_context& context_, bool just_take_peerlist)
{
network_zone& zone = m_network_zones.at(context_.m_remote_address.get_zone());
typename COMMAND_HANDSHAKE::request arg{};
typename COMMAND_HANDSHAKE::response rsp{};
get_local_node_data(arg.node_data, zone);
m_payload_handler.get_payload_sync_data(arg.payload_data);
std::promise<void> ev;
std::atomic<bool> hsh_result(false);
bool timeout = false;
bool r = epee::net_utils::async_invoke_remote_command2<typename COMMAND_HANDSHAKE::response>(context_.m_connection_id, COMMAND_HANDSHAKE::ID, arg, zone.m_net_server.get_config_object(),
[this, &pi, &ev, &hsh_result, &just_take_peerlist, &context_, &timeout](int code, typename COMMAND_HANDSHAKE::response&& rsp, p2p_connection_context& context)
{
OXEN_DEFER { ev.set_value(); };
if(code < 0)
{
LOG_WARNING_CC(context, "COMMAND_HANDSHAKE invoke failed. (" << code << ", " << epee::levin::get_err_descr(code) << ")");
if (code == LEVIN_ERROR_CONNECTION_TIMEDOUT || code == LEVIN_ERROR_CONNECTION_DESTROYED)
timeout = true;
return;
}
if(rsp.node_data.network_id != m_network_id)
{
LOG_WARNING_CC(context, "COMMAND_HANDSHAKE Failed, wrong network! (" << rsp.node_data.network_id << "), closing connection.");
return;
}
if(!handle_remote_peerlist(rsp.local_peerlist_new, context))
{
LOG_WARNING_CC(context, "COMMAND_HANDSHAKE: failed to handle_remote_peerlist(...), closing connection.");
add_host_fail(context.m_remote_address);
return;
}
hsh_result = true;
if(!just_take_peerlist)
{
if(!m_payload_handler.process_payload_sync_data(std::move(rsp.payload_data), context, true))
{
LOG_WARNING_CC(context, "COMMAND_HANDSHAKE invoked, but process_payload_sync_data returned false, dropping connection.");
hsh_result = false;
return;
}
pi = context.peer_id = rsp.node_data.peer_id;
context.m_rpc_port = rsp.node_data.rpc_port;
network_zone& zone = m_network_zones.at(context.m_remote_address.get_zone());
zone.m_peerlist.set_peer_just_seen(rsp.node_data.peer_id, context.m_remote_address, context.m_pruning_seed, context.m_rpc_port);
// move
if(rsp.node_data.peer_id == zone.m_config.m_peer_id)
{
LOG_DEBUG_CC(context, "Connection to self detected, dropping connection");
hsh_result = false;
return;
}
LOG_INFO_CC(context, "New connection handshaked, pruning seed " << epee::string_tools::to_string_hex(context.m_pruning_seed));
LOG_DEBUG_CC(context, " COMMAND_HANDSHAKE INVOKED OK");
}else
{
LOG_DEBUG_CC(context, " COMMAND_HANDSHAKE(AND CLOSE) INVOKED OK");
}
context_ = context;
2014-03-03 23:07:58 +01:00
}, P2P_DEFAULT_HANDSHAKE_INVOKE_TIMEOUT);
if(r)
{
ev.get_future().wait();
}
if(!hsh_result)
{
LOG_WARNING_CC(context_, "COMMAND_HANDSHAKE Failed");
if (!timeout)
zone.m_net_server.get_config_object().close(context_.m_connection_id);
}
else if (!just_take_peerlist)
{
try_get_support_flags(context_, [](p2p_connection_context& flags_context, const uint32_t& support_flags)
{
flags_context.support_flags = support_flags;
});
}
2014-03-03 23:07:58 +01:00
return hsh_result;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::do_peer_timed_sync(const epee::net_utils::connection_context_base& context_, peerid_type peer_id)
{
typename COMMAND_TIMED_SYNC::request arg{};
m_payload_handler.get_payload_sync_data(arg.payload_data);
network_zone& zone = m_network_zones.at(context_.m_remote_address.get_zone());
bool r = epee::net_utils::async_invoke_remote_command2<typename COMMAND_TIMED_SYNC::response>(context_.m_connection_id, COMMAND_TIMED_SYNC::ID, arg, zone.m_net_server.get_config_object(),
[this](int code, typename COMMAND_TIMED_SYNC::response&& rsp, p2p_connection_context& context)
{
context.m_in_timedsync = false;
if(code < 0)
{
LOG_WARNING_CC(context, "COMMAND_TIMED_SYNC invoke failed. (" << code << ", " << epee::levin::get_err_descr(code) << ")");
return;
}
if(!handle_remote_peerlist(rsp.local_peerlist_new, context))
{
LOG_WARNING_CC(context, "COMMAND_TIMED_SYNC: failed to handle_remote_peerlist(...), closing connection.");
m_network_zones.at(context.m_remote_address.get_zone()).m_net_server.get_config_object().close(context.m_connection_id);
add_host_fail(context.m_remote_address);
}
if(!context.m_is_income)
m_network_zones.at(context.m_remote_address.get_zone()).m_peerlist.set_peer_just_seen(context.peer_id, context.m_remote_address, context.m_pruning_seed, context.m_rpc_port);
if (!m_payload_handler.process_payload_sync_data(std::move(rsp.payload_data), context, false))
{
m_network_zones.at(context.m_remote_address.get_zone()).m_net_server.get_config_object().close(context.m_connection_id);
}
});
if(!r)
{
LOG_WARNING_CC(context_, "COMMAND_TIMED_SYNC Failed");
return false;
}
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
size_t node_server<t_payload_net_handler>::get_random_index_with_fixed_probability(size_t max_index)
{
//divide by zero workaround
if(!max_index)
return 0;
size_t x = crypto::rand<size_t>()%(max_index+1);
size_t res = (x*x*x)/(max_index*max_index); // cubic curve, biased toward low indices
MDEBUG("Random connection index=" << res << " (x=" << x << ", max_index=" << max_index << ")");
return res;
}
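The cubic mapping above is easy to check in isolation. A minimal standalone sketch of the same selection bias, substituting std::mt19937_64 for crypto::rand (an assumption for illustration only; `biased_index` is a hypothetical name):

```cpp
#include <cstddef>
#include <random>

// Sketch of the bias in get_random_index_with_fixed_probability: draw x
// uniformly in [0, max_index], then map it through x^3 / max_index^2.
// The cubic curve concentrates results near 0, so low (recently seen)
// peerlist indices are selected far more often than high ones.
std::size_t biased_index(std::size_t max_index, std::mt19937_64& rng)
{
  if (max_index == 0)
    return 0; // divide-by-zero guard, as in the original
  std::uniform_int_distribution<std::size_t> dist(0, max_index);
  const std::size_t x = dist(rng);
  return (x * x * x) / (max_index * max_index); // always <= max_index
}
```

Since x <= max_index implies x^3 <= max_index^3, the result never exceeds max_index; roughly 79% of draws land in the lower half of the range.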
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::is_peer_used(const peerlist_entry& peer)
{
for(const auto& zone : m_network_zones)
if(zone.second.m_config.m_peer_id == peer.id)
return true; // don't make connections to ourselves
bool used = false;
for(auto& zone : m_network_zones)
{
zone.second.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if(cntxt.peer_id == peer.id || (!cntxt.m_is_income && peer.adr == cntxt.m_remote_address))
{
used = true;
return false;//stop enumerating
}
return true;
});
if(used)
return true;
}
return false;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::is_peer_used(const anchor_peerlist_entry& peer)
{
for(auto& zone : m_network_zones) {
if(zone.second.m_config.m_peer_id == peer.id) {
return true; // don't make connections to ourselves
}
bool used = false;
zone.second.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if(cntxt.peer_id == peer.id || (!cntxt.m_is_income && peer.adr == cntxt.m_remote_address))
{
used = true;
return false;//stop enumerating
}
return true;
});
if (used)
return true;
}
return false;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::is_addr_connected(const epee::net_utils::network_address& peer)
2014-03-03 23:07:58 +01:00
{
const auto zone = m_network_zones.find(peer.get_zone());
if (zone == m_network_zones.end())
return false;
bool connected = false;
zone->second.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if(!cntxt.m_is_income && peer == cntxt.m_remote_address)
{
connected = true;
return false;//stop enumerating
}
return true;
});
return connected;
}
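The foreach_connection calls above rely on a callback convention where returning false halts enumeration early. A minimal standalone sketch of that convention over a plain container (`any_match` is a hypothetical helper, not part of this codebase):

```cpp
#include <vector>

// Sketch of the early-stop enumeration pattern used with
// foreach_connection: the visitor returns false to stop iterating once a
// match is found, and the surrounding code reads a captured flag.
template <typename T, typename Pred>
bool any_match(const std::vector<T>& items, Pred pred)
{
  bool found = false;
  for (const auto& item : items)
  {
    if (pred(item))
    {
      found = true;
      break; // stop enumerating, like returning false from the visitor
    }
  }
  return found;
}
```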
#define LOG_PRINT_CC_PRIORITY_NODE(priority, con, msg) \
do { \
if (priority) {\
LOG_INFO_CC(con, "[priority]" << msg); \
} else {\
LOG_INFO_CC(con, msg); \
} \
} while(0)
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::try_to_connect_and_handshake_with_new_peer(const epee::net_utils::network_address& na, bool just_take_peerlist, uint64_t last_seen_stamp, PeerType peer_type, uint64_t first_seen_stamp)
{
network_zone& zone = m_network_zones.at(na.get_zone());
if (zone.m_connect == nullptr) // outgoing connections in zone not possible
return false;
if (zone.m_current_number_of_out_peers == zone.m_config.m_net_config.max_out_connection_count) // out peers limit
{
return false;
}
else if (zone.m_current_number_of_out_peers > zone.m_config.m_net_config.max_out_connection_count)
{
zone.m_net_server.get_config_object().del_out_connections(1);
--(zone.m_current_number_of_out_peers); // atomic variable, update time = 1s
return false;
}
MDEBUG("Connecting to " << na.str() << " (peer_type=" << peer_type << ", last_seen: "
<< (last_seen_stamp ? epee::misc_utils::get_time_interval_string(time(NULL) - last_seen_stamp):"never")
<< ")...");
auto con = zone.m_connect(zone, na);
if(!con)
{
bool is_priority = is_priority_node(na);
LOG_PRINT_CC_PRIORITY_NODE(is_priority, p2p_connection_context{}, "Connect failed to " << na.str()
/*<< ", try " << try_count*/);
record_addr_failed(na);
return false;
}
con->m_anchor = peer_type == anchor;
peerid_type pi{};
bool res = do_handshake_with_peer(pi, *con, just_take_peerlist);
if(!res)
{
bool is_priority = is_priority_node(na);
LOG_PRINT_CC_PRIORITY_NODE(is_priority, *con, "Failed to HANDSHAKE with peer "
<< na.str()
/*<< ", try " << try_count*/);
record_addr_failed(na);
return false;
}
if(just_take_peerlist)
{
zone.m_net_server.get_config_object().close(con->m_connection_id);
LOG_DEBUG_CC(*con, "CONNECTION HANDSHAKED OK AND CLOSED.");
return true;
}
peerlist_entry pe_local{};
pe_local.adr = na;
pe_local.id = pi;
time_t last_seen;
time(&last_seen);
pe_local.last_seen = static_cast<int64_t>(last_seen);
pe_local.pruning_seed = con->m_pruning_seed;
pe_local.rpc_port = con->m_rpc_port;
zone.m_peerlist.append_with_peer_white(pe_local);
//update last seen and push it to peerlist manager
anchor_peerlist_entry ape{};
ape.adr = na;
ape.id = pi;
ape.first_seen = first_seen_stamp ? first_seen_stamp : time(nullptr);
zone.m_peerlist.append_with_peer_anchor(ape);
zone.m_notifier.new_out_connection();
LOG_DEBUG_CC(*con, "CONNECTION HANDSHAKED OK.");
return true;
}
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::check_connection_and_handshake_with_peer(const epee::net_utils::network_address& na, uint64_t last_seen_stamp)
{
network_zone& zone = m_network_zones.at(na.get_zone());
if (zone.m_connect == nullptr)
return false;
LOG_PRINT_L1("Connecting to " << na.str() << " (last_seen: "
<< (last_seen_stamp ? epee::misc_utils::get_time_interval_string(time(NULL) - last_seen_stamp):"never")
<< ")...");
auto con = zone.m_connect(zone, na);
if (!con) {
bool is_priority = is_priority_node(na);
LOG_PRINT_CC_PRIORITY_NODE(is_priority, p2p_connection_context{}, "Connect failed to " << na.str());
record_addr_failed(na);
return false;
}
con->m_anchor = false;
peerid_type pi{};
const bool res = do_handshake_with_peer(pi, *con, true);
2017-01-21 00:59:04 +01:00
if (!res) {
bool is_priority = is_priority_node(na);
LOG_PRINT_CC_PRIORITY_NODE(is_priority, *con, "Failed to HANDSHAKE with peer " << na.str());
record_addr_failed(na);
return false;
}
zone.m_net_server.get_config_object().close(con->m_connection_id);
LOG_DEBUG_CC(*con, "CONNECTION HANDSHAKED OK AND CLOSED.");
return true;
}
#undef LOG_PRINT_CC_PRIORITY_NODE
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::record_addr_failed(const epee::net_utils::network_address& addr)
{
std::unique_lock lock{m_conn_fails_cache_lock};
m_conn_fails_cache[addr.host_str()] = time(NULL);
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::is_addr_recently_failed(const epee::net_utils::network_address& addr)
{
std::shared_lock lock{m_conn_fails_cache_lock};
auto it = m_conn_fails_cache.find(addr.host_str());
if(it == m_conn_fails_cache.end())
return false;
return time(NULL) - it->second <= P2P_FAILED_ADDR_FORGET_SECONDS;
}
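record_addr_failed and is_addr_recently_failed together form a TTL cache of failed hosts. A minimal standalone sketch of the same pattern; the class name and the constructor-supplied ttl are assumptions (the real code uses the P2P_FAILED_ADDR_FORGET_SECONDS constant and node_server member state):

```cpp
#include <ctime>
#include <map>
#include <mutex>
#include <shared_mutex>
#include <string>

// Sketch of the failed-address cache: a host that failed is skipped for
// ttl seconds, then forgotten. Writers take an exclusive lock, readers a
// shared one, mirroring the unique_lock/shared_lock split above.
class addr_fail_cache
{
public:
  explicit addr_fail_cache(std::time_t ttl) : m_ttl(ttl) {}

  void record(const std::string& host)
  {
    std::unique_lock lock{m_mutex};
    m_fails[host] = std::time(nullptr);
  }

  bool recently_failed(const std::string& host) const
  {
    std::shared_lock lock{m_mutex};
    const auto it = m_fails.find(host);
    return it != m_fails.end() && std::time(nullptr) - it->second <= m_ttl;
  }

private:
  std::time_t m_ttl;
  mutable std::shared_mutex m_mutex;
  std::map<std::string, std::time_t> m_fails;
};
```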
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
2017-02-09 01:11:58 +01:00
bool node_server<t_payload_net_handler>::make_new_connection_from_anchor_peerlist(const std::vector<anchor_peerlist_entry>& anchor_peerlist)
{
for (const auto& pe: anchor_peerlist) {
MDEBUG("Considering connecting (out) to anchor peer: " << peerid_to_string(pe.id) << " " << pe.adr.str());
if(is_peer_used(pe)) {
MDEBUG("Peer is used");
continue;
}
if(!is_remote_host_allowed(pe.adr)) {
continue;
}
if(is_addr_recently_failed(pe.adr)) {
continue;
}
MDEBUG("Selected peer: " << peerid_to_string(pe.id) << " " << pe.adr.str()
<< "[peer_type=" << anchor
<< "] first_seen: " << epee::misc_utils::get_time_interval_string(time(NULL) - pe.first_seen));
if(!try_to_connect_and_handshake_with_new_peer(pe.adr, false, 0, anchor, pe.first_seen)) {
MDEBUG("Handshake failed");
continue;
}
return true;
}
return false;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::make_new_connection_from_peerlist(network_zone& zone, bool use_white_list)
{
size_t max_random_index = 0;
std::set<size_t> tried_peers;
size_t try_count = 0;
size_t rand_count = 0;
while(rand_count < (max_random_index+1)*3 && try_count < 10 && !zone.m_net_server.is_stop_signal_sent())
{
++rand_count;
size_t random_index;
const uint32_t next_needed_pruning_stripe = m_payload_handler.get_next_needed_pruning_stripe().second;
// build a set of all the /16 we're connected to, and prefer a peer that's not in that set
std::set<uint32_t> classB;
if (&zone == &m_network_zones.at(epee::net_utils::zone::public_)) // at returns reference, not copy
{
zone.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if (cntxt.m_remote_address.get_type_id() == epee::net_utils::ipv4_network_address::get_type_id())
{
const epee::net_utils::network_address na = cntxt.m_remote_address;
const uint32_t actual_ip = na.as<const epee::net_utils::ipv4_network_address>().ip();
classB.insert(actual_ip & 0x0000ffff);
}
return true;
});
}
std::deque<size_t> filtered;
const size_t limit = use_white_list ? 20 : std::numeric_limits<size_t>::max();
for (int step = 0; step < 2; ++step)
{
bool skip_duplicate_class_B = step == 0 && m_nettype == cryptonote::MAINNET;
size_t idx = 0, skipped = 0;
zone.m_peerlist.foreach (use_white_list, [&classB, &filtered, &idx, &skipped, skip_duplicate_class_B, limit, next_needed_pruning_stripe](const peerlist_entry &pe){
if (filtered.size() >= limit)
return false;
bool skip = false;
if (skip_duplicate_class_B && pe.adr.get_type_id() == epee::net_utils::ipv4_network_address::get_type_id())
{
const epee::net_utils::network_address na = pe.adr;
uint32_t actual_ip = na.as<const epee::net_utils::ipv4_network_address>().ip();
skip = classB.find(actual_ip & 0x0000ffff) != classB.end();
}
if (skip)
++skipped;
else if (next_needed_pruning_stripe == 0 || pe.pruning_seed == 0)
filtered.push_back(idx);
else if (next_needed_pruning_stripe == tools::get_pruning_stripe(pe.pruning_seed))
filtered.push_front(idx);
++idx;
return true;
});
if (skipped == 0 || !filtered.empty())
break;
if (skipped)
MINFO("Skipping " << skipped << " possible peers as they share a class B with existing peers");
}
if (filtered.empty())
{
MDEBUG("No available peer in " << (use_white_list ? "white" : "gray") << " list filtered by " << next_needed_pruning_stripe);
return false;
}
if (use_white_list)
{
// if using the white list, we first pick in the set of peers we've already been using earlier
random_index = get_random_index_with_fixed_probability(std::min<uint64_t>(filtered.size() - 1, 20));
std::lock_guard lock{m_used_stripe_peers_mutex};
if (next_needed_pruning_stripe > 0 && next_needed_pruning_stripe <= (1ul << CRYPTONOTE_PRUNING_LOG_STRIPES) && !m_used_stripe_peers[next_needed_pruning_stripe-1].empty())
{
const epee::net_utils::network_address na = m_used_stripe_peers[next_needed_pruning_stripe-1].front();
m_used_stripe_peers[next_needed_pruning_stripe-1].pop_front();
for (size_t i = 0; i < filtered.size(); ++i)
{
peerlist_entry pe;
if (zone.m_peerlist.get_white_peer_by_index(pe, filtered[i]) && pe.adr == na)
{
MDEBUG("Reusing stripe " << next_needed_pruning_stripe << " peer " << pe.adr.str());
random_index = i;
break;
}
}
}
}
else
random_index = crypto::rand_idx(filtered.size());
CHECK_AND_ASSERT_MES(random_index < filtered.size(), false, "random_index < filtered.size() failed!!");
random_index = filtered[random_index];
CHECK_AND_ASSERT_MES(random_index < (use_white_list ? zone.m_peerlist.get_white_peers_count() : zone.m_peerlist.get_gray_peers_count()),
false, "random_index < peers size failed!!");
if(tried_peers.count(random_index))
continue;
tried_peers.insert(random_index);
peerlist_entry pe{};
bool r = use_white_list ? zone.m_peerlist.get_white_peer_by_index(pe, random_index):zone.m_peerlist.get_gray_peer_by_index(pe, random_index);
CHECK_AND_ASSERT_MES(r, false, "Failed to get random peer from peerlist(white:" << use_white_list << ")");
++try_count;
MDEBUG("Considering connecting (out) to " << (use_white_list ? "white" : "gray") << " list peer: " <<
peerid_to_string(pe.id) << " " << pe.adr.str() << ", pruning seed " << epee::string_tools::to_string_hex(pe.pruning_seed) <<
" (stripe " << next_needed_pruning_stripe << " needed)");
if(is_peer_used(pe)) {
MDEBUG("Peer is used");
continue;
}
if(!is_remote_host_allowed(pe.adr))
continue;
if(is_addr_recently_failed(pe.adr))
continue;
MDEBUG("Selected peer: " << peerid_to_string(pe.id) << " " << pe.adr.str()
<< ", pruning seed " << epee::string_tools::to_string_hex(pe.pruning_seed) << " "
<< "[peer_list=" << (use_white_list ? white : gray)
<< "] last_seen: " << (pe.last_seen ? epee::misc_utils::get_time_interval_string(time(NULL) - pe.last_seen) : "never"));
if(!try_to_connect_and_handshake_with_new_peer(pe.adr, false, pe.last_seen, use_white_list ? white : gray)) {
MDEBUG("Handshake failed");
continue;
}
return true;
}
return false;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::connect_to_seed()
{
if (!m_seed_nodes_initialized)
{
std::unique_lock lock{m_seed_nodes_mutex};
if (!m_seed_nodes_initialized)
{
for (const auto& full_addr : get_seed_nodes())
{
MDEBUG("Seed node: " << full_addr);
append_net_address(m_seed_nodes, full_addr, cryptonote::get_config(m_nettype).P2P_DEFAULT_PORT);
}
MDEBUG("Number of seed nodes: " << m_seed_nodes.size());
m_seed_nodes_initialized = true;
}
}
std::shared_lock shlock{m_seed_nodes_mutex};
if (m_seed_nodes.empty() || m_offline || !m_exclusive_peers.empty())
return true;
size_t try_count = 0;
bool is_connected_to_at_least_one_seed_node = false;
size_t current_index = crypto::rand_idx(m_seed_nodes.size());
const net_server& server = m_network_zones.at(epee::net_utils::zone::public_).m_net_server;
while(true)
{
if(server.is_stop_signal_sent())
return false;
peerlist_entry pe_seed{};
pe_seed.adr = m_seed_nodes[current_index];
if (is_peer_used(pe_seed))
is_connected_to_at_least_one_seed_node = true;
else if (try_to_connect_and_handshake_with_new_peer(m_seed_nodes[current_index], true))
break;
if(++try_count > m_seed_nodes.size())
{
if (!m_fallback_seed_nodes_added.test_and_set())
{
MWARNING("Failed to connect to any of seed peers, trying fallback seeds");
current_index = m_seed_nodes.size() - 1;
{
shlock.unlock();
{
std::unique_lock lock{m_seed_nodes_mutex};
for (const auto &peer: get_seed_nodes(m_nettype))
{
MDEBUG("Fallback seed node: " << peer);
append_net_address(m_seed_nodes, peer, cryptonote::get_config(m_nettype).P2P_DEFAULT_PORT);
}
}
shlock.lock();
}
if (current_index == m_seed_nodes.size() - 1)
{
MWARNING("No fallback seeds, continuing without seeds");
break;
}
// continue for another few cycles
}
else
{
if (!is_connected_to_at_least_one_seed_node)
MWARNING("Failed to connect to any of seed peers, continuing without seeds");
break;
}
}
if(++current_index >= m_seed_nodes.size())
current_index = 0;
}
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::connections_maker()
{
using zone_type = epee::net_utils::zone;
if (m_offline) return true;
if (!connect_to_peerlist(m_exclusive_peers)) return false;
if (!m_exclusive_peers.empty()) return true;
// Only have seeds in the public zone right now.
size_t start_conn_count = get_public_outgoing_connections_count();
if(!get_public_white_peers_count() && !connect_to_seed())
{
return false;
}
if (!connect_to_peerlist(m_priority_peers)) return false;
for(auto& zone : m_network_zones)
{
size_t base_expected_white_connections = (zone.second.m_config.m_net_config.max_out_connection_count*P2P_DEFAULT_WHITELIST_CONNECTIONS_PERCENT)/100;
size_t conn_count = get_outgoing_connections_count(zone.second);
while(conn_count < zone.second.m_config.m_net_config.max_out_connection_count)
{
const size_t expected_white_connections = m_payload_handler.get_next_needed_pruning_stripe().second ? zone.second.m_config.m_net_config.max_out_connection_count : base_expected_white_connections;
if(conn_count < expected_white_connections)
{
//start from anchor list
while (get_outgoing_connections_count(zone.second) < P2P_DEFAULT_ANCHOR_CONNECTIONS_COUNT
&& make_expected_connections_count(zone.second, anchor, P2P_DEFAULT_ANCHOR_CONNECTIONS_COUNT));
//then do white list
while (get_outgoing_connections_count(zone.second) < expected_white_connections
&& make_expected_connections_count(zone.second, white, expected_white_connections));
//then do grey list
while (get_outgoing_connections_count(zone.second) < zone.second.m_config.m_net_config.max_out_connection_count
&& make_expected_connections_count(zone.second, gray, zone.second.m_config.m_net_config.max_out_connection_count));
}else
{
//start from grey list
while (get_outgoing_connections_count(zone.second) < zone.second.m_config.m_net_config.max_out_connection_count
&& make_expected_connections_count(zone.second, gray, zone.second.m_config.m_net_config.max_out_connection_count));
//and then do white list
while (get_outgoing_connections_count(zone.second) < zone.second.m_config.m_net_config.max_out_connection_count
&& make_expected_connections_count(zone.second, white, zone.second.m_config.m_net_config.max_out_connection_count));
}
if(zone.second.m_net_server.is_stop_signal_sent())
return false;
size_t new_conn_count = get_outgoing_connections_count(zone.second);
if (new_conn_count <= conn_count)
{
// we did not make any connection, sleep a bit to avoid a busy loop in case we don't have
// any peers to try, then break so we will try seeds to get more peers
std::this_thread::sleep_for(1s);
break;
}
conn_count = new_conn_count;
}
}
if (start_conn_count == get_public_outgoing_connections_count() && start_conn_count < m_network_zones.at(zone_type::public_).m_config.m_net_config.max_out_connection_count)
{
MINFO("Failed to connect to any, trying seeds");
if (!connect_to_seed())
return false;
}
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::make_expected_connections_count(network_zone& zone, PeerType peer_type, size_t expected_connections)
{
if (m_offline)
return false;
std::vector<anchor_peerlist_entry> apl;
if (peer_type == anchor) {
zone.m_peerlist.get_and_empty_anchor_peerlist(apl);
}
size_t conn_count = get_outgoing_connections_count(zone);
//add new connections from white peers
if(conn_count < expected_connections)
{
if(zone.m_net_server.is_stop_signal_sent())
return false;
MDEBUG("Making expected connection, type " << peer_type << ", " << conn_count << "/" << expected_connections << " connections");
if (peer_type == anchor && !make_new_connection_from_anchor_peerlist(apl)) {
return false;
}
if (peer_type == white && !make_new_connection_from_peerlist(zone, true)) {
return false;
}
2019-04-09 10:07:13 +02:00
if (peer_type == gray && !make_new_connection_from_peerlist(zone, false)) {
Pruning
The blockchain prunes seven eighths of prunable tx data.
This saves about two thirds of the blockchain size, while
keeping the node useful as a sync source for an eighth
of the blockchain.
No other data is currently pruned.
There are three ways to prune a blockchain:
- run monerod with --prune-blockchain
- run "prune_blockchain" in the monerod console
- run the monero-blockchain-prune utility
The first two will prune in place. Due to how LMDB works, this
will not reduce the blockchain size on disk. Instead, it will
mark parts of the file as free, so that future data will use
that free space, causing the file to not grow until free space
grows scarce.
The third way will create a second database, a pruned copy of
the original one. Since this is a new file, this one will be
smaller than the original one.
Once the database is pruned, it will stay pruned as it syncs.
That is, there is no need to use --prune-blockchain again, etc.
2018-04-30 00:30:51 +02:00
return false;
2017-02-09 01:11:58 +01:00
}
2014-03-03 23:07:58 +01:00
}
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
size_t node_server<t_payload_net_handler>::get_public_outgoing_connections_count()
{
auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone == m_network_zones.end())
return 0;
return get_outgoing_connections_count(public_zone->second);
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
size_t node_server<t_payload_net_handler>::get_incoming_connections_count(network_zone& zone)
{
size_t count = 0;
zone.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if(cntxt.m_is_income)
++count;
return true;
});
return count;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
size_t node_server<t_payload_net_handler>::get_outgoing_connections_count(network_zone& zone)
{
size_t count = 0;
zone.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if(!cntxt.m_is_income)
++count;
return true;
});
return count;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
size_t node_server<t_payload_net_handler>::get_outgoing_connections_count()
{
size_t count = 0;
for(auto& zone : m_network_zones)
count += get_outgoing_connections_count(zone.second);
return count;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
size_t node_server<t_payload_net_handler>::get_incoming_connections_count()
{
size_t count = 0;
for (auto& zone : m_network_zones)
{
zone.second.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if(cntxt.m_is_income)
++count;
return true;
});
}
return count;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
size_t node_server<t_payload_net_handler>::get_public_white_peers_count()
{
auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone == m_network_zones.end())
return 0;
return public_zone->second.m_peerlist.get_white_peers_count();
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
size_t node_server<t_payload_net_handler>::get_public_gray_peers_count()
{
auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone == m_network_zones.end())
return 0;
return public_zone->second.m_peerlist.get_gray_peers_count();
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::get_public_peerlist(std::vector<peerlist_entry>& gray, std::vector<peerlist_entry>& white)
{
auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone != m_network_zones.end())
public_zone->second.m_peerlist.get_peerlist(gray, white);
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::get_peerlist(std::vector<peerlist_entry>& gray, std::vector<peerlist_entry>& white)
{
for (auto &zone: m_network_zones)
{
zone.second.m_peerlist.get_peerlist(gray, white); // appends
}
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::idle_worker()
{
m_peer_handshake_idle_maker_interval.do_call([this] { return peer_sync_idle_maker(); });
m_connections_maker_interval.do_call([this] { return connections_maker(); });
m_gray_peerlist_housekeeping_interval.do_call([this] { return gray_peerlist_housekeeping(); });
m_peerlist_store_interval.do_call([this] { return store_config(); });
m_incoming_connections_interval.do_call([this] { return check_incoming_connections(); });
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::check_incoming_connections()
{
if (m_offline)
return true;
const auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone != m_network_zones.end() && get_incoming_connections_count(public_zone->second) == 0)
{
if (m_hide_my_port || public_zone->second.m_config.m_net_config.max_in_connection_count == 0)
{
MGINFO("Incoming connections disabled, enable them for full connectivity");
}
else
{
if (m_igd == delayed_igd)
{
MWARNING("No incoming connections, trying to setup IGD");
add_upnp_port_mapping(m_listening_port);
m_igd = igd;
}
else
{
const el::Level level = el::Level::Warning;
MCLOG_RED(level, "global", "No incoming connections - check firewalls/routers allow port " << get_this_peer_port());
}
}
}
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::peer_sync_idle_maker()
{
// TODO: this sync code is rather dumb: every 60s we trigger a sync with every connected peer
// all at once, which results in a sudden spike of activity every 60s and then not much in
// between. This really should be spaced out, i.e. the 60s sync timing should apply per
// peer, not globally.
MDEBUG("STARTED PEERLIST IDLE HANDSHAKE");
typedef std::list<std::pair<epee::net_utils::connection_context_base, peerid_type> > local_connects_type;
local_connects_type cncts;
for(auto& zone : m_network_zones)
{
zone.second.m_net_server.get_config_object().foreach_connection([&](p2p_connection_context& cntxt)
{
if(cntxt.peer_id && !cntxt.m_in_timedsync)
{
cntxt.m_in_timedsync = true;
cncts.push_back(local_connects_type::value_type(cntxt, cntxt.peer_id));//do idle sync only with handshaked connections
}
return true;
});
}
std::for_each(cncts.begin(), cncts.end(), [&](const typename local_connects_type::value_type& vl){do_peer_timed_sync(vl.first, vl.second);});
MDEBUG("FINISHED PEERLIST IDLE HANDSHAKE");
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::sanitize_peerlist(std::vector<peerlist_entry>& local_peerlist)
{
for (size_t i = 0; i < local_peerlist.size(); ++i)
{
bool ignore = false;
peerlist_entry &be = local_peerlist[i];
epee::net_utils::network_address &na = be.adr;
if (na.is_loopback() || na.is_local())
{
ignore = true;
}
else if (be.adr.get_type_id() == epee::net_utils::ipv4_network_address::get_type_id())
{
const epee::net_utils::ipv4_network_address &ipv4 = na.as<const epee::net_utils::ipv4_network_address>();
if (ipv4.ip() == 0)
ignore = true;
else if (ipv4.port() == be.rpc_port)
ignore = true;
}
if (be.pruning_seed && (be.pruning_seed < tools::make_pruning_seed(1, CRYPTONOTE_PRUNING_LOG_STRIPES) || be.pruning_seed > tools::make_pruning_seed(1ul << CRYPTONOTE_PRUNING_LOG_STRIPES, CRYPTONOTE_PRUNING_LOG_STRIPES)))
ignore = true;
if (ignore)
{
MDEBUG("Ignoring " << be.adr.str());
std::swap(local_peerlist[i], local_peerlist[local_peerlist.size() - 1]);
local_peerlist.resize(local_peerlist.size() - 1);
--i;
continue;
}
local_peerlist[i].last_seen = 0;
#ifdef CRYPTONOTE_PRUNING_DEBUG_SPOOF_SEED
be.pruning_seed = tools::make_pruning_seed(1 + (be.adr.as<epee::net_utils::ipv4_network_address>().ip()) % (1ul << CRYPTONOTE_PRUNING_LOG_STRIPES), CRYPTONOTE_PRUNING_LOG_STRIPES);
#endif
}
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::handle_remote_peerlist(const std::vector<peerlist_entry>& peerlist, const epee::net_utils::connection_context_base& context)
{
std::vector<peerlist_entry> peerlist_ = peerlist;
if(!sanitize_peerlist(peerlist_))
return false;
const epee::net_utils::zone zone = context.m_remote_address.get_zone();
for(const auto& peer : peerlist_)
{
if(peer.adr.get_zone() != zone)
{
MWARNING(context << " sent peerlist from another zone, dropping");
return false;
}
}
LOG_DEBUG_CC(context, "REMOTE PEERLIST: remote peerlist size=" << peerlist_.size());
LOG_TRACE_CC(context, "REMOTE PEERLIST: \n" << print_peerlist_to_string(peerlist_));
return m_network_zones.at(context.m_remote_address.get_zone()).m_peerlist.merge_peerlist(peerlist_, [this](const peerlist_entry &pe) { return !is_addr_recently_failed(pe.adr); });
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::get_local_node_data(basic_node_data& node_data, const network_zone& zone)
{
node_data.peer_id = zone.m_config.m_peer_id;
if(!m_hide_my_port && zone.m_can_pingback)
node_data.my_port = m_external_port ? m_external_port : m_listening_port;
else
node_data.my_port = 0;
node_data.rpc_port = zone.m_can_pingback ? m_rpc_port : 0;
node_data.network_id = m_network_id;
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
int node_server<t_payload_net_handler>::handle_get_support_flags(int command, COMMAND_REQUEST_SUPPORT_FLAGS::request& arg, COMMAND_REQUEST_SUPPORT_FLAGS::response& rsp, p2p_connection_context& context)
{
rsp.support_flags = m_network_zones.at(context.m_remote_address.get_zone()).m_config.m_support_flags;
return 1;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::request_callback(const epee::net_utils::connection_context_base& context)
{
m_network_zones.at(context.m_remote_address.get_zone()).m_net_server.get_config_object().request_callback(context.m_connection_id);
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::relay_notify_to_list(int command, const epee::span<const uint8_t> data_buff, std::vector<std::pair<epee::net_utils::zone, boost::uuids::uuid>> connections)
{
std::sort(connections.begin(), connections.end());
auto zone = m_network_zones.begin();
for(const auto& c_id: connections)
{
for (;;)
{
if (zone == m_network_zones.end())
{
MWARNING("Unable to relay all messages, " << epee::net_utils::zone_to_string(c_id.first) << " not available");
return false;
}
if (c_id.first <= zone->first)
break;
++zone;
}
if (zone->first == c_id.first)
zone->second.m_net_server.get_config_object().notify(command, data_buff, c_id.second);
}
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
epee::net_utils::zone node_server<t_payload_net_handler>::send_txs(std::vector<cryptonote::blobdata> txs, const epee::net_utils::zone origin, const boost::uuids::uuid& source, const bool pad_txs)
{
namespace enet = epee::net_utils;
const auto send = [&txs, &source, pad_txs] (std::pair<const enet::zone, network_zone>& network)
{
if (network.second.m_notifier.send_txs(std::move(txs), source, (pad_txs || network.first != enet::zone::public_)))
return network.first;
return enet::zone::invalid;
};
if (m_network_zones.empty())
return enet::zone::invalid;
if (origin != enet::zone::invalid)
return send(*m_network_zones.begin()); // send all txs received via p2p over public network
if (m_network_zones.size() <= 2)
return send(*m_network_zones.rbegin()); // see static asserts below; sends over anonymity network iff enabled
/* These checks ensure that i2p has the highest priority when multiple
zones are selected. Make sure to update this logic if the enum values can
no longer be kept in this relative order. `m_network_zones` must be a
sorted map, too. */
static_assert(std::is_same<std::underlying_type<enet::zone>::type, std::uint8_t>{}, "expected uint8_t zone");
static_assert(unsigned(enet::zone::invalid) == 0, "invalid expected to be 0");
static_assert(unsigned(enet::zone::public_) == 1, "public_ expected to be 1");
static_assert(unsigned(enet::zone::i2p) == 2, "i2p expected to be 2");
static_assert(unsigned(enet::zone::tor) == 3, "tor expected to be 3");
// check for anonymity networks with noise and connections
for (auto network = ++m_network_zones.begin(); network != m_network_zones.end(); ++network)
{
if (enet::zone::tor < network->first)
break; // unknown network
const auto status = network->second.m_notifier.get_status();
if (status.has_noise && status.connections_filled)
return send(*network);
}
// use the anonymity network with outbound support
for (auto network = ++m_network_zones.begin(); network != m_network_zones.end(); ++network)
{
if (enet::zone::tor < network->first)
break; // unknown network
if (network->second.m_connect)
return send(*network);
}
// configuration should not allow this scenario
return enet::zone::invalid;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::callback(p2p_connection_context& context)
{
m_payload_handler.on_callback(context);
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::invoke_notify_to_peer(int command, const epee::span<const uint8_t> req_buff, const epee::net_utils::connection_context_base& context)
{
if(is_filtered_command(context.m_remote_address, command))
return false;
network_zone& zone = m_network_zones.at(context.m_remote_address.get_zone());
int res = zone.m_net_server.get_config_object().notify(command, req_buff, context.m_connection_id);
return res > 0;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::invoke_command_to_peer(int command, const epee::span<const uint8_t> req_buff, std::string& resp_buff, const epee::net_utils::connection_context_base& context)
{
if(is_filtered_command(context.m_remote_address, command))
return false;
network_zone& zone = m_network_zones.at(context.m_remote_address.get_zone());
int res = zone.m_net_server.get_config_object().invoke(command, req_buff, resp_buff, context.m_connection_id);
return res > 0;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::drop_connection(const epee::net_utils::connection_context_base& context)
{
m_network_zones.at(context.m_remote_address.get_zone()).m_net_server.get_config_object().close(context.m_connection_id);
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler> template<class t_callback>
bool node_server<t_payload_net_handler>::try_ping(basic_node_data& node_data, p2p_connection_context& context, const t_callback &cb)
{
if(!node_data.my_port)
return false;
bool address_ok = (context.m_remote_address.get_type_id() == epee::net_utils::ipv4_network_address::get_type_id() || context.m_remote_address.get_type_id() == epee::net_utils::ipv6_network_address::get_type_id());
CHECK_AND_ASSERT_MES(address_ok, false,
"Only IPv4 or IPv6 addresses are supported here");
const epee::net_utils::network_address na = context.m_remote_address;
std::string ip;
std::optional<uint32_t> ipv4_addr;
boost::asio::ip::address_v6 ipv6_addr;
if (na.get_type_id() == epee::net_utils::ipv4_network_address::get_type_id())
{
ipv4_addr = na.as<const epee::net_utils::ipv4_network_address>().ip();
ip = epee::string_tools::get_ip_string_from_int32(*ipv4_addr);
}
else
{
ipv6_addr = na.as<const epee::net_utils::ipv6_network_address>().ip();
ip = ipv6_addr.to_string();
}
network_zone& zone = m_network_zones.at(na.get_zone());
if(!zone.m_peerlist.is_host_allowed(context.m_remote_address))
return false;
std::string port = std::to_string(node_data.my_port);
epee::net_utils::network_address address;
if (ipv4_addr)
{
address = epee::net_utils::network_address{epee::net_utils::ipv4_network_address(*ipv4_addr, node_data.my_port)};
}
}
else
{
address = epee::net_utils::network_address{epee::net_utils::ipv6_network_address(ipv6_addr, node_data.my_port)};
}
peerid_type pr = node_data.peer_id;
bool r = zone.m_net_server.connect_async(ip, port, zone.m_config.m_net_config.ping_connection_timeout, [cb, /*context,*/ address, pr, this](
const typename net_server::t_connection_context& ping_context,
const boost::system::error_code& ec)->bool
{
if(ec)
{
LOG_WARNING_CC(ping_context, "back ping connect failed to " << address.str());
return false;
}
because it doesn't hurt to have them available all the time. In the
RPC layer this is now called "get_service_keys" (with
"get_service_node_key" as an alias) since they aren't strictly only
for service nodes. This also means code needs to check
m_service_node, and not m_service_node_keys, to tell if it is running
as a service node. (This is also easier to notice because
m_service_node_keys got renamed to `m_service_keys`).
- Added block and mempool monitoring LMQ RPC endpoints: `sub.block` and
`sub.mempool` subscribes the connection for new block and new mempool
TX notifications. The latter can notify on just blink txes, or all
new mempool txes (but only new ones -- txes dumped from a block don't
trigger it). The client gets pushed a [`notify.block`, `height`,
`hash`] or [`notify.tx`, `txhash`, `blob`] message when something
arrives.
Minor details:
- rpc::version_t is now a {major,minor} pair. Forcing everyone to pack
and unpack a uint32_t was gross.
- Changed some macros to constexprs (e.g. CORE_RPC_ERROR_CODE_...).
(This immediately revealed a couple of bugs in the RPC code that was
assigning CORE_RPC_ERROR_CODE_... to a string, and it worked because
the macro allows implicit conversion to a char).
- De-templatizing useless templates in epee (i.e. a bunch of templated
types that were never invoked with different types) revealed a painful
circular dependency between epee and non-epee code for tor_address and
i2p_address. This crap is now handled in a suitably named
`net/epee_network_address_hack.cpp` hack because it really isn't
trivial to extricate this mess.
- Removed `epee/include/serialization/serialize_base.h`. Amazingly the
code somehow still all works perfectly with this previously vital
header removed.
- Removed bitrotted, unused epee "crypted_storage" and
"gzipped_inmemstorage" code.
- Replaced a bunch of epee::misc_utils::auto_scope_leave_caller with
LOKI_DEFERs. The epee version involves quite a bit more instantiation
and is ugly as sin. Also made the `loki::defer` class invokable for
some edge cases that need calling before destruction in particular
conditions.
- Moved the systemd code around; it makes much more sense to do the
systemd started notification as in daemon.cpp as late as possible
rather than in core (when we can still have startup failures, e.g. if
the RPC layer can't start).
- Made the systemd short status string available in the get_info RPC
(and no longer require building with systemd).
- during startup, print (only) the x25519 when not in SN mode, and
continue to print all three when in SN mode.
- DRYed out some RPC implementation code (such as set_limit)
- Made wallet_rpc stop using a raw m_wallet pointer
2020-04-28 01:25:43 +02:00
COMMAND_PING::request req{};
COMMAND_PING::response rsp{};
//vc2010 workaround
/*std::string ip_ = ip;
std::string port_=port;
peerid_type pr_ = pr;
auto cb_ = cb;*/
// GCC 5.1.0 gives error with second use of uint64_t (peerid_type) variable.
peerid_type pr_ = pr;
network_zone& zone = m_network_zones.at(address.get_zone());
bool inv_call_res = epee::net_utils::async_invoke_remote_command2<COMMAND_PING::response>(ping_context.m_connection_id, COMMAND_PING::ID, req, zone.m_net_server.get_config_object(),
[=](int code, const COMMAND_PING::response& rsp, p2p_connection_context& context)
{
if(code <= 0)
{
        LOG_WARNING_CC(ping_context, "Failed to invoke COMMAND_PING to " << address.str() << " (" << code << ", " << epee::levin::get_err_descr(code) << ")");
return;
}
network_zone& zone = m_network_zones.at(address.get_zone());
if(rsp.status != PING_OK_RESPONSE_STATUS_TEXT || pr != rsp.peer_id)
{
        LOG_WARNING_CC(ping_context, "back ping invoke wrong response \"" << rsp.status << "\" from " << address.str() << ", hsh_peer_id=" << pr_ << ", rsp.peer_id=" << peerid_to_string(rsp.peer_id));
zone.m_net_server.get_config_object().close(ping_context.m_connection_id);
return;
}
zone.m_net_server.get_config_object().close(ping_context.m_connection_id);
cb();
});
if(!inv_call_res)
{
LOG_WARNING_CC(ping_context, "back ping invoke failed to " << address.str());
zone.m_net_server.get_config_object().close(ping_context.m_connection_id);
return false;
}
return true;
}, zone.m_bind_ip);
if(!r)
{
LOG_WARNING_CC(context, "Failed to call connect_async, network error.");
}
return r;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::try_get_support_flags(const p2p_connection_context& context, std::function<void(p2p_connection_context&, const uint32_t&)> f)
{
if(context.m_remote_address.get_zone() != epee::net_utils::zone::public_)
return false;
COMMAND_REQUEST_SUPPORT_FLAGS::request support_flags_request{};
bool r = epee::net_utils::async_invoke_remote_command2<typename COMMAND_REQUEST_SUPPORT_FLAGS::response>
(
context.m_connection_id,
COMMAND_REQUEST_SUPPORT_FLAGS::ID,
support_flags_request,
m_network_zones.at(epee::net_utils::zone::public_).m_net_server.get_config_object(),
[=](int code, const typename COMMAND_REQUEST_SUPPORT_FLAGS::response& rsp, p2p_connection_context& context_)
{
if(code < 0)
{
LOG_WARNING_CC(context_, "COMMAND_REQUEST_SUPPORT_FLAGS invoke failed. (" << code << ", " << epee::levin::get_err_descr(code) << ")");
return;
}
f(context_, rsp.support_flags);
},
P2P_DEFAULT_HANDSHAKE_INVOKE_TIMEOUT
);
return r;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
int node_server<t_payload_net_handler>::handle_timed_sync(int command, typename COMMAND_TIMED_SYNC::request& arg, typename COMMAND_TIMED_SYNC::response& rsp, p2p_connection_context& context)
{
if(!m_payload_handler.process_payload_sync_data(std::move(arg.payload_data), context, false))
{
LOG_WARNING_CC(context, "Failed to process_payload_sync_data(), dropping connection");
drop_connection(context);
return 1;
}
//fill response
const epee::net_utils::zone zone_type = context.m_remote_address.get_zone();
network_zone& zone = m_network_zones.at(zone_type);
std::vector<peerlist_entry> local_peerlist_new;
zone.m_peerlist.get_peerlist_head(local_peerlist_new, true, P2P_DEFAULT_PEERS_IN_HANDSHAKE);
    //only include peers we have not already sent to this connection
rsp.local_peerlist_new.reserve(local_peerlist_new.size());
for (auto &pe: local_peerlist_new)
{
if (!context.sent_addresses.insert(pe.adr).second)
continue;
rsp.local_peerlist_new.push_back(std::move(pe));
}
m_payload_handler.get_payload_sync_data(rsp.payload_data);
/* Tor/I2P nodes receiving connections via forwarding (from tor/i2p daemon)
do not know the address of the connecting peer. This is relayed to them,
iff the node has setup an inbound hidden service. The other peer will have
to use the random peer_id value to link the two. My initial thought is that
the inbound peer should leave the other side marked as `<unknown tor host>`,
etc., because someone could give faulty addresses over Tor/I2P to get the
real peer with that identity banned/blacklisted. */
if(!context.m_is_income && zone.m_our_address.get_zone() == zone_type)
rsp.local_peerlist_new.push_back(peerlist_entry{zone.m_our_address, zone.m_config.m_peer_id, std::time(nullptr)});
LOG_DEBUG_CC(context, "COMMAND_TIMED_SYNC");
return 1;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
int node_server<t_payload_net_handler>::handle_handshake(int command, typename COMMAND_HANDSHAKE::request& arg, typename COMMAND_HANDSHAKE::response& rsp, p2p_connection_context& context)
{
if(arg.node_data.network_id != m_network_id)
{
LOG_INFO_CC(context, "WRONG NETWORK AGENT CONNECTED! id=" << arg.node_data.network_id);
drop_connection(context);
add_host_fail(context.m_remote_address);
return 1;
}
if(!context.m_is_income)
{
      LOG_WARNING_CC(context, "COMMAND_HANDSHAKE came from a non-incoming connection");
drop_connection(context);
add_host_fail(context.m_remote_address);
return 1;
}
if(context.peer_id)
{
      LOG_WARNING_CC(context, "COMMAND_HANDSHAKE came, but this connection already has an associated peer_id (double COMMAND_HANDSHAKE?)");
drop_connection(context);
return 1;
}
network_zone& zone = m_network_zones.at(context.m_remote_address.get_zone());
// test only the remote end's zone, otherwise an attacker could connect to you on clearnet
// and pass in a tor connection's peer id, and deduce the two are the same if you reject it
if(arg.node_data.peer_id == zone.m_config.m_peer_id)
{
LOG_DEBUG_CC(context, "Connection to self detected, dropping connection");
drop_connection(context);
return 1;
}
if (zone.m_current_number_of_in_peers >= zone.m_config.m_net_config.max_in_connection_count) // in peers limit
{
LOG_WARNING_CC(context, "COMMAND_HANDSHAKE came, but already have max incoming connections, so dropping this one.");
drop_connection(context);
return 1;
}
if(!m_payload_handler.process_payload_sync_data(std::move(arg.payload_data), context, true))
{
LOG_WARNING_CC(context, "COMMAND_HANDSHAKE came, but process_payload_sync_data returned false, dropping connection.");
drop_connection(context);
return 1;
}
#if !defined(OXEN_ENABLE_INTEGRATION_TEST_HOOKS)
if(has_too_many_connections(context.m_remote_address))
{
LOG_PRINT_CCONTEXT_L1("CONNECTION FROM " << context.m_remote_address.host_str() << " REFUSED, too many connections from the same address");
drop_connection(context);
return 1;
}
#endif
//associate peer_id with this connection
context.peer_id = arg.node_data.peer_id;
context.m_in_timedsync = false;
context.m_rpc_port = arg.node_data.rpc_port;
if(arg.node_data.my_port && zone.m_can_pingback)
{
peerid_type peer_id_l = arg.node_data.peer_id;
uint32_t port_l = arg.node_data.my_port;
//try ping to be sure that we can add this peer to peer_list
try_ping(arg.node_data, context, [peer_id_l, port_l, context, this]()
{
CHECK_AND_ASSERT_MES((context.m_remote_address.get_type_id() == epee::net_utils::ipv4_network_address::get_type_id() || context.m_remote_address.get_type_id() == epee::net_utils::ipv6_network_address::get_type_id()), void(),
"Only IPv4 or IPv6 addresses are supported here");
        //called only(!) on a successful ping; update the local peerlist
peerlist_entry pe;
const epee::net_utils::network_address na = context.m_remote_address;
if (context.m_remote_address.get_type_id() == epee::net_utils::ipv4_network_address::get_type_id())
{
pe.adr = epee::net_utils::ipv4_network_address(na.as<epee::net_utils::ipv4_network_address>().ip(), port_l);
}
else
{
pe.adr = epee::net_utils::ipv6_network_address(na.as<epee::net_utils::ipv6_network_address>().ip(), port_l);
}
time_t last_seen;
time(&last_seen);
pe.last_seen = static_cast<int64_t>(last_seen);
pe.id = peer_id_l;
pe.pruning_seed = context.m_pruning_seed;
pe.rpc_port = context.m_rpc_port;
this->m_network_zones.at(context.m_remote_address.get_zone()).m_peerlist.append_with_peer_white(pe);
LOG_DEBUG_CC(context, "PING SUCCESS " << context.m_remote_address.host_str() << ":" << port_l);
});
}
try_get_support_flags(context, [](p2p_connection_context& flags_context, const uint32_t& support_flags)
{
flags_context.support_flags = support_flags;
});
//fill response
zone.m_peerlist.get_peerlist_head(rsp.local_peerlist_new, true);
for (const auto &e: rsp.local_peerlist_new)
context.sent_addresses.insert(e.adr);
get_local_node_data(rsp.node_data, zone);
m_payload_handler.get_payload_sync_data(rsp.payload_data);
LOG_DEBUG_CC(context, "COMMAND_HANDSHAKE");
return 1;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
int node_server<t_payload_net_handler>::handle_ping(int command, COMMAND_PING::request& arg, COMMAND_PING::response& rsp, p2p_connection_context& context)
{
LOG_DEBUG_CC(context, "COMMAND_PING");
2014-03-03 23:07:58 +01:00
rsp.status = PING_OK_RESPONSE_STATUS_TEXT;
2019-04-09 10:07:13 +02:00
rsp.peer_id = m_network_zones.at(context.m_remote_address.get_zone()).m_config.m_peer_id;
2014-03-03 23:07:58 +01:00
return 1;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::log_peerlist()
{
std::vector<peerlist_entry> pl_white;
std::vector<peerlist_entry> pl_gray;
for (auto& zone : m_network_zones)
zone.second.m_peerlist.get_peerlist(pl_gray, pl_white);
MINFO("\nPeerlist white:\n" << print_peerlist_to_string(pl_white) << "\nPeerlist gray:\n" << print_peerlist_to_string(pl_gray) );
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::log_connections()
{
MINFO("Connections: \r\n" << print_connections_container() );
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
std::string node_server<t_payload_net_handler>::print_connections_container()
{
std::stringstream ss;
for (auto& zone : m_network_zones)
{
zone.second.m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
ss << cntxt.m_remote_address.str()
<< " \t\tpeer_id " << peerid_to_string(cntxt.peer_id)
<< " \t\tconn_id " << cntxt.m_connection_id << (cntxt.m_is_income ? " INC":" OUT")
<< std::endl;
return true;
});
}
std::string s = ss.str();
return s;
}
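// The visitor-plus-stringstream pattern above can be sketched standalone. This
// is a minimal illustration only: the conn struct and foreach_connection below
// are hypothetical stand-ins for the epee connection types, not the real API.

```cpp
#include <cassert>
#include <functional>
#include <sstream>
#include <string>
#include <vector>

struct conn { std::string addr; bool is_income; };

// Mimics get_config_object().foreach_connection(): the walk stops as soon as
// the visitor returns false.
static void foreach_connection(const std::vector<conn>& conns,
                               const std::function<bool(const conn&)>& visit)
{
  for (const conn& c : conns)
    if (!visit(c))
      return;
}

static std::string print_connections(const std::vector<conn>& conns)
{
  std::stringstream ss;
  foreach_connection(conns, [&](const conn& c) {
    ss << c.addr << (c.is_income ? " INC" : " OUT") << '\n';
    return true; // keep iterating over every connection
  });
  return ss.str();
}
```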
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::on_connection_new(p2p_connection_context& context)
{
MINFO("["<< epee::net_utils::print_connection_context(context) << "] NEW CONNECTION");
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::on_connection_close(p2p_connection_context& context)
{
network_zone& zone = m_network_zones.at(context.m_remote_address.get_zone());
if (!zone.m_net_server.is_stop_signal_sent() && !context.m_is_income) {
epee::net_utils::network_address na{};
na = context.m_remote_address;
zone.m_peerlist.remove_from_peer_anchor(na);
}
m_payload_handler.on_connection_close(context);
MINFO("["<< epee::net_utils::print_connection_context(context) << "] CLOSE CONNECTION");
}
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::is_priority_node(const epee::net_utils::network_address& na)
{
return (std::find(m_priority_peers.begin(), m_priority_peers.end(), na) != m_priority_peers.end()) || (std::find(m_exclusive_peers.begin(), m_exclusive_peers.end(), na) != m_exclusive_peers.end());
}
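// The priority check above is a membership test against two lists via
// std::find. A standalone sketch, with std::string standing in for
// epee::net_utils::network_address (purely illustrative types):

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

static bool is_priority(const std::vector<std::string>& priority_peers,
                        const std::vector<std::string>& exclusive_peers,
                        const std::string& na)
{
  // A peer is "priority" if it appears in either configured list.
  return std::find(priority_peers.begin(), priority_peers.end(), na) != priority_peers.end()
      || std::find(exclusive_peers.begin(), exclusive_peers.end(), na) != exclusive_peers.end();
}
```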
template<class t_payload_net_handler> template <class Container>
bool node_server<t_payload_net_handler>::connect_to_peerlist(const Container& peers)
{
const network_zone& public_zone = m_network_zones.at(epee::net_utils::zone::public_);
for(const epee::net_utils::network_address& na: peers)
{
if(public_zone.m_net_server.is_stop_signal_sent())
return false;
if(is_addr_connected(na))
continue;
try_to_connect_and_handshake_with_new_peer(na);
}
return true;
}
template<class t_payload_net_handler> template <class Container>
bool node_server<t_payload_net_handler>::parse_peers_and_add_to_container(const boost::program_options::variables_map& vm, const command_line::arg_descriptor<std::vector<std::string> > & arg, Container& container)
{
std::vector<std::string> perrs = command_line::get_arg(vm, arg);
for(const std::string& pr_str: perrs)
{
const uint16_t default_port = cryptonote::get_config(m_nettype).P2P_DEFAULT_PORT;
expect<epee::net_utils::network_address> adr = net::get_network_address(pr_str, default_port);
if (adr)
{
add_zone(adr->get_zone());
container.push_back(std::move(*adr));
continue;
}
std::vector<epee::net_utils::network_address> resolved_addrs;
bool r = append_net_address(resolved_addrs, pr_str, default_port);
CHECK_AND_ASSERT_MES(r, false, "Failed to parse or resolve address from string: " << pr_str);
for (const epee::net_utils::network_address& addr : resolved_addrs)
{
container.push_back(addr);
}
}
return true;
}
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::set_max_out_peers(network_zone& zone, int64_t max)
{
if (max == -1)
max = P2P_DEFAULT_CONNECTIONS_COUNT_OUT;
zone.m_config.m_net_config.max_out_connection_count = max;
return true;
}
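// The -1 sentinel handling in set_max_out_peers()/set_max_in_peers() reduces
// to a small pure function. The default value passed in here is illustrative;
// the real code uses P2P_DEFAULT_CONNECTIONS_COUNT_OUT / _IN.

```cpp
#include <cassert>
#include <cstdint>

// -1 means "use the compiled-in default"; any other value is taken as-is.
static int64_t resolve_max_peers(int64_t requested, int64_t compiled_default)
{
  return requested == -1 ? compiled_default : requested;
}
```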
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::set_max_in_peers(network_zone& zone, int64_t max)
{
if (max == -1)
max = P2P_DEFAULT_CONNECTIONS_COUNT_IN;
zone.m_config.m_net_config.max_in_connection_count = max;
return true;
}
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::change_max_out_public_peers(size_t count)
{
auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone != m_network_zones.end())
{
const auto current = public_zone->second.m_net_server.get_config_object().get_out_connections_count();
public_zone->second.m_config.m_net_config.max_out_connection_count = count;
if(current > count)
public_zone->second.m_net_server.get_config_object().del_out_connections(current - count);
m_payload_handler.set_max_out_peers(count);
}
}
template<class t_payload_net_handler>
uint32_t node_server<t_payload_net_handler>::get_max_out_public_peers() const
{
const auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone == m_network_zones.end())
return 0;
return public_zone->second.m_config.m_net_config.max_out_connection_count;
}
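// get_max_out_public_peers()/get_max_in_public_peers() both use the same
// find-or-default map lookup. Sketched with std::map<int, uint32_t> as a
// hypothetical stand-in for the m_network_zones container:

```cpp
#include <cassert>
#include <cstdint>
#include <map>

// Return the stored value for zone_id, or 0 when the zone is absent.
static uint32_t max_peers_or_zero(const std::map<int, uint32_t>& zones, int zone_id)
{
  const auto it = zones.find(zone_id);
  return it == zones.end() ? 0 : it->second;
}
```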
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::change_max_in_public_peers(size_t count)
{
auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone != m_network_zones.end())
{
const auto current = public_zone->second.m_net_server.get_config_object().get_in_connections_count();
public_zone->second.m_config.m_net_config.max_in_connection_count = count;
if(current > count)
public_zone->second.m_net_server.get_config_object().del_in_connections(current - count);
}
}
template<class t_payload_net_handler>
uint32_t node_server<t_payload_net_handler>::get_max_in_public_peers() const
{
const auto public_zone = m_network_zones.find(epee::net_utils::zone::public_);
if (public_zone == m_network_zones.end())
return 0;
return public_zone->second.m_config.m_net_config.max_in_connection_count;
}
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::set_tos_flag(const boost::program_options::variables_map& vm, int flag)
{
if(flag==-1){
return true;
}
epee::net_utils::connection<epee::levin::async_protocol_handler<p2p_connection_context> >::set_tos_flag(flag);
MDEBUG("Set ToS flag " << flag);
return true;
}
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::set_rate_up_limit(const boost::program_options::variables_map& vm, int64_t limit)
{
this->islimitup=(limit != -1) && (limit != default_limit_up);
if (limit==-1) {
limit=default_limit_up;
}
epee::net_utils::connection<epee::levin::async_protocol_handler<p2p_connection_context> >::set_rate_up_limit( limit );
MINFO("Set limit-up to " << limit << " kB/s");
return true;
}
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::set_rate_down_limit(const boost::program_options::variables_map& vm, int64_t limit)
{
this->islimitdown=(limit != -1) && (limit != default_limit_down);
if(limit==-1) {
limit=default_limit_down;
}
epee::net_utils::connection<epee::levin::async_protocol_handler<p2p_connection_context> >::set_rate_down_limit( limit );
MINFO("Set limit-down to " << limit << " kB/s");
return true;
}
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::set_rate_limit(const boost::program_options::variables_map& vm, int64_t limit)
{
int64_t limit_up = 0;
int64_t limit_down = 0;
if(limit == -1)
{
limit_up = default_limit_up;
limit_down = default_limit_down;
}
else
{
limit_up = limit;
limit_down = limit;
}
if(!this->islimitup) {
epee::net_utils::connection<epee::levin::async_protocol_handler<p2p_connection_context> >::set_rate_up_limit(limit_up);
MINFO("Set limit-up to " << limit_up << " kB/s");
}
if(!this->islimitdown) {
epee::net_utils::connection<epee::levin::async_protocol_handler<p2p_connection_context> >::set_rate_down_limit(limit_down);
MINFO("Set limit-down to " << limit_down << " kB/s");
}
return true;
}
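// The resolution logic of set_rate_limit() can be expressed as a pure function
// per direction: a combined limit of -1 selects that direction's default, and a
// direction already pinned by an explicit --limit-up/--limit-down (the
// islimitup/islimitdown flags above) keeps its current rate. A sketch under
// those assumptions, with illustrative values:

```cpp
#include <cassert>
#include <cstdint>

// Returns the rate one direction should end up with after set_rate_limit().
static int64_t resolve_rate(int64_t combined, bool direction_pinned,
                            int64_t current, int64_t direction_default)
{
  if (direction_pinned)
    return current; // an explicit per-direction limit wins over the combined one
  return combined == -1 ? direction_default : combined;
}
```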
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::has_too_many_connections(const epee::net_utils::network_address &address)
{
if (address.get_zone() != epee::net_utils::zone::public_)
return false; // Unable to determine how many connections from host
const size_t max_connections = m_nettype == cryptonote::MAINNET ? 1 : 20;
size_t count = 0;
m_network_zones.at(epee::net_utils::zone::public_).m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if (cntxt.m_is_income && cntxt.m_remote_address.is_same_host(address)) {
count++;
if (count > max_connections) {
return false;
}
}
return true;
});
return count > max_connections;
}
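// The counting loop above can be sketched standalone: count incoming
// connections from the same host, abort the walk once the cap is exceeded
// (the visitor returning false stops foreach_connection), and report whether
// the cap was passed. fake_conn is a hypothetical stand-in for the p2p
// connection context.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

struct fake_conn { std::string host; bool is_income; };

static bool too_many_from_host(const std::vector<fake_conn>& conns,
                               const std::string& host, size_t max_connections)
{
  size_t count = 0;
  for (const fake_conn& c : conns)
  {
    if (c.is_income && c.host == host)
    {
      ++count;
      if (count > max_connections)
        break; // same effect as the visitor returning false
    }
  }
  return count > max_connections;
}
```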
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::gray_peerlist_housekeeping()
{
if (m_offline) return true;
if (!m_exclusive_peers.empty()) return true;
if (m_payload_handler.needs_new_sync_connections()) return true;
for (auto& zone : m_network_zones)
{
if (zone.second.m_net_server.is_stop_signal_sent())
return false;
if (zone.second.m_connect == nullptr)
continue;
peerlist_entry pe{};
if (!zone.second.m_peerlist.get_random_gray_peer(pe))
continue;
if (!check_connection_and_handshake_with_peer(pe.adr, pe.last_seen))
{
zone.second.m_peerlist.remove_from_peer_gray(pe);
LOG_PRINT_L2("PEER EVICTED FROM GRAY PEER LIST: address: " << pe.adr.host_str() << " Peer ID: " << peerid_to_string(pe.id));
}
else
{
zone.second.m_peerlist.set_peer_just_seen(pe.id, pe.adr, pe.pruning_seed, pe.rpc_port);
LOG_PRINT_L2("PEER PROMOTED TO WHITE PEER LIST IP address: " << pe.adr.host_str() << " Peer ID: " << peerid_to_string(pe.id));
}
}
return true;
}
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::add_used_stripe_peer(const typename t_payload_net_handler::connection_context &context)
{
const uint32_t stripe = tools::get_pruning_stripe(context.m_pruning_seed);
if (stripe == 0 || stripe > (1ul << CRYPTONOTE_PRUNING_LOG_STRIPES))
return;
const uint32_t index = stripe - 1;
std::lock_guard lock{m_used_stripe_peers_mutex};
MINFO("adding stripe " << stripe << " peer: " << context.m_remote_address.str());
// erase-remove: a bare std::remove_if does not shrink the vector, so feed its
// result to erase() to actually drop any existing entry before re-adding.
m_used_stripe_peers[index].erase(std::remove_if(m_used_stripe_peers[index].begin(), m_used_stripe_peers[index].end(),
[&context](const epee::net_utils::network_address &na){ return context.m_remote_address == na; }), m_used_stripe_peers[index].end());
m_used_stripe_peers[index].push_back(context.m_remote_address);
}
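// The erase-remove idiom these stripe-peer helpers rely on, standalone: a bare
// std::remove_if only shuffles elements and leaves the container's size
// unchanged, so its return value must be passed to erase() for anything to
// actually go away.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Remove every element equal to target from v.
static void erase_matching(std::vector<int>& v, int target)
{
  v.erase(std::remove_if(v.begin(), v.end(),
                         [target](int x) { return x == target; }),
          v.end());
}
```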
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::remove_used_stripe_peer(const typename t_payload_net_handler::connection_context &context)
{
const uint32_t stripe = tools::get_pruning_stripe(context.m_pruning_seed);
if (stripe == 0 || stripe > (1ul << CRYPTONOTE_PRUNING_LOG_STRIPES))
return;
const uint32_t index = stripe - 1;
std::lock_guard lock{m_used_stripe_peers_mutex};
MINFO("removing stripe " << stripe << " peer: " << context.m_remote_address.str());
// erase-remove: without the erase() call the matching peer is never actually
// removed from the list.
m_used_stripe_peers[index].erase(std::remove_if(m_used_stripe_peers[index].begin(), m_used_stripe_peers[index].end(),
[&context](const epee::net_utils::network_address &na){ return context.m_remote_address == na; }), m_used_stripe_peers[index].end());
}
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::clear_used_stripe_peers()
{
std::lock_guard lock{m_used_stripe_peers_mutex};
MINFO("clearing used stripe peers");
for (auto &e: m_used_stripe_peers)
e.clear();
}
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::add_upnp_port_mapping_impl(uint32_t port, bool ipv6) // if ipv6 false, do ipv4
{
#ifdef WITHOUT_MINIUPNPC
(void) port;
(void) ipv6;
#else
std::string ipversion = ipv6 ? "(IPv6)" : "(IPv4)";
MDEBUG("Attempting to add IGD port mapping " << ipversion << ".");
int result;
const int ipv6_arg = ipv6 ? 1 : 0;
#if MINIUPNPC_API_VERSION > 13
// default according to miniupnpc.h
unsigned char ttl = 2;
UPNPDev* deviceList = upnpDiscover(1000, NULL, NULL, 0, ipv6_arg, ttl, &result);
#else
UPNPDev* deviceList = upnpDiscover(1000, NULL, NULL, 0, ipv6_arg, &result);
#endif
UPNPUrls urls;
IGDdatas igdData;
char lanAddress[64];
result = UPNP_GetValidIGD(deviceList, &urls, &igdData, lanAddress, sizeof lanAddress);
freeUPNPDevlist(deviceList);
if (result > 0) {
if (result == 1) {
std::ostringstream portString;
portString << port;
// Delete any existing mapping for this port before creating it, in case a dangling mapping was left behind by a daemon that did not shut down cleanly
UPNP_DeletePortMapping(urls.controlURL, igdData.first.servicetype, portString.str().c_str(), "TCP", 0);
int portMappingResult = UPNP_AddPortMapping(urls.controlURL, igdData.first.servicetype, portString.str().c_str(), portString.str().c_str(), lanAddress, CRYPTONOTE_NAME, "TCP", 0, "0");
if (portMappingResult != 0) {
LOG_ERROR("UPNP_AddPortMapping failed, error: " << strupnperror(portMappingResult));
} else {
MLOG_GREEN(el::Level::Info, "Added IGD port mapping.");
}
} else if (result == 2) {
MWARNING("IGD was found but reported as not connected.");
} else if (result == 3) {
MWARNING("UPnP device was found but not recognized as IGD.");
} else {
MWARNING("UPNP_GetValidIGD returned an unknown result code.");
}
FreeUPNPUrls(&urls);
} else {
MINFO("No IGD was found.");
}
#endif
}
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::add_upnp_port_mapping_v4(uint32_t port)
{
add_upnp_port_mapping_impl(port, false);
}
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::add_upnp_port_mapping_v6(uint32_t port)
{
add_upnp_port_mapping_impl(port, true);
}
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::add_upnp_port_mapping(uint32_t port, bool ipv4, bool ipv6)
{
if (ipv4) add_upnp_port_mapping_v4(port);
if (ipv6) add_upnp_port_mapping_v6(port);
}
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::delete_upnp_port_mapping_impl(uint32_t port, bool ipv6)
{
#ifdef WITHOUT_MINIUPNPC
(void) port;
(void) ipv6;
#else
std::string ipversion = ipv6 ? "(IPv6)" : "(IPv4)";
MDEBUG("Attempting to delete IGD port mapping " << ipversion << ".");
int result;
const int ipv6_arg = ipv6 ? 1 : 0;
#if MINIUPNPC_API_VERSION > 13
// default according to miniupnpc.h
unsigned char ttl = 2;
UPNPDev* deviceList = upnpDiscover(1000, NULL, NULL, 0, ipv6_arg, ttl, &result);
#else
UPNPDev* deviceList = upnpDiscover(1000, NULL, NULL, 0, ipv6_arg, &result);
#endif
UPNPUrls urls;
IGDdatas igdData;
char lanAddress[64];
result = UPNP_GetValidIGD(deviceList, &urls, &igdData, lanAddress, sizeof lanAddress);
freeUPNPDevlist(deviceList);
if (result > 0) {
if (result == 1) {
std::ostringstream portString;
portString << port;
int portMappingResult = UPNP_DeletePortMapping(urls.controlURL, igdData.first.servicetype, portString.str().c_str(), "TCP", 0);
if (portMappingResult != 0) {
LOG_ERROR("UPNP_DeletePortMapping failed, error: " << strupnperror(portMappingResult));
} else {
MLOG_GREEN(el::Level::Info, "Deleted IGD port mapping.");
}
} else if (result == 2) {
MWARNING("IGD was found but reported as not connected.");
} else if (result == 3) {
MWARNING("UPnP device was found but not recognized as IGD.");
} else {
MWARNING("UPNP_GetValidIGD returned an unknown result code.");
}
FreeUPNPUrls(&urls);
} else {
MINFO("No IGD was found.");
}
#endif
}
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::delete_upnp_port_mapping_v4(uint32_t port)
{
delete_upnp_port_mapping_impl(port, false);
}
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::delete_upnp_port_mapping_v6(uint32_t port)
{
delete_upnp_port_mapping_impl(port, true);
}
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::delete_upnp_port_mapping(uint32_t port)
{
delete_upnp_port_mapping_v4(port);
delete_upnp_port_mapping_v6(port);
}
template<typename t_payload_net_handler>
std::optional<p2p_connection_context_t<typename t_payload_net_handler::connection_context>>
node_server<t_payload_net_handler>::socks_connect(network_zone& zone, const epee::net_utils::network_address& remote)
{
auto result = socks_connect_internal(zone.m_net_server.get_stop_signal(), zone.m_net_server.get_io_service(), zone.m_proxy_address, remote);
if (result) // if no error
{
p2p_connection_context context{};
if (zone.m_net_server.add_connection(context, std::move(*result), remote))
return {std::move(context)};
}
return std::nullopt;
}
template<typename t_payload_net_handler>
std::optional<p2p_connection_context_t<typename t_payload_net_handler::connection_context>>
node_server<t_payload_net_handler>::public_connect(network_zone& zone, epee::net_utils::network_address const& na)
{
bool is_ipv4 = na.get_type_id() == epee::net_utils::ipv4_network_address::get_type_id();
bool is_ipv6 = na.get_type_id() == epee::net_utils::ipv6_network_address::get_type_id();
CHECK_AND_ASSERT_MES(is_ipv4 || is_ipv6, std::nullopt,
"Only IPv4 or IPv6 addresses are supported here");
std::string address;
std::string port;
if (is_ipv4)
{
const epee::net_utils::ipv4_network_address &ipv4 = na.as<const epee::net_utils::ipv4_network_address>();
address = epee::string_tools::get_ip_string_from_int32(ipv4.ip());
port = std::to_string(ipv4.port());
}
else
{
const epee::net_utils::ipv6_network_address &ipv6 = na.as<const epee::net_utils::ipv6_network_address>();
address = ipv6.ip().to_string();
port = std::to_string(ipv6.port());
}
typename net_server::t_connection_context con{};
const bool res = zone.m_net_server.connect(address, port,
zone.m_config.m_net_config.connection_timeout,
con, zone.m_bind_ip);
if (res)
return {std::move(con)};
return std::nullopt;
}
}